So we can venture the following algorithm:
1. Find the center of the image and extract the full center row and center column of pixels.
2. Using numpy's vectorized operations, compare those pixels against the black-edge threshold.
3. The first and last positions where the comparison is True give the image boundary with the black edge removed.
import cv2
import numpy as np

threshold = 40                                  # black-edge threshold
gray = cv2.imread('./bgRemovedTEst.png')        # load image (OpenCV reads BGR)
gray = cv2.cvtColor(gray, cv2.COLOR_BGR2GRAY)   # convert to grayscale
nrow = gray.shape[0]                            # image height
ncol = gray.shape[1]                            # image width
center_col = gray[:, ncol // 2]  # pixels along the center column (one per row); assumes the black edge covers less than half the image
center_row = gray[nrow // 2, :]  # pixels along the center row (one per column)
row_idx = np.argwhere(center_col > threshold)   # indices of non-black rows
col_idx = np.argwhere(center_row > threshold)   # indices of non-black columns
top, bottom = row_idx[0, 0], row_idx[-1, 0]
left, right = col_idx[0, 0], col_idx[-1, 0]
cv2.imshow('name', gray[top:bottom + 1, left:right + 1])  # show the cropped result
cv2.waitKey()
The advantages of this approach are obvious: the logic is clear, and because it avoids Python for loops entirely, it is very efficient. The time complexity is O(n) in the image side length, and for ordinary image sizes it runs on the order of 100× faster than the traditional pixel-by-pixel scan.
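To see where that speedup comes from, here is a rough, self-contained sketch comparing a pure-Python scan of one pixel line against the vectorized np.argwhere scan used above. The synthetic image, the helper names loop_scan and vector_scan, and the border sizes are all assumptions for illustration; the actual speed ratio will vary with image size and hardware.

```python
import timeit
import numpy as np

# synthetic 1000x1000 image with a 100-pixel black border on every side
gray = np.zeros((1000, 1000), dtype=np.uint8)
gray[100:900, 100:900] = 200
threshold = 40

def loop_scan(line):
    # traditional approach: walk the pixels one by one in Python
    first = next(i for i, v in enumerate(line) if v > threshold)
    last = len(line) - 1 - next(i for i, v in enumerate(line[::-1]) if v > threshold)
    return first, last

def vector_scan(line):
    # vectorized approach: one boolean comparison, then argwhere
    flags = np.argwhere(line > threshold)
    return flags[0, 0], flags[-1, 0]

line = gray[:, 500]                      # center column
print(loop_scan(line), vector_scan(line))   # both find the (top, bottom) bounds
print('loop  :', timeit.timeit(lambda: loop_scan(line), number=1000))
print('vector:', timeit.timeit(lambda: vector_scan(line), number=1000))
```

Both functions return the same bounds; only the inner iteration differs, which is exactly the for-loop cost the vectorized version eliminates.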
However, the routine above cannot handle a black edge that covers more than half of the image, because the center row or column would then be entirely black. To deal with this, instead of always sampling at the center, you can sample rows and columns at random until you hit one that is not entirely black.
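The random-sampling fallback could be sketched as follows. The function name find_nonblack_line, the max_tries cap, and the rng parameter are my assumptions, not part of the original code; the idea is simply to keep drawing random row or column indices until one contains pixels above the threshold.

```python
import numpy as np

def find_nonblack_line(gray, axis, threshold=40, max_tries=50, rng=None):
    """Randomly sample rows (axis=0) or columns (axis=1) of a grayscale
    image until one contains pixels brighter than the black-edge threshold,
    then return the indices of those non-black pixels (as from np.argwhere).
    """
    rng = np.random.default_rng() if rng is None else rng
    size = gray.shape[axis]
    for _ in range(max_tries):
        idx = rng.integers(size)                       # random row/column index
        line = gray[idx, :] if axis == 0 else gray[:, idx]
        flags = np.argwhere(line > threshold)
        if flags.size > 0:                             # found a non-black line
            return flags
    raise ValueError('no non-black row/column found; image may be all black')
```

A sampled row gives the left/right bounds (flags[0, 0] and flags[-1, 0]) and a sampled column gives the top/bottom bounds, so the cropping step stays the same as before.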