I have written an if-elif statement which I believe is not very efficient:
first_number = 1000
second_number = 700
switch = {'upperRight': False, 'upperLeft': False, 'lowerRight': False,
          'lowerLeft': False, 'middleLeft': False, 'middleRight': False,
          'upperMiddle': False, 'lowerMiddle': False, 'middle': False}

for i in range(first_number):
    for j in range(second_number):
        if pixel_is_black:
            if i <= int(first_number/3) and j <= int(second_number/3):
                switch['upperLeft'] = True
            elif i <= int(first_number/3) and int(second_number/3) < j <= int(2*second_number/3):
                switch['middleLeft'] = True
            elif i <= int(first_number/3) and j > int(2*second_number/3):
                switch['lowerLeft'] = True
            elif int(first_number / 3) <= i < int(2 * first_number / 3) and j < int(second_number / 3):
                switch['upperMiddle'] = True
            elif int(first_number / 3) <= i < int(2 * first_number / 3) and int(second_number / 3) < j <= int(2 * second_number / 3):
                switch['middle'] = True
            elif int(first_number / 3) <= i < int(2 * first_number / 3) and j >= int(2 * second_number / 3):
                switch['lowerMiddle'] = True
            elif i >= int(2 * first_number / 3) and j <= int(2 * second_number / 3):
                switch['upperRight'] = True
            elif i >= int(2 * first_number / 3) and int(second_number / 3) < j <= int(2 * second_number / 3):
                switch['middleRight'] = True
            elif i >= int(2 * first_number / 3) and j >= int(2 * second_number / 3):
                switch['lowerRight'] = True

for i in switch:
    if(switch[i] == True):
        print(i)
As you can see, the statement looks pretty ugly.
Since the numbers are big, it takes almost 2 seconds to execute. In the loop, I am going through the pixels of an image. In the if-elif statement, I divide the image into 9 parts, and print the respective part's name if a pixel is black in that area.
Is there any way I could lower the CPU time?
I tried this answer, but my statement conditions are different.
Thank you.
Since you're working with images and using numpy, I think the easiest thing to do would be to split the image up into blocks and check whether any pixels inside those blocks are black. For example, suppose I have an edge image where the middle doesn't have any black pixels, like this:
We can use a list comprehension to turn the image into blocks:
h, w = img.shape[:2]
bh, bw = h/3, w/3
bw_ind = [0, int(bw), 2*int(bw), w]
bh_ind = [0, int(bh), 2*int(bh), h]
blocks = [img[bh_ind[i]:bh_ind[i+1], bw_ind[j]:bw_ind[j+1]] for i in range(3) for j in range(3)]
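As a quick sanity check (on a made-up single-channel array, just to illustrate the indexing above), the nine blocks together cover every pixel exactly once:

```python
import numpy as np

# A made-up 7x8 "image" purely to check the block indexing
img = np.arange(7 * 8).reshape(7, 8)

h, w = img.shape[:2]
bh, bw = h/3, w/3
bw_ind = [0, int(bw), 2*int(bw), w]
bh_ind = [0, int(bh), 2*int(bh), h]

blocks = [img[bh_ind[i]:bh_ind[i+1], bw_ind[j]:bw_ind[j+1]]
          for i in range(3) for j in range(3)]

# Every pixel lands in exactly one of the 9 blocks
assert len(blocks) == 9
assert sum(b.size for b in blocks) == img.size
```

Note that when the dimensions don't divide evenly by 3, the last row and column of blocks simply absorb the leftover pixels, since the final indices are `h` and `w` themselves.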
Now just to make things a little simpler, we can make a list of the keys in your dictionary in the same order that the blocks are in; that way, blocks[0] would correspond to switch_list[0], which is 'upperLeft'.
switch_list = ['upperLeft', 'upperMiddle', 'upperRight',
               'middleLeft', 'middle', 'middleRight',
               'lowerLeft', 'lowerMiddle', 'lowerRight']
Then the last thing to do is find the black pixels in each block. So we want to go through the 9 blocks (using a loop) and then compare the values inside the block to whatever color we're interested in. If you had a 3-channel 8-bit image, then black is usually represented with a 0 in each channel. So for a single pixel, if it was black, then we could compare it to black with pixel == [0,0,0]. But this returns a boolean for each value:
>>> pixel == [0,0,0]
array([ True,  True,  True])
The pixel is only black when all three of these values match, so we can use .all() on the result, which will only return True if the whole array is True:
>>> (pixel == [0,0,0]).all()
True
So this is our indicator that a single pixel is black. But we need to check if any pixel is black inside our block. Let's go down to a single channel image for simplicity first. Suppose we have the array
M = np.array([[0,1], [2,3]])
If we used a logical comparison here, M == 5, we would get back an array of booleans, the same shape as M, comparing each element to 5:
>>> M == 5
array([[False, False],
       [False, False]])
In our case, we don't need to know every comparison; we just want to know if a single pixel inside the block is black, so we just want a single boolean. We can use .any() to check if any value was True inside M:
>>> (M == 5).any()
False
So we need to combine these two operations: we'll make sure that all the values match our color of interest ([0,0,0]) in order to count that pixel, and then we can see if any of our pixels returned True from that comparison inside each block:
black_pixel_in_block = [(block==[0,0,0]).all(axis=2).any() for block in blocks]
Note the axis=2 argument: .all(axis=2) will reduce the multi-channel image into a single channel of booleans, True at a pixel location if the color matched in every channel. And then we can check whether any of the pixel locations returned True. This reduces to a single boolean for each block, telling whether it contained the color. So we can set the dictionary values to True or False depending on whether or not a black pixel was found:
for i in range(len(switch_list)):
    switch[switch_list[i]] = black_pixel_in_block[i]
And finally, print the result:
>>> print(switch)
{'upperRight': True,
'upperLeft': True,
'lowerRight': True,
'lowerLeft': True,
'middleLeft': True,
'middleRight': True,
'upperMiddle': True,
'lowerMiddle': True,
'middle': False}
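Putting the pieces together, here's a runnable end-to-end sketch on a small synthetic image (the 9x9 white image with one black pixel in the top-left block is made up for illustration; your real image would come from wherever you load it):

```python
import numpy as np

# Synthetic 9x9 white 3-channel image with a single black pixel top-left
img = np.full((9, 9, 3), 255, dtype=np.uint8)
img[1, 1] = [0, 0, 0]

h, w = img.shape[:2]
bh, bw = h/3, w/3
bh_ind = [0, int(bh), 2*int(bh), h]
bw_ind = [0, int(bw), 2*int(bw), w]
blocks = [img[bh_ind[i]:bh_ind[i+1], bw_ind[j]:bw_ind[j+1]]
          for i in range(3) for j in range(3)]

switch_list = ['upperLeft', 'upperMiddle', 'upperRight',
               'middleLeft', 'middle', 'middleRight',
               'lowerLeft', 'lowerMiddle', 'lowerRight']

black_pixel_in_block = [(block == [0, 0, 0]).all(axis=2).any()
                        for block in blocks]
switch = dict(zip(switch_list, black_pixel_in_block))
print(switch)  # only 'upperLeft' is True for this synthetic image
```

Using dict(zip(...)) here is just a compact equivalent of the index loop above; either works.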
The operations alone here took ~0.1 seconds on a (2140, 2870) image.
Along the same lines, you could first create a matrix of True/False values for the whole image with .all(), then split that up into blocks, and then use .any() inside the blocks. This would be better for memory, since you're storing 9 (h, w) blocks instead of 9 (h, w, depth) blocks.
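A sketch of that mask-first variant (reusing the same made-up synthetic image; this is one way to write it, not the only one):

```python
import numpy as np

img = np.full((9, 9, 3), 255, dtype=np.uint8)
img[1, 1] = [0, 0, 0]

h, w = img.shape[:2]

# Reduce to a single (h, w) boolean mask first: True where the pixel is black
mask = (img == [0, 0, 0]).all(axis=2)

bh_ind = [0, int(h/3), 2*int(h/3), h]
bw_ind = [0, int(w/3), 2*int(w/3), w]

# Each slice is now (h, w) booleans rather than (h, w, 3) pixel values
black_pixel_in_block = [mask[bh_ind[i]:bh_ind[i+1], bw_ind[j]:bw_ind[j+1]].any()
                        for i in range(3) for j in range(3)]
print(black_pixel_in_block)  # only the first (upperLeft) entry is True here
```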