I'm writing code to detect the percentage of green colour in an image.

I have a little experience with OpenCV but am still pretty new to image processing and would like some help with my code. How should I change this code so that it calculates the percentage of green instead of brown? And if it isn't too much trouble, could someone please explain how the changes affect the code? Below is the link to the image I would like to use.
Credit for the code goes to @mmensing
import numpy as np
import cv2

img = cv2.imread('potato.jpg')
brown = [145, 80, 40]  # RGB
diff = 20
boundaries = [([brown[2]-diff, brown[1]-diff, brown[0]-diff],
               [brown[2]+diff, brown[1]+diff, brown[0]+diff])]

for (lower, upper) in boundaries:
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)
    mask = cv2.inRange(img, lower, upper)
    output = cv2.bitwise_and(img, img, mask=mask)
    ratio_brown = cv2.countNonZero(mask)/(img.size/3)
    print('brown pixel percentage:', np.round(ratio_brown*100, 2))
    cv2.imshow("images", np.hstack([img, output]))
    cv2.waitKey(0)
I've modified your script so you can find the (approximate) percent of green color in your test images. I've added some comments to explain the code:
# Imports
import cv2
import numpy as np

# Read image
imagePath = "D://opencvImages//"
img = cv2.imread(imagePath + "leaves.jpg")

# Here, you define your target color as
# a tuple of three values: RGB
green = [130, 158, 0]

# You define an interval that covers the values
# in the tuple and are below and above them by 20
diff = 20

# Be aware that OpenCV loads images in BGR format,
# that's why the color values have been adjusted here:
boundaries = [([green[2], green[1]-diff, green[0]-diff],
               [green[2]+diff, green[1]+diff, green[0]+diff])]

# Scale your BIG image into a small one:
scalePercent = 0.3

# Calculate the new dimensions
width = int(img.shape[1] * scalePercent)
height = int(img.shape[0] * scalePercent)
newSize = (width, height)

# Resize the image:
img = cv2.resize(img, newSize, None, None, None, cv2.INTER_AREA)

# Check out the resized image:
cv2.imshow("img resized", img)
cv2.waitKey(0)

# For each range in your boundary list:
for (lower, upper) in boundaries:
    # You get the lower and upper part of the interval:
    lower = np.array(lower, dtype=np.uint8)
    upper = np.array(upper, dtype=np.uint8)

    # cv2.inRange is used to binarize (i.e., render in white/black) an image.
    # All the pixels that fall inside your interval [lower, upper] will be white,
    # and all the pixels that do not fall inside this interval will
    # be rendered in black, for all three channels:
    mask = cv2.inRange(img, lower, upper)

    # Check out the binary mask:
    cv2.imshow("binary mask", mask)
    cv2.waitKey(0)

    # Now, you AND the mask and the input image.
    # All the pixels that are white in the mask will
    # survive the AND operation; all the black pixels
    # will remain black:
    output = cv2.bitwise_and(img, img, mask=mask)

    # Check out the ANDed mask:
    cv2.imshow("ANDed mask", output)
    cv2.waitKey(0)

    # You can use the mask to count the number of white pixels.
    # Remember that the white pixels in the mask are those that
    # fall in your defined range, that is, every white pixel corresponds
    # to a green pixel. Divide by the image size and you get the
    # percentage of green pixels in the original image:
    ratio_green = cv2.countNonZero(mask) / (img.size / 3)

    # This is the color percent calculation, considering the resize I did earlier:
    colorPercent = (ratio_green * 100) / scalePercent

    # Print the color percent, use 2 figures past the decimal point:
    print('green pixel percentage:', np.round(colorPercent, 2))

    # numpy's hstack is used to stack two images horizontally,
    # so you see the various images generated in one figure:
    cv2.imshow("images", np.hstack([img, output]))
    cv2.waitKey(0)
Output:
green pixel percentage: 89.89
I've produced some images, this is the binary mask of the green color:
And this is the result of ANDing the mask with the input image:
Some additional remarks about this snippet:
Gotta be careful loading images with OpenCV, as they are loaded in BGR format rather than the usual RGB. Here, the snippet has this covered by reversing the elements in the boundary list, but keep an eye open for this common pitfall.
Your input image was too big to even display properly using cv2.imshow. I resized it and processed that instead. At the end, you see I took this resize scale into account in the final percent calculation.
Depending on the target color you define and the difference you use, you could be producing negative values. In this case, for instance, for the R = 0 value, after subtracting diff you would get -20. That doesn't make sense when you are encoding color intensity in unsigned 8 bits; the values must be in the [0, 255] range. Watch out for negative values when using this method.
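One simple way to guard against this (a sketch, not part of the original snippet) is to clip the computed bounds into the valid uint8 range with np.clip before building the arrays:

```python
import numpy as np

green = [130, 158, 0]  # RGB target from the snippet above
diff = 20

# Build the BGR lower/upper bounds, then clip into [0, 255] so a
# component like R = 0 can't underflow to -20:
lower = np.clip([green[2] - diff, green[1] - diff, green[0] - diff], 0, 255).astype(np.uint8)
upper = np.clip([green[2] + diff, green[1] + diff, green[0] + diff], 0, 255).astype(np.uint8)

print(lower)  # [  0 138 110] -> the -20 was clamped to 0
print(upper)  # [ 20 178 150]
```

This makes the bounds safe for any target color, so you don't have to special-case channels near 0 or 255 by hand.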
Now, you may see that the method is not very robust. Depending on what you are doing, you could switch to the HSV color space to get a nicer and more accurate binary mask. You can try the HSV-based mask with this:
# The HSV mask values, defined for the green color:
lowerValues = np.array([29, 89, 70])
upperValues = np.array([179, 255, 255])

# Convert the image to HSV:
hsvImage = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Create the HSV mask:
hsvMask = cv2.inRange(hsvImage, lowerValues, upperValues)

# AND mask & input image:
hsvOutput = cv2.bitwise_and(img, img, mask=hsvMask)
Which gives you this nice masked image instead: