So I often run huge double-sided scan jobs on an unintelligent Canon multifunction, which leaves me with a huge folder of JPEGs. Am I insane to consider using PIL to analyze a folder of images to detect scans of blank pages and flag them for deletion?
Leaving the folder-crawling and flagging parts out, I imagine this would look something like:
- Check whether the image is greyscale, since this can't be assumed from the scanner output.
- If so, detect the dominant range of shades (background colour).
- If not, detect the dominant range of shades, restricting to light greys.
- Determine what percentage of the entire image is composed of said shades.
- Try to find a threshold that adequately detects pages with type or writing or imagery.
- Perhaps test fragments of the image at a time to increase accuracy of threshold.
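The first few steps above can be sketched with PIL alone. This is only a rough illustration, not a tested solution: `is_blank`, the 230 background cutoff, and the 0.5% ink fraction are hypothetical names and numbers you would need to tune against real scans.

```python
from PIL import Image, ImageDraw


def is_blank(img, white_thresh=230, ink_fraction=0.005):
    """Heuristic blank-page test: convert to greyscale and measure
    the fraction of pixels darker than the assumed background."""
    grey = img.convert('L')
    hist = grey.histogram()            # 256 bins of pixel counts
    total = grey.width * grey.height
    dark = sum(hist[:white_thresh])    # pixels darker than background
    return dark / total < ink_fraction
```

Testing fragments, as in the last step, could then be a matter of calling `is_blank` on `img.crop(...)` tiles and requiring every tile to come back blank.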
I know this is sort of an edge case, but can anyone with PIL experience lend some pointers?
Here is an alternative solution, using mahotas and milk.
- Start by creating two directories, positives/ and negatives/, where you will manually pick out a few examples.
- I will assume that the rest of the data is in an unlabeled/ directory.
- Compute features for all of the images in positives and negatives
- learn a classifier
- use that classifier on the unlabeled images
In the code below I used jug to give you the possibility of running it on multiple processors, but the code also works if you remove every line that mentions TaskGenerator.
```python
from glob import glob

import mahotas
import mahotas.features
import milk
from jug import TaskGenerator


@TaskGenerator
def features_for(imname):
    # Haralick texture features, averaged over the four directions
    img = mahotas.imread(imname)
    return mahotas.features.haralick(img).mean(0)


@TaskGenerator
def learn_model(features, labels):
    learner = milk.defaultclassifier()
    return learner.train(features, labels)


@TaskGenerator
def classify(model, features):
    return model.apply(features)


positives = glob('positives/*.jpg')
negatives = glob('negatives/*.jpg')
unlabeled = glob('unlabeled/*.jpg')

# Label negatives 0 and positives 1, train, then classify the rest
features = list(map(features_for, negatives + positives))
labels = [0] * len(negatives) + [1] * len(positives)

model = learn_model(features, labels)
labeled = [classify(model, features_for(u)) for u in unlabeled]
```
This uses texture features, which is probably good enough, but you can play with other features in mahotas.features if you'd like (or try mahotas.surf, but that gets more complicated). In general, I have found it hard to do classification with the sort of hard thresholds you are looking for unless the scanning is very controlled.
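Once labeled has been computed, turning the predictions back into a list of files to flag is plain Python. A minimal sketch, assuming label 0 means blank (flag_blanks is a hypothetical helper, not part of milk or mahotas):

```python
def flag_blanks(filenames, predictions):
    # Pair each unlabeled file with its predicted label and keep
    # the ones classified as blank (label 0, i.e. a "negative").
    return [name for name, label in zip(filenames, predictions)
            if label == 0]


# e.g. flag_blanks(unlabeled, labeled) -> files to review before deleting
```

Reviewing the flagged list by hand before deleting anything is cheap insurance against the occasional faint-pencil page the classifier misses.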