How can I detect upscaled photos?

by Damian Yerrick   Last Updated June 17, 2017 14:18 PM

I have a collection of JPEG photos, each 500 to 600 pixels on the longest side. How can I detect which ones have been algorithmically enlarged from a substantially smaller photo?

An online marketplace requires each seller to upload photos of products that it sells, and these photos must be at least 500 pixels wide or 500 pixels tall because product photos with little detail cause a poor experience for buyers. I can already tell if a seller is trying to circumvent this requirement by adding a solid-color border, such as extending the standard white background with more white. But lately, sellers have started to circumvent this by upscaling old photos taken before the 500-pixel requirement was published. What is a good way to determine whether photos have been enlarged with nearest-neighbor, bilinear, or bicubic interpolation?

3 Answers

I do not think this is possible in the general sense. There are many possible upscaling algorithms, each with a signature that may be difficult to detect unambiguously without knowledge of the image content (as an extreme example, an upscaled area of uniform colour is still uniform colour).

One option would be to calculate a metric for image complexity, such as an entropy estimate.

If you do this over a large number of images, you can generate statistics for the whole collection. You could then manually review images that are outliers in those statistics.
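As a minimal sketch of this idea in Python (assuming Pillow and NumPy are available; the function names and the outlier threshold are my own illustrative choices, not part of the original answer):

```python
import numpy as np
from PIL import Image

def image_entropy(path_or_file):
    """Shannon entropy (bits per pixel) of the grayscale histogram."""
    img = Image.open(path_or_file).convert("L")
    hist = np.bincount(np.asarray(img).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

def flag_outliers(entropies, k=2.0):
    """Indices of images more than k standard deviations below the mean."""
    e = np.asarray(entropies, dtype=float)
    return np.where(e < e.mean() - k * e.std())[0]
```

You would compute `image_entropy` for every image in the collection, then manually review the indices returned by `flag_outliers`. Histogram entropy is only one possible complexity metric; edge density or JPEG file size per pixel would work on the same principle.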

Unfortunately, this is always going to result in false positives, and images that have been upscaled well may not be caught (but if they are good, does it matter?).

Mark Moore
February 12, 2015 20:47 PM

Have a DOG sniff out blur in the photos.

If you're going to be penalizing for digitally enlarged photos, you might as well penalize for out-of-focus photos too. The blurred edges and details in both cause the same bad experience for viewers, regardless of whether it is caused by a small original or poor focus. What you want to do is detect blur, which is an absence of high spatial frequencies.

Try taking the difference between an image and a blurred copy of itself. If an image is already blurry, a 1-pixel Gaussian blur isn't going to change the image as much as if the image were sharp. So there will be more difference between a sharp image and a blurred version than there is between a blurry image and a further blurred version. In computer vision, this technique is called the "difference of Gaussians" (DOG).

  1. Open the image in GIMP or another layered photo editor.
  2. Duplicate the layer. (In GIMP: Layers > Duplicate Layer)
  3. Apply a Gaussian Blur with a radius of 1 pixel to this new layer.
  4. Change the layer mode to "Difference". The image will go black except for the edges.
  5. Repeat steps 1-4 for a known sharp image of similar subject matter, composition, and size.
  6. Compare the intensity of the edges in the two difference images. You can eyeball this or use a histogram.

I just tried this on a 400x480 pixel photo and on the same thing that had been reduced to 200x240 (50%) and then enlarged back to 400x480 (200%), and the edges in the upscaled photo were quite noticeably fainter. It won't be conclusive on a mild enlargement such as 140%, but it will catch blatant cases. Recent versions of GIMP include a DOG macro that automates steps 2 through 4: Filters > Edge-Detect > Difference of Gaussians, then set the radii to 1.0 and 0.0.
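The GIMP steps above can be sketched in Python as well, using Pillow's `GaussianBlur` and `ImageChops.difference` (the function name and the idea of returning a mean intensity are my own illustrative choices; as in the manual procedure, the score is only meaningful relative to a known sharp image of similar subject matter):

```python
import numpy as np
from PIL import Image, ImageChops, ImageFilter

def dog_edge_strength(img):
    """Mean intensity of the difference between an image and a
    1-pixel Gaussian blur of itself. Higher values mean stronger
    (sharper) edges; compare against a known sharp reference."""
    gray = img.convert("L")
    blurred = gray.filter(ImageFilter.GaussianBlur(radius=1))
    diff = ImageChops.difference(gray, blurred)
    return float(np.asarray(diff).mean())
```

Usage: `dog_edge_strength(Image.open("suspect.jpg"))` versus `dog_edge_strength(Image.open("known_sharp.jpg"))`; a markedly lower score for the suspect image suggests blur or upscaling.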

DOG won't catch nearest neighbor, but you can do that by looking for a pattern of rows and columns that are identical to their immediate neighbor toward the top or left.

  1. Open the image.
  2. Duplicate the layer.
  3. Offset the new layer one pixel up or to the left.
  4. Change the layer mode to "Difference".
  5. Look for a pattern of blank lines.
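The offset-and-difference check can be sketched in NumPy by counting rows and columns that exactly equal their neighbor toward the top or left (the function name is my own; a more robust detector would also check that the duplicates fall in a regular pattern rather than just counting them):

```python
import numpy as np
from PIL import Image

def duplicate_line_fraction(img):
    """Fractions of rows and columns identical to the previous
    row/column. Near 0.5 suggests a 2x nearest-neighbor upscale;
    near 0 is normal for a photo."""
    a = np.asarray(img.convert("L"))
    rows = np.mean(np.all(a[1:] == a[:-1], axis=1))
    cols = np.mean(np.all(a[:, 1:] == a[:, :-1], axis=0))
    return float(rows), float(cols)
```

Note that JPEG compression after the upscale can break exact equality, so in practice you may need to compare with a small tolerance rather than exact matches.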
Damian Yerrick
February 13, 2015 01:23 AM

Actually, you can

You don't need a dog to sniff the picture. Go to:

On this page you can upload your image, and you will get an estimate of the original dimensions, like this:

  {
    "is_upscaled": true,
    "current_width": "2000",
    "current_height": "928",
    "original_width": "1750",
    "original_height": "696",
    "accuracy": "82%",
    "success": 1
  }

Sometimes it doesn't guess the original resolution correctly; I think it depends on what upscaling algorithm was used on the photo. I also discovered that if a photo was upscaled and then saved as a JPEG with heavy compression (quality around 30%), the JPEG artifacts make it harder for this page to guess. But if your photos are good quality and were upscaled using popular methods (Lanczos, bilinear), it should be quite accurate.

June 17, 2017 13:30 PM
