Images and videos are some of the most compelling forms of evidence that can be presented in a courtroom. Yet it is important that the steps we take when preparing them stand up to scrutiny.
Within the field of forensic image and video analysis, one of the biggest issues we face is the CSI effect: the phenomenon whereby representations of forensic science on popular TV shows give a distorted perception of what is possible, from endless zooming into satellite imagery to enhancing the reflection of a reflection of a reflection. We very often have to explain, even to “the experts”, what is science and what is fiction.
This is further complicated by the fact that we are sometimes able to extract information from images and videos where, at first glance, absolutely nothing is visible. Yet very often we can do nothing to improve images that, to the average person, don’t look that bad.
Recently, there has been a lot of noise about every possible application of deep learning, a subset of artificial intelligence that typically exploits large amounts of data to train systems to behave, loosely speaking, like the human brain.
These technologies have been applied to image enhancement, and there are many popular studies and experiments that achieve miracle effects, almost at the level of what you see in fiction. There’s just one big problem: these kinds of systems are not simply image enhancement or restoration tools. They create new images based on a best guess, which may look plausible but can be challenged from a legal perspective, because the result differs from the data originally captured. To put it in layman’s terms, they are not enhancing pictures but creating them, based on some hints from the initial data.
The tenets of forensic science
Forensic science is the use of science for legal matters. To properly speak about a scientific examination, we have to follow the three pillars of the scientific method: accuracy, repeatability and reproducibility.
If we consider digital images and video, there are countless papers describing very interesting approaches to image enhancement that are nevertheless unsuitable for forensics. They can be very good for enhancing creative photography, but they cannot be applied to evidence without destroying its value. So, how can an algorithm fail each of the requirements mentioned above?
We cannot use algorithms that introduce bias, most often because they add new information that does not belong to the original image. This stands in contrast with proper enhancement or restoration techniques. While the two terms are often used interchangeably, there is an important difference between image enhancement and restoration.
- Image enhancement is a process that improves the visual appearance of an image by strengthening or attenuating a feature already present in it (for example, correcting the brightness).
- Image restoration is a process where we try to understand the mathematical model that describes a specific defect and, by inverting it, restore an image as close as possible to a hypothetical original without the defect (for example, correcting a blurred image or lens distortion).
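The distinction can be sketched in code. The following Python fragment is a minimal illustration only, using a made-up one-dimensional strip of 8-bit pixel values and an artificially simple degradation model, not any real forensic tool: enhancement adjusts a feature already present, while restoration inverts a known defect model.

```python
# Made-up 1-D "image" of 8-bit pixel values; real tools work on
# 2-D images, but the principle is the same.
image = [40, 80, 120, 160, 200]

def enhance_brightness(pixels, offset):
    """Enhancement: strengthen a feature already present (brightness)
    by adding a fixed offset, clipped to the valid 8-bit range."""
    return [min(255, max(0, p + offset)) for p in pixels]

def restore_attenuation(pixels, gain):
    """Restoration: assume a known defect model, e.g. the sensor
    recorded only `gain` times the true intensity
    (observed = gain * original), and invert it to estimate
    the original."""
    return [min(255, max(0, round(p / gain))) for p in pixels]

# Simulate the known defect, then invert it.
observed = [round(0.5 * p) for p in image]
restored = restore_attenuation(observed, gain=0.5)

print(enhance_brightness(image, 40))  # [80, 120, 160, 200, 240]
print(restored)                       # [40, 80, 120, 160, 200]
```

In both cases, note that the output is computed only from the pixels and a predefined rule; no information from outside the image is introduced.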
In both cases, the process generally does not add new data to the image; it relies only on what is already there, processed according to some predefined algorithm. For this reason, we will never be able to obtain a readable license plate from three white pixels. We receive this request very often, and it is what many expect, but we can only show more clearly what is already in the image or video; we cannot – and must not – add new data to the evidence.
Another category of algorithms unsuitable for forensics are those which are not repeatable, such as those based on generating a random sequence of candidate values. However, some of these algorithms give very similar (even if not identical) results in normal situations, so they may be used with a pseudo-random approach. In layman’s terms, computers are not actually able to generate random numbers, only pseudo-random sequences. If we keep the so-called “seed” fixed, we can always reproduce the same sequence and thus always obtain the same repeatable result.
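The fixed-seed idea can be sketched with Python’s standard `random` module (the function name below is made up for illustration):

```python
import random

def sample_candidates(seed, n=5):
    """Draw n pseudo-random candidate values from a generator
    initialised with a fixed seed, so that anyone who knows the
    seed can regenerate the exact same sequence."""
    rng = random.Random(seed)  # fixed seed -> repeatable sequence
    return [rng.random() for _ in range(n)]

run_1 = sample_candidates(seed=42)
run_2 = sample_candidates(seed=42)
assert run_1 == run_2  # same seed, same sequence: the result is repeatable
```

Recording the seed alongside the other processing parameters is what turns an otherwise random procedure into a repeatable one.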
Finally, the algorithms must be known and all of the involved parameters must be available. We must be able to describe the process in sufficient detail to allow a third party with the relevant skills to reproduce the same results independently. A “super-secret-proprietary” algorithm is therefore not suitable for forensic work.
Enhancing images for forensic use is not just a matter of moving a few sliders and combining filters until you see something better. Are you confident that the images you present within a legal investigation would stand up to scrutiny? And do you have procedures in place to challenge digital evidence introduced by other parties?
By Martino Jerian, CEO and Founder, Amped Software