Scientists are developing an AI camera capable of filming in color in total darkness


Scientists at the University of California, Irvine have developed a camera system that combines artificial intelligence (AI) with an infrared camera to capture color photos even in total darkness.

Human vision perceives light on what is called the “visible spectrum,” wavelengths between about 400 and 700 nanometers. Infrared light sits beyond 700 nanometers and is invisible to humans without the aid of special technology; many night vision systems detect infrared light and transpose it into a digital display, giving humans a monochromatic view of a scene.

The scientists set out to take this process a step further by combining that infrared data with an AI algorithm that predicts color, rendering images as they would appear if the scene were lit with visible light.

Typical night vision systems render scenes as a monochromatic green display, while newer systems use ultra-sensitive cameras to detect and amplify visible light. The scientists note that low-illumination computer vision work has used image enhancement and deep learning to detect and characterize objects from infrared data, but not to produce an accurate rendition of the same scene in the visible spectrum. They want to change that.

“We sought to develop an imaging algorithm powered by optimized deep learning architectures in which the infrared spectral illumination of a scene could be used to predict a rendering of the visible spectrum of the scene as perceived by a human with visible-spectrum light,” the scientists explain in a research paper published in PLOS ONE.

“This would digitally render a visible-spectrum scene to humans when they are otherwise in complete ‘darkness’ and only illuminated by infrared light.”

This illustration depicts the image processing goal: to predict visible spectrum images using only infrared illumination and deep learning to process NIR data.

To achieve their goal, the scientists used a monochromatic camera sensitive to visible and near-infrared light to acquire a dataset of printed images of faces under multispectral illumination spanning standard visible red (604 nm), green (529 nm), and blue (447 nm) as well as near-infrared wavelengths (718, 777, and 807 nm).

Top row: spectral reflectance over 32 channels of a printed Windows color palette, with the color photo and the merged 447, 529, and 604 nm channels. Bottom row: spectral reflectance for six selected illuminant wavelengths and the visible-spectrum color photo created by merging the 447, 529, and 604 nm channels.
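To make the idea of “merging channels” concrete, here is a minimal sketch of how three monochrome captures taken under red, green, and blue illumination could be stacked into a visible-light ground-truth image. The file names and normalization below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: build an RGB ground-truth image from three monochrome
# captures taken under 604 nm (red), 529 nm (green), and 447 nm (blue)
# illumination. File names and normalization are hypothetical; the paper's
# own preprocessing pipeline is not reproduced here.
import numpy as np
from PIL import Image

def load_channel(path: str) -> np.ndarray:
    """Load a single monochrome capture as a float array in [0, 1]."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    return img / 255.0

# One capture per illuminant wavelength (hypothetical file names).
red   = load_channel("capture_604nm.png")
green = load_channel("capture_529nm.png")
blue  = load_channel("capture_447nm.png")

# Stack the three captures into an H x W x 3 RGB image: each monochrome
# frame becomes the color channel matching its illuminant.
rgb = np.stack([red, green, blue], axis=-1)
Image.fromarray((rgb * 255).astype(np.uint8)).save("ground_truth_rgb.png")
```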

“Conventional cameras acquire blue (B), green (G), or red (R) data pixels to produce a color image perceptible to the human eye. We investigated whether a combination of illuminants in the red and near-infrared (NIR) spectrum could be processed using deep learning to recompose an image with the same appearance as if it were viewed with visible-spectrum light. We established a controlled visual context with limited pigments to test our hypothesis that DL can render scenes visible to humans using NIR illumination that is otherwise invisible to the human eye.”

The team was able to optimize a convolutional neural network to predict visible spectrum images from near-infrared information only. The study is what they describe as the first step towards predicting human vision from near-infrared illumination.

“To predict RGB color images from individual illuminations or combinations of wavelength illuminations, we evaluated the performance of the following architectures: a basic linear regression, a CNN inspired by U-Net (UNet) and an augmented U-Net with adversarial loss (UNet-GAN),” they explain.
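For readers curious what such a model looks like in practice, below is a rough sketch of a U-Net-style encoder-decoder that maps three NIR input channels (e.g. captures at 718, 777, and 807 nm) to a predicted three-channel RGB image. The layer counts, channel widths, and L1 training loss are illustrative assumptions, not the authors’ exact architecture; their UNet-GAN variant adds an adversarial loss on top of a reconstruction objective like this one.

```python
# Sketch of a small U-Net-like network mapping 3 NIR channels to 3 RGB
# channels. Architecture details are assumptions for illustration only.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, as in a typical U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self, in_ch: int = 3, out_ch: int = 3):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)   # 64 skip + 64 upsampled
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)    # 32 skip + 32 upsampled
        self.head = nn.Conv2d(32, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))  # RGB prediction in [0, 1]

# Example training step with a plain L1 reconstruction loss.
model = TinyUNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
nir_batch = torch.rand(4, 3, 128, 128)   # dummy NIR inputs
rgb_batch = torch.rand(4, 3, 128, 128)   # dummy visible-light targets
loss = nn.functional.l1_loss(model(nir_batch), rgb_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```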

Left: Visible spectrum ground-truth image composed of red, green, and blue input images. Right: Predicted reconstructions for UNet-GAN, UNet, and linear regression using three infrared input images.

“Further work may profoundly contribute to a variety of applications, including night vision and studies of visible light-sensitive biological samples,” the scientists say.

As impressive as the early results are, the AI is still incomplete. Currently, the system only works reliably on human faces.

“Human faces are, of course, a very small group of objects, if you will. That doesn’t immediately translate to coloring a general scene,” Professor Adrian Hilton, director of the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey, told New Scientist.

“As it stands, if you apply the method trained on faces to another scene, it probably wouldn’t work, it probably wouldn’t make any sense.”

Still, with more input data and more training, there’s no reason to believe the system couldn’t become even more accurate and reliable.

The research paper, entitled “Deep learning to enable color vision in the dark,” can be read in PLOS ONE.


Picture credits: Header photo licensed via Depositphotos.
