Single-pixel perception sharpens focus in frames

A video camera that mimics the way the brains of humans and animals focus their visual attention on the most important object in a scene has been developed in Scotland.

The sensor, developed by researchers at Glasgow University, uses just one light-sensitive pixel to build up moving images of objects placed in front of it.

The device, unveiled in Science Advances, operates by prioritising important objects within the scene, while devoting less processing power to peripheral regions.

Single-pixel sensors are much cheaper than the megapixel devices found in digital cameras, and can create images at wavelengths where conventional cameras are expensive or do not exist, such as infrared or terahertz frequencies.

The sensors consist of a conventional lens and a digital micromirror device (DMD), an array of tiny, individually controllable mirrors. Each mirror can be switched on, to transmit the light from its section of the scene to the sensor, or switched off, to block it.

Chessboard-like mask patterns, in which half of the mirrors are turned on and half are turned off, are displayed on the DMD, according to Dr David Phillips, who led the research.
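The measurement principle can be sketched in a few lines: each mask pattern yields one photodetector reading (the total light it transmits), and an image is recovered by inverting the mask basis. This is a minimal illustration, not the authors' implementation; it assumes Hadamard-type masks whose ±1 entries stand for the on/off mirror halves (in practice measured differentially).

```python
import numpy as np

def hadamard(n):
    """Build an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def measure(scene, masks):
    """One detector reading per mask: total light transmitted."""
    return masks @ scene.ravel()

def reconstruct(readings, masks, shape):
    """Invert the orthogonal mask basis to recover the image."""
    n = masks.shape[0]
    return (masks.T @ readings / n).reshape(shape)

side = 4
scene = np.arange(side * side, dtype=float).reshape(side, side)
H = hadamard(side * side)      # one row per mask pattern
readings = measure(scene, H)   # 16 measurements -> 16 pixels
image = reconstruct(readings, H, scene.shape)
assert np.allclose(image, scene)
```

Note that 16 readings are needed to recover 16 pixels, which is exactly the trade-off Phillips describes below.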

But unlike previous single-pixel cameras, the new sensor can determine which parts of the scene should be higher resolution, and which should be lower, for each sequential frame, he said.

“I need to make the same number of mask pattern [light] measurements as I want pixels in my final image,” said Phillips.

So by using larger pixels at the edge of the scene, the researchers can reduce the total number of pixels in the image, he said.

“This means we can make fewer measurements of the scene, so we can get a higher frame rate, by trading the resolution in some places,” he said.
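The arithmetic behind that trade-off is simple: the measurement count equals the number of (super)pixels in the final image, so coarser peripheral cells mean fewer measurements per frame. A back-of-envelope sketch, with illustrative (assumed) sizes:

```python
# One single-pixel measurement is needed per (super)pixel, so
# coarse peripheral cells reduce the per-frame measurement count.
full_side = 64        # native DMD resolution (assumed)
fovea_side = 32       # central high-resolution region (assumed)
coarse_block = 4      # peripheral superpixel edge length (assumed)

uniform_measurements = full_side ** 2

fovea_pixels = fovea_side ** 2
peripheral_area = full_side ** 2 - fovea_side ** 2
peripheral_superpixels = peripheral_area // coarse_block ** 2
foveated_measurements = fovea_pixels + peripheral_superpixels

speedup = uniform_measurements / foveated_measurements
print(uniform_measurements, foveated_measurements, round(speedup, 2))
# 4096 uniform vs 1216 foveated measurements: ~3.4x frame rate
```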

To allow the camera to operate autonomously, the researchers have pre-loaded it with a large number of DMD grid patterns. “The camera looks at its previous images to identify changes in the scene from one image to the next,” he said.

It then selects the grid pattern that will put the high-resolution areas over those parts of the scene where something has changed, he said.

“So in this way it autonomously follows the motion in the scene.”
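The refocusing step described above can be sketched as a frame-differencing routine: compare consecutive low-resolution frames, find the block where the scene changed most, and centre the next high-resolution region there. Grid sizes and the function name are illustrative assumptions, not the published method.

```python
import numpy as np

def select_fovea(prev_frame, curr_frame, block=8):
    """Return the top-left (row, col) of the block that changed most."""
    diff = np.abs(curr_frame.astype(float) - prev_frame)
    h, w = diff.shape
    tiles = diff[:h - h % block, :w - w % block]
    tiles = tiles.reshape(h // block, block, w // block, block)
    change = tiles.sum(axis=(1, 3))              # total change per block
    r, c = np.unravel_index(np.argmax(change), change.shape)
    return int(r) * block, int(c) * block

prev = np.zeros((32, 32))
curr = prev.copy()
curr[20:24, 8:12] = 1.0                          # something moved here
print(select_fovea(prev, curr))                  # -> (16, 8)
```

The camera would then choose the pre-loaded DMD grid whose high-resolution cells cover that region.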
