Smart cameras that can learn and understand what they see

Smart cameras may be one step closer, thanks to a research partnership between the Universities of Bristol and Manchester that has built cameras capable of learning and understanding what they see.

Researchers in robotics and artificial intelligence (AI) have recognised a problem with how existing systems sense and process the environment.

Today, sensors such as digital cameras, designed to capture images, are paired with computing devices such as graphics processing units (GPUs), originally designed to accelerate video-game graphics.

This means AI systems can only perceive the environment after sensory information has been captured and moved between the sensor and the processor.
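
To make the bottleneck concrete, the following is a minimal Python sketch of the conventional pipeline described above. The function names (capture_frame, copy_to_gpu, run_inference) are hypothetical stand-ins for illustration, not part of any real camera or GPU API.

```python
import time

def capture_frame():
    # Hypothetical sensor readout: the full image is digitised,
    # regardless of how much of it is actually relevant.
    return [[0] * 640 for _ in range(480)]

def copy_to_gpu(frame):
    # Hypothetical transfer step: every pixel crosses the
    # sensor-to-processor link before any understanding happens.
    return frame

def run_inference(gpu_frame):
    # Hypothetical stand-in for a GPU-side neural network.
    return "label"

# Conventional loop: perception only begins after capture and transfer.
for _ in range(3):
    frame = capture_frame()           # full-frame readout
    gpu_frame = copy_to_gpu(frame)    # costly data movement
    label = run_inference(gpu_frame)  # processing happens last
    time.sleep(1 / 30)                # a typical ~30 fps frame budget
```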

Yet much of what a camera sees is irrelevant to the task at hand, such as the detail of leaves on roadside trees as an autonomous car drives past. At present, however, all of this information is carefully captured by the sensor and sent through the system, clogging it with irrelevant data, consuming power and taking time to process.

A different approach is needed to allow efficient vision for smart machines.

Two papers from the partnership between Bristol and Manchester have shown how to combine recognition and learning to create novel cameras for AI systems.

Walterio Mayol-Cuevas, Professor of Robotics, Computer Vision and Mobile Systems at the University of Bristol and Principal Investigator (PI), remarks, “We need to push the boundaries beyond the avenues that have been followed so far in order to create efficient perception systems.”

“We can take inspiration from the way natural systems process the visual world – we don’t perceive everything – our eyes and our brains work together to make sense of the world, and in some cases the eyes themselves do the processing to help the brain reduce what’s not relevant.”

One example is the frog’s eye, which has detectors for flying objects right at the point where the images are sensed.

The work, with one strand led at Bristol by Dr Laurie Bose and the other by Yanan Liu, has demonstrated two advances towards this goal.

Both implement Convolutional Neural Networks (CNNs), a form of AI algorithm that enables visual understanding, directly on the image plane.

The CNNs the team developed can classify frames thousands of times per second, without ever having to record those images or send them down the processing pipeline.

The researchers demonstrated classification of handwritten digits, hand gestures and even plankton.
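
For readers unfamiliar with CNNs, here is a small PyTorch sketch showing the general shape of a convolutional classifier for 28×28 handwritten-digit images. It illustrates the class of algorithm involved, assuming a standard tensor library; the team’s actual networks run on the sensor’s pixel array rather than on a CPU or GPU.

```python
import torch
import torch.nn as nn

class TinyDigitCNN(nn.Module):
    """A small convolutional classifier for 28x28 digit images.

    Illustrative only: the Bristol/Manchester work maps CNN layers
    onto the sensor's pixel processors, not onto a tensor library.
    """
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn local edge filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a dummy batch of four grayscale images.
model = TinyDigitCNN()
logits = model(torch.randn(4, 1, 28, 28))
print(logits.argmax(dim=1))  # predicted digit class per image
```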

The results point to a future of intelligent, dedicated AI cameras – visual systems that can simply send high-level information to the rest of the system, such as the type of object or event taking place in front of the camera.

Because no images need to be recorded or transmitted, this approach would make systems far more efficient and robust.

This work was made possible by the SCAMP architecture, developed by Piotr Dudek, Professor of Circuits and Systems at the University of Manchester and PI, and his team.

SCAMP is a camera-processor chip that the team describes as a Pixel Processor Array (PPA).

A PPA has a processor embedded within every pixel; these processors can communicate with one another and process information in a truly parallel fashion.

This is perfect for vision algorithms and CNNs.
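
One way to picture the PPA programming model in ordinary code: every pixel processor executes the same instruction at the same time, using its own value and those of its immediate neighbours. The NumPy sketch below emulates that behaviour with whole-array operations; it is an analogy for the programming model only, not SCAMP’s actual instruction set.

```python
import numpy as np

# A synthetic 8-bit image standing in for light falling on the array.
image = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)

# Every "pixel processor" runs the same instruction simultaneously:
# here, a threshold computed purely from the pixel's own value.
bright = image > 128

# Neighbour communication: each pixel combines shifted copies of the
# array, mimicking register transfers between adjacent processors.
img = image.astype(np.int16)
up, down = np.roll(img, 1, axis=0), np.roll(img, -1, axis=0)
left, right = np.roll(img, 1, axis=1), np.roll(img, -1, axis=1)
edges = np.abs(4 * img - (up + down + left + right))

# Only a compact result needs to leave the "sensor", not the image.
print("bright pixels:", int(bright.sum()), "edge energy:", int(edges.sum()))
```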

“Integrating pixel-level sensing, processing and memory not only allows high-performance, low-latency systems, but also promises low-power, high-efficiency hardware,” Professor Dudek said.

“SCAMP devices can be implemented with similar footprints to current camera sensors, but with the ability to use a general-purpose, massively parallel processor directly at the point of image capture.”

Dr Tom Richardson, Senior Lecturer in Flight Mechanics at the University of Bristol and a member of the project, has been integrating the SCAMP architecture with lightweight drones.

“The exciting thing about these cameras is not only the emerging capacity for machine learning, but also the speed at which they run and the lightweight configuration,” he explained.

“For fast, extremely agile aerial platforms that can literally learn on the fly, they are absolutely ideal!”

The study, funded by the Engineering and Physical Sciences Research Council (EPSRC), has shown that it is important to question prevailing assumptions when designing AI systems.
