Parallel convolution-based processors amplify AI


In our digital age, the exponential growth of data traffic poses serious challenges for computing power.

And the upward trend will only continue as machine learning and AI applications such as self-driving cars and speech recognition become widespread.

All of this places an immense strain on existing computer processors’ ability to keep up with demand.

Now, an international team of scientists has shed light on how the problem can be addressed.

The researchers developed a new approach and architecture based on light-based, or “photonic,” processors that combine processing and data storage on a single chip. These processors have been shown to outperform conventional electronic chips by processing information much faster and in parallel.

The researchers created a hardware accelerator for so-called matrix-vector multiplications, which are the computational backbone of neural networks, the brain-inspired algorithms used throughout machine learning. Because different wavelengths (colors) of light do not interact with each other, the researchers could use multiple wavelengths at once for parallel computations.
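As a purely illustrative sketch (in NumPy, not the authors’ photonic implementation), the operation such an accelerator speeds up is an ordinary matrix-vector product, the same one that dominates a neural-network layer; the wavelength parallelism can be loosely pictured as evaluating many independent inputs at once:

```python
import numpy as np

# A single dense neural-network layer reduces to a matrix-vector product:
# output = activation(W @ x + b). Accelerators, photonic or electronic,
# earn their keep by performing many such products quickly and in parallel.

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # weight matrix (4 outputs, 8 inputs)
x = rng.standard_normal(8)        # input vector
b = np.zeros(4)                   # bias

y = np.tanh(W @ x + b)            # one layer's worth of computation
print(y)

# Parallelism over wavelengths can be thought of (very loosely) as evaluating
# several independent input vectors simultaneously, one per "color":
X = rng.standard_normal((8, 16))  # 16 independent inputs, one column per wavelength
Y = np.tanh(W @ X + b[:, None])   # all 16 matrix-vector products in one shot
print(Y.shape)                    # (4, 16)
```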

As the light source, they used another revolutionary technology developed at EPFL: a chip-based “frequency comb.”

“Our study is the first to use frequency combs in the field of artificial neural networks,” said Professor Tobias Kippenberg of EPFL, one of the leaders of the study, whose research has pioneered the development of chip-based frequency combs. “The frequency comb provides a variety of optical wavelengths that are processed independently of one another in the same photonic chip.”

“Light-based processors for accelerating machine learning tasks allow complex mathematical operations to be processed at high speeds and throughputs,” said senior co-author Wolfram Pernice of the University of Münster, one of the professors who led the research. This is much faster than conventional chips that rely on electronic data transfer, such as graphics cards or specialized hardware like TPUs (tensor processing units).

After designing and fabricating the photonic chips, the researchers tested them on a neural network that recognizes handwritten numbers.

These biology-inspired networks, known in machine learning as convolutional neural networks, are used mainly for processing image or audio data. “The convolutional operation between input data and one or more filters, which can detect edges in an image, for example, is a good fit for our matrix architecture,” says Johannes Feldmann, now at the University of Oxford’s Department of Materials. Nathan Youngblood (University of Oxford) adds, “Exploiting wavelength division multiplexing permits higher data rates and computational densities, i.e., operations per area of the processor, that have not been achieved before.”
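A minimal sketch of why a convolution “is a good fit” for a matrix architecture: a 2-D convolution can be unrolled into a single matrix-vector multiplication. The im2col-style unrolling and the Sobel-like edge filter below are assumptions for illustration, not the paper’s exact hardware mapping:

```python
import numpy as np

# Illustration only: a 2-D convolution (cross-correlation, as in most ML
# frameworks) rewritten as one matrix-vector product, the kind of operation
# a matrix / tensor-core architecture is built to accelerate.

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])              # Sobel-style edge filter

# Unroll every 3x3 patch of the image into a row ("im2col")
patches = np.array([
    image[i:i + 3, j:j + 3].ravel()
    for i in range(3)
    for j in range(3)
])                                                  # shape (9, 9)

# The convolution then collapses into a single matrix-vector multiplication
out = patches @ kernel.ravel()
print(out.reshape(3, 3))   # same result as a direct "valid" convolution
```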

“This work is a true showcase of European collaborative research,” says David Wright of the University of Exeter, who leads the EU FunComp project that funded the work. “While each research group involved is a world leader in its own way, it was bringing all these pieces together that really made this work possible.”

The study is published this week in Nature and has wide-ranging applications: more concurrent (and lower-power) processing of data in artificial intelligence, larger neural networks for more accurate predictions and more precise data analysis, handling large volumes of clinical data for diagnostics, faster analysis of sensor data in self-driving vehicles, and expanding cloud computing infrastructure.

Reference: “Parallel convolutional processing using an integrated photonic tensor core” by J. Feldmann, N. Youngblood, M. Karpov, H. Gehring, X. Li, M. Stappers, M. Le Gallo, X. Fu, A. Lukashchuk, A. S. Raja, J. Liu, C. D. Wright, A. Sebastian, T. J. Kippenberg, W. H. P. Pernice and H. Bhaskaran, 6 January 2021, Nature. DOI: 10.1038/s41586-020-03070-1.
