
    Lightning-fast optical deep learning machine

    It takes a smart home device a few seconds to respond when you ask it for the weather forecast. One factor behind this latency is that connected devices lack the memory and processing power needed to store and run the machine-learning models the device uses to understand what a user is asking of it. Instead, the model is stored in a data center that may be hundreds of miles away, where the answer is computed and sent back to the device.

    Researchers at MIT have developed a new technique for performing computations directly on these devices that drastically reduces this latency. Their method shifts the memory-intensive steps of running a machine-learning model to a central server, where components of the model are encoded onto light waves.

    The waves are transmitted to a connected device over optical fiber, which lets huge amounts of data travel across a network at high speed. The receiver then uses a simple optical device to rapidly perform computations using the model components carried by those light waves.

    This technique improves energy efficiency more than a hundredfold compared with other approaches. It could also improve security, since a user's data would not need to be transferred to a central location for computation.

    This method could enable a self-driving car to make decisions in real time while using only a tiny fraction of the energy that power-hungry computers currently consume. It could also be used to process live video over cellular networks, to enable high-speed image classification on a spacecraft millions of kilometers from Earth, or even to give a user a latency-free conversation with their smart home device.

    “Every time you want to run a neural network, you have to run the program, and how fast you can run the program depends on how fast you can pipe it in from memory. Our pipe is enormous; it corresponds to sending an entire feature-length film over the internet every millisecond or so. That is how quickly data enters our system, and it can compute as fast as that,” says senior author Dirk Englund, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) and a member of the MIT Research Laboratory of Electronics.

    Joining Englund on the paper are lead author and EECS graduate student Alexander Sludds, EECS graduate student Saumil Bandyopadhyay, Research Scientist Ryan Hamerly, and others from MIT, MIT Lincoln Laboratory, and Nokia Corporation. The study will be published in Science.

    Lightening the load

    Neural networks are machine-learning models that use layers of connected nodes, or neurons, to recognize patterns in datasets and perform tasks such as speech recognition and image classification. But these models can contain billions of weight parameters, numeric values that transform input data as it is processed. The weights must be stored in memory, and at the same time, transforming the data involves billions of algebraic computations, which consume a great deal of power.
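    For a rough sense of scale, consider a single fully connected layer: the number of weights that must be fetched from memory and the number of multiply-accumulate operations both grow with the product of the layer's input and output sizes. The short Python sketch below, with made-up layer sizes, counts both.

        import numpy as np

        # Hypothetical fully connected layer: 4,096 inputs -> 4,096 outputs.
        n_in, n_out = 4096, 4096

        # The weight matrix alone holds n_in * n_out parameters...
        W = np.random.randn(n_out, n_in).astype(np.float32)
        x = np.random.randn(n_in).astype(np.float32)

        # ...and applying the layer costs one multiply-accumulate per weight.
        y = W @ x

        print(f"weights fetched from memory: {W.size:,}")   # 16,777,216
        print(f"multiply-accumulate ops:     {W.size:,}")   # one per weight
        print(f"weight data moved: {W.nbytes / 1e6:.0f} MB per pass")

    A model with dozens of such layers quickly reaches hundreds of megabytes of weights, which is why fetching them dominates both latency and energy on a small device.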

    Moving data (in this case, the weights of the neural network) from memory to the parts of a computer that perform the actual computation is one of the biggest limiters of speed and energy efficiency, Sludds says.

    Consequently, he says, “our idea was, why don’t we take all that heavy lifting—the process of fetching billions of weights from memory—and move it away from the edge device and put it someplace where we have abundant access to power and memory, giving us the ability to fetch those weights quickly.”

    They developed a neural network architecture called Netcast, which stores weights in a central server that is connected to a novel piece of hardware called a smart transceiver. This smart transceiver, a thumb-sized chip that can receive and transmit data, uses silicon photonics technology to fetch trillions of weights from memory every second.
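    Conceptually, the division of labor looks something like the sketch below. The structure here is our illustration, not code from the paper: the server streams weights out continuously, and the client combines each arriving batch with its local input data without ever storing the full model.

        import numpy as np

        # Conceptual sketch of the Netcast client/server split. The
        # structure is assumed for illustration; it is not from the paper.

        def server_stream_weights(W):
            """Server side: fetch weights from abundant local memory and
            stream them out one row at a time, as if encoded onto light."""
            for row in W:
                yield row

        def client_infer(x, weight_stream):
            """Client side: combine each arriving row of weights with the
            local input; only one row is 'in flight' at a time, so the
            client never holds the whole model."""
            return np.array([row @ x for row in weight_stream])

        W = np.random.randn(8, 16)   # the model lives on the server
        x = np.random.randn(16)      # sensor data lives on the client
        y = client_infer(x, server_stream_weights(W))
        print(y.shape)               # (8,) -- layer output computed at the edge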

    The transceiver receives the weights as electrical signals and imprints them onto light waves. Because the weight data are encoded as bits (1s and 0s), the transceiver converts them by switching lasers: a laser is turned on for a 1 and off for a 0. It combines these light waves and then periodically transmits them across a fiber-optic network, so a client device does not need to contact the server to receive them.
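    In Python, that on-off encoding can be mimicked in a few lines. The 8-bit framing below is an assumption made for illustration; the paper's actual modulation format may differ.

        # Each (already-quantized) weight is serialized to bits, and each
        # bit becomes a laser state: on for 1, off for 0. The 8-bit width
        # is an assumption for illustration.

        def weight_to_bits(weight, width=8):
            """Serialize one integer weight to 'width' bits, MSB first."""
            return [(weight >> i) & 1 for i in reversed(range(width))]

        def bits_to_laser_states(bits):
            """Map each bit to a laser state: 1 -> on, 0 -> off."""
            return ["on" if b else "off" for b in bits]

        for w in [3, 200, 47]:                  # hypothetical 8-bit weights
            bits = weight_to_bits(w)
            print(w, bits, bits_to_laser_states(bits))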

    “Optics is fantastic since it offers a variety of methods for transmitting data. In contrast to electronics, you can place data on multiple colors of light, which allows for significantly higher data throughput and wider bandwidth,” says Bandyopadhyay.
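    The payoff of using multiple colors (wavelength-division multiplexing) is easy to see numerically; the channel count and per-channel rate below are illustrative assumptions, not figures from the paper.

        # Illustrative wavelength-division multiplexing arithmetic: total
        # throughput scales with the number of colors used. Both numbers
        # below are assumptions, not figures from the paper.
        per_channel_gbps = 10     # hypothetical data rate per color of light
        n_wavelengths = 100       # hypothetical number of wavelength channels

        total_gbps = per_channel_gbps * n_wavelengths
        print(f"{n_wavelengths} colors x {per_channel_gbps} Gb/s = {total_gbps} Gb/s")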

    Trillions per second

    Once the light waves arrive at the client device, a simple optical component called a broadband Mach-Zehnder modulator uses them to perform extremely fast analog computation. This involves encoding input data from the device, such as sensor data, onto the weights. The device then sends each individual wavelength to a receiver that detects the light and measures the outcome of the computation.
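    Numerically, the analog step amounts to an elementwise product of the incoming weight stream with the local inputs, with the detector accumulating the products into a single result. A toy simulation, assuming idealized (lossless, noiseless) optics:

        import numpy as np

        # Toy model of the analog multiply-accumulate done in the optical
        # domain. Idealized: no loss, no detector noise, perfect modulation.
        rng = np.random.default_rng(0)
        weights = rng.uniform(0, 1, 1024)   # weight values arriving as light
        inputs = rng.uniform(0, 1, 1024)    # local sensor data on the modulator

        # The modulator scales each incoming light amplitude by its input
        # value (an elementwise product); the photodetector integrates the
        # result, accumulating all the products into one analog output.
        detected = (weights * inputs).sum()

        print(f"optical result: {detected:.4f}")
        print(f"digital check:  {np.dot(weights, inputs):.4f}")  # matches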

    The researchers devised a way to use this modulator to perform billions of multiplications per second, vastly increasing the device's computation speed while consuming only a tiny amount of power.

    “Making something more energy-efficient lets you run it faster, but there is a cost involved. We have developed a device that requires only a milliwatt of power to run and can perform billions of multiplications per second. That is an order-of-magnitude improvement in both speed and energy efficiency,” says Sludds.
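    Taking those stated figures at face value, the implied energy per operation is on the order of a picojoule. A quick check, interpreting “billions” as 10^9 (an assumption):

        # Back-of-the-envelope energy per operation from the quoted figures:
        # 1 mW of power, billions of multiplications per second ("billions"
        # is taken as 1e9 here, which is an assumption).
        power_watts = 1e-3
        ops_per_second = 1e9

        joules_per_op = power_watts / ops_per_second
        print(f"{joules_per_op:.1e} J per multiplication")  # 1.0e-12 J = 1 pJ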

    To evaluate this architecture, the researchers sent weights over an 86-kilometer fiber that connects their lab to MIT Lincoln Laboratory. Netcast enabled fast machine learning with high accuracy: 98.7 percent for image classification and 98.8 percent for digit recognition.

    “Even though we had to perform some calibration, I was astonished by how little work was required to obtain such high accuracy right out of the box. We were able to achieve commercially relevant accuracy,” adds Hamerly.

    Moving forward, the researchers want to iterate on the smart transceiver chip to improve performance even further. They also want to shrink the receiver, which is currently the size of a shoebox, down to a single chip that could fit inside a smart device such as a cellphone.

    The research was funded, in part, by NTT Research, the National Science Foundation, the Air Force Office of Scientific Research, the Air Force Research Laboratory, and the Army Research Office.
