Reservoir Computing with Spin-Torque Nano-Oscillators

Time: Thursday, October 11th, 10:10
Speaker: Mark STILES, NIST

Human brains can solve many problems with orders of magnitude greater energy efficiency than traditional computers. As the importance of such problems, including image, voice, and video recognition, grows, so does the drive to develop computers that approach the energy efficiency of the brain. Progress must come on many fronts, ranging from new algorithms to novel devices optimized to function in ways better suited to these algorithms than the digital transistors that have been optimized for present approaches to computing. Magnetic tunnel junctions have several properties that make them attractive for such applications. They are actively being developed for integration into CMOS integrated circuits to provide non-volatile memory, and this development makes it feasible to consider other geometries with different properties. Depending on their shape, the devices can be non-volatile binary memories, thermally unstable superparamagnetic binary devices, or non-linear oscillators. In this talk, I describe using magnetic tunnel junctions operated as non-linear oscillators as the basis for reservoir computing. Reservoir computing uses recurrent neural networks to solve problems such as voice recognition. Because of their state dependence, recurrent neural networks can be quite difficult to train. In reservoir computing, training is simplified by fixing the input weights, letting the internal weights of the network take their natural values, and training only the output weights. A further simplification is to use a single device as the reservoir by exploiting time multiplexing, the device's natural fading memory, and external feedback. Tests on standard datasets show that this simplified approach can achieve state-of-the-art results with a nanoscale reservoir.
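
To make the single-node, time-multiplexed scheme concrete, the following is a minimal Python sketch in which a software tanh nonlinearity with feedback stands in for the oscillating magnetic tunnel junction. The random input mask, feedback strength, number of virtual nodes, and the delayed-copy toy task are illustrative assumptions, not details from the talk; the point is only that the input mask and node dynamics are fixed while the readout weights alone are trained (here by ridge regression).

```python
import numpy as np

# Sketch of time-multiplexed single-node reservoir computing.
# A tanh nonlinearity with delayed feedback stands in for the hardware
# oscillator; all parameter values below are illustrative.

rng = np.random.default_rng(0)

N_VIRTUAL = 50   # virtual nodes created by time multiplexing
FEEDBACK = 0.5   # fading-memory / external-feedback strength
SCALE = 1.0      # input scaling

def run_reservoir(inputs):
    """Map a 1-D input sequence to virtual-node states.

    Each input sample is multiplied by a fixed random mask and fed
    sequentially to a single nonlinear node; the node's successive
    responses form the virtual-node states for that sample.
    """
    mask = rng.uniform(-1, 1, N_VIRTUAL)          # fixed random input weights
    states = np.zeros((len(inputs), N_VIRTUAL))
    x = np.zeros(N_VIRTUAL)                       # current virtual-node states
    for t, u in enumerate(inputs):
        for i in range(N_VIRTUAL):
            # the previous virtual node supplies the delayed feedback path
            prev = x[i - 1] if i > 0 else x[-1]
            x[i] = np.tanh(SCALE * mask[i] * u + FEEDBACK * prev)
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Train only the output weights with ridge regression."""
    S = np.hstack([states, np.ones((len(states), 1))])   # add bias column
    return np.linalg.solve(S.T @ S + ridge * np.eye(S.shape[1]), S.T @ targets)

# Toy usage: learn to reproduce a delayed copy of a random input signal.
u = rng.uniform(-1, 1, 500)
y_target = np.roll(u, 3)                      # target: input delayed by 3 steps
X = run_reservoir(u)
W = train_readout(X[50:], y_target[50:])      # discard the initial transient
y_pred = np.hstack([X, np.ones((len(X), 1))]) @ W
print("NMSE:", np.mean((y_pred[50:] - y_target[50:]) ** 2) / np.var(y_target))
```

In this sketch only `train_readout` involves learning; everything before it plays the role of the physical device, which is why the approach tolerates fixed, unoptimized internal dynamics.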
