News and posts

Tutorial: Stochastic Computing Hardware

Time: Thursday, October 11th, 14:00
Speaker: Tara HAMILTON, Western Sydney University

In this tutorial we will explore two aspects of Stochastic Computing Hardware: Stochastic Electronics and Stochastic Computation. Stochastic Electronics is based on the idea that “noise” (in all its forms) can be used constructively to improve computational performance in “brain-like” processors. Here I will show several examples of how stochasticity can enhance performance while also reducing power consumption and circuit footprint. The other idea, Stochastic Computation, dates back to the 1960s and provides an alternative to conventional binary representations of information in digital systems. Stochastic computation can significantly decrease the computational cost of traditionally high-cost digital operations such as multiplication. Stochastic electronics and stochastic computation are closely related ideas, both rooted in the probabilistic nature of neural processing. Throughout this tutorial I will discuss how these ideas can be (and have been) leveraged in spintronics.
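As an illustration of the stochastic-computation idea mentioned above, in which a value in [0, 1] is encoded as the probability that a bit in a stream equals 1, multiplication reduces to a single AND gate. The following minimal Python sketch shows this; the stream length and encoding are illustrative assumptions and not the speaker's hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_stream(p, n_bits=10_000):
    """Encode a value p in [0, 1] as a bit stream with P(bit = 1) = p."""
    return rng.random(n_bits) < p

def from_stream(bits):
    """Decode a stream back to a value by taking the fraction of 1s."""
    return bits.mean()

a, b = 0.6, 0.3
# With two independent streams, multiplication reduces to a bitwise AND,
# which in hardware is a single logic gate instead of a full multiplier.
product = from_stream(to_stream(a) & to_stream(b))
print(product)   # close to 0.18 = a * b; accuracy grows with stream length
```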


Reservoir computing implemented with skyrmion fabrics

Time: Thursday, October 11th, 11:10
Speaker: George BOURIANOFF, Intel Corporation – retired

Artificial Intelligence (AI)-related hardware and software products are projected to be the fastest-growing segment of the semiconductor and microelectronics industries, with compound annual growth rates expected to exceed 100% over the next 10 years. Reservoir Computing (RC) is one promising computational approach that enables the use of naturally occurring dynamic systems for AI applications. It is a type of recurrent neural network commonly used for recognizing and predicting spatio-temporal events, relying on a complex hierarchy of nested feedback loops to generate a memory functionality. The RC paradigm does not require any knowledge of the reservoir topology or node weights and can therefore utilize naturally existing networks formed by a wide variety of physical processes. Most prior efforts have focused on utilizing memristive or optical techniques to implement recurrent neural networks. This presentation examines the potential of magnetic skyrmion fabrics, and the complex current patterns which form in them, as an attractive physical instantiation for Reservoir Computing. We present new results showing that the application of 100 mV, 1 GHz pulse trains of either square or sinusoidal pulses generates a strong dynamic response of approximately 4% in the skyrmion fabric. The response is observed through a dynamically varying magnetoresistance with a similar time dependence, strongly suggesting that the applied signal induces a magneto-dynamic response in the fabric. We hypothesize that this strong magnetoresistive response provides the basis for full RC functionality.
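To make the RC paradigm concrete, here is a minimal echo-state-network sketch in Python: the reservoir's topology and internal weights are fixed and random (standing in, purely for illustration, for a physical reservoir such as a skyrmion fabric), and only a linear readout is trained. The reservoir size, spectral radius, toy prediction task, and ridge parameter are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed random reservoir: its topology and internal weights are never trained,
# mirroring the RC paradigm described above.
n_in, n_res = 1, 200
W_in  = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.normal(0, 1, (n_res, n_res))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # rescale spectral radius to 0.9

def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a noisy sine wave.
t = np.arange(2000)
u = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)
X, y = run_reservoir(u[:-1]), u[1:]

# Only the linear readout is trained (ridge regression).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train NRMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2)) / np.std(y))
```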


Reservoir Computing with Spin-Torque Nano-Oscillators

Time: Thursday, October 11th, 10:10
Speaker: Mark STILES, NIST

Human brains can solve many problems with orders of magnitude more energy efficiency than traditional computers. As the importance of such problems, such as image, voice, and video recognition, increases, so does the drive to develop computers that approach the energy efficiency of the brain. Progress must come on many fronts, ranging from new algorithms to novel devices that are optimized to function in ways more suited to these algorithms than the digital transistors that have been optimized for the present approaches to computing. Magnetic tunnel junctions have several properties that make them attractive for such applications. They are actively being developed for integration into CMOS integrated circuits to provide non-volatile memory. This development makes it feasible to consider other geometries that have different properties. By changing the shape of the devices, they can be made into non-volatile binary devices, thermally unstable superparamagnetic binary devices, or non-linear oscillators. In this talk, I describe using magnetic tunnel junctions operated as non-linear oscillators as the basis for reservoir computing. Reservoir computing uses recurrent neural networks to solve problems such as voice recognition. Due to their state dependence, recurrent neural networks can be quite difficult to train. In reservoir computing, the training is simplified by specifying the input weights, letting the internal weights of the network take their natural values, and training only the output weights. A further simplification is to use a single device as the reservoir by exploiting time multiplexing, the natural fading memory of the device, and external feedback. Testing this approach with standard datasets shows that this simplified approach can achieve state-of-the-art results with a nanoscale reservoir.
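The single-device, time-multiplexed reservoir described above can be caricatured in a few lines of Python: one nonlinear node is reused many times per input sample through a random input mask, and its fading memory (here an explicit feedback term) plays the role of the recurrent network. This is a simplified discrete-time sketch, not the magnetic-tunnel-junction experiment; the mask, virtual-node count, gains, and the toy memory task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

n_virtual = 50                              # virtual nodes from time multiplexing
mask = rng.choice([-1.0, 1.0], n_virtual)   # fixed random input mask
alpha, beta = 0.5, 0.5                      # fading-memory feedback and input scaling

def single_node_reservoir(u):
    """One nonlinear node reused n_virtual times per input sample.

    The feedback term alpha * x supplies the fading memory that a
    physical oscillator would provide intrinsically."""
    x, states = np.zeros(n_virtual), []
    for u_t in u:
        x = np.tanh(alpha * x + beta * mask * u_t)
        states.append(x.copy())
    return np.array(states)

# Toy target: reproduce the input delayed by one step (a memory task).
u = rng.uniform(-1, 1, 1000)
X, y = single_node_reservoir(u)[1:], u[:-1]
W_out = np.linalg.lstsq(X, y, rcond=None)[0]   # train only the readout
print("memory-task NRMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2)) / np.std(y))
```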


Tutorial: Reservoir Computing

Time: Thursday, October 11th, 09:00
Speaker: Daniel BRUNNER, Femto-ST

I will present fully implemented large-scale recurrent photonic neural networks. We created photonic parallel connections for up to 2025 nonlinear oscillators and realized efficiently converging photonic reinforcement learning. The system is applied to a chaotic signal prediction benchmark.


Tutorial: Spin-wave logic: from Boolean to neuromorphic computing

Time: Wednesday, October 10th, 15:30
Speaker: Philipp PIRRO, TU Kaiserslautern

Today’s computational technology based on CMOS has experienced enormous scaling of data-processing capability, as well as of price and energy consumption per logic element. However, to continue this development successfully into the future, and with the rapid development of artificial intelligence and neural networks in mind, complementary approaches to conventional logic schemes are needed. One of these alternative routes is wave-based computing, which, however, has long suffered from the lack of a down-scalable system that could be interconnected with conventional CMOS technology. In this context, spin waves, the elementary excitations of the spin system, and their quanta, the magnons, have been intensively investigated and successfully brought to the micro- and nanoscale. Also, the connections to conventional electronic and spintronic circuits have been established within a new field known as magnon spintronics. Due to their large variety of intrinsic linear and nonlinear wave phenomena, spin waves constitute a promising candidate for nanoscale wave-based computing and data processing in general.
We will first discuss the different computing approaches based on (spin-) waves and the advantages and challenges of an interference-based logic. Then, we present a selection of experimentally realized (macroscopic) prototypes for spin-wave-based Boolean logic, such as the majority gate and the magnon transistor. To show the potential of advanced nanoscopic devices, micromagnetic simulations demonstrating the working principles of integrated magnonic circuits are presented. Inspired by the hybrid analog and digital data-processing structure of biological brains, we use the unique properties of these circuits to develop an approach to realize neuromorphic computing based on spin waves.
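As a toy model of the interference-based logic and the majority gate mentioned above (not a micromagnetic simulation), the following Python sketch encodes logic levels in the phase of equal-amplitude waves and reads the majority out of their coherent superposition; the phase encoding (0 and π) is an illustrative assumption.

```python
import numpy as np

def majority_gate(a, b, c):
    """Phase-encoded majority via wave interference (toy model).

    Logic 0 -> phase 0, logic 1 -> phase pi.  Three equal-amplitude waves
    are superposed; the phase of the sum gives the majority of the inputs."""
    phases = np.pi * np.array([a, b, c])
    total = np.sum(np.exp(1j * phases))      # coherent superposition
    return int(np.real(total) < 0)           # resulting phase ~pi -> logic 1

for bits in [(0, 0, 0), (0, 1, 0), (1, 1, 0), (1, 1, 1)]:
    print(bits, "->", majority_gate(*bits))
```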


Tutorial: Artificial Spin Ice and Elements of Control for Computation

Time: Wednesday, October 10th, 14:00
Speaker: Laura HEYDERMAN, ETH Zurich

Artificial spin systems [1] have received much attention in recent years, following the creation of arrays of dipolar-coupled nanomagnets with frustrated geometries analogous to those of the rare-earth titanate pyrochlores [2], which are referred to as spin ices because geometric frustration leads to a spin arrangement analogous to the proton ordering in water ice [3]. Such arrays of nanomagnets are appropriately named artificial spin ice [4]: single-domain elongated magnets, which have Ising-like moments pointing along one of two directions, are commonly placed on the sites of a square or kagome lattice. At every vertex where the magnets meet, there is a characteristic low-energy configuration where the so-called ice rule is obeyed. Going beyond these basic designs, it is possible to devise artificial spin systems with various intricate motifs that display behaviour characteristic of a particular geometry.
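As a concrete illustration of the ice rule just mentioned (for square artificial spin ice, the favoured vertices have two moments pointing in and two pointing out), here is a short Python sketch that simply enumerates the sixteen possible four-moment vertex configurations; it is a counting exercise added for illustration, not part of the tutorial material.

```python
from itertools import product

def obeys_ice_rule(moments):
    """moments: four values, +1 = moment pointing into the vertex,
    -1 = pointing out.  The square-ice rule asks for two-in / two-out."""
    return sum(m == +1 for m in moments) == 2

configs = list(product([+1, -1], repeat=4))
n_ice = sum(obeys_ice_rule(c) for c in configs)
print(f"{n_ice} of {len(configs)} vertex configurations obey the ice rule")
# prints "6 of 16": the two-in / two-out states favoured at each vertex
```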
In this tutorial, I will introduce this topic with the aim to provide the link between artificial spin ice and computation by covering a number of possibilities for control. For example, the creation and separation of emergent magnetic monopoles and their associated Dirac strings on application of a magnetic field can be controlled by modifying the shape of particular nanomagnets in the array [5]. Such modifications to individual nanomagnets can also be used to control the chirality of artificial kagome spin ice building blocks consisting of a small number of hexagonal rings of magnets [6]. Indeed, chirality control is a recurring theme, with a further example being the control of the dynamic chirality, which can be achieved in the so-called chiral ice [7], where the stray field energy associated with the magnetic configurations at the edges of the array defines the sense of rotation of the average magnetisation on thermal relaxation. In addition, the chirality of domain walls will determine how they pass through a connected artificial kagome spin ice [8].
I will address other possibilities to control and access magnetic information in artificial spin ice including further geometries, temperature, local magnetic and electric fields, fast dynamics, as well as methods for computation [9].

[1] L.J. Heyderman and R.L. Stamps, J. Phys.: Condens. Matter 25, 363201 (2013)
[2] R.F. Wang et al. Nature 439, 303 (2006)
[3] M.J. Harris et al. Phys. Rev. Lett. 79, 2554 (1997)
[4] C. Nisoli, R. Moessner and P. Schiffer, Rev. Mod. Phys. 85, 1473 (2013)
[5] E. Mengotti et al. Nat. Phys. 7, 68 (2011); R.V. Hügli et al. Phil. Trans. Roy. Soc. A 370, 5767 (2012)
[6] R.V. Chopdekar et al. New J. Phys. 15, 125033 (2013)
[7] S. Gliga et al. Nat. Mater. 16, 1106 (2017)
[8] K. Zeissler et al. Sci. Rep. 3, 1252 (2013); A. Pushp et al. Nat. Phys. 9, 505 (2013)
[9] H. Arava et al. Nanotechnology 29, 265205 (2018); P. Gypens et al. Phys. Rev. Appl. 9, 034004 (2018)


Associative memory operation using analog spin-orbit torque device

Time: Wednesday, October 10th, 11:10
Speaker: Shunsuke FUKAMI, Tohoku University

Neuromorphic computing has attracted great attention because of its capability to execute complex tasks that conventional von Neumann computers cannot readily complete. Here, we present an artificial neural network based on an analog spintronic device. Spintronic devices, in general, offer non-volatility and virtually infinite endurance, showing promise for the realization of low-power “edge” neuromorphic computing hardware with online learning capability. The antiferromagnet–ferromagnet heterostructure operated by spin-orbit torque employed here allows us to control the magnetization state in an analog manner and thus can be used as an artificial synapse [1]. Using the developed artificial neural network with the analog spintronic device, we show a proof-of-concept demonstration of an associative memory operation based on the Hopfield model, a representative model of neuromorphic computing [2].
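For readers unfamiliar with the Hopfield model invoked above, here is a minimal software sketch of associative memory: patterns are stored with a Hebbian rule and a corrupted pattern is recovered by iterating the network. This is the textbook model only, not the spintronic implementation; the pattern count, network size, and noise level are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products, no self-connections."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, steps=10):
    """Iterate sign updates until the network settles on a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Store two random +/-1 patterns, then recall one from a corrupted version.
patterns = rng.choice([-1.0, 1.0], size=(2, 64))
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:10] *= -1                      # flip 10 of the 64 bits
print("recovered:", np.array_equal(recall(W, noisy), patterns[0]))
```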
This work is partly supported by R&D Project for ICT Key Technology of MEXT, JSPS KAKENHI No. 17H06093, JST-OPERA, and ImPACT Program of CSTI.
[1] S. Fukami et al., Nature Materials, vol. 15, 535 (2016).
[2] W. A. Borders et al., Appl. Phys. Express, vol. 10, 013007 (2017).


Bioinspired Computing Leveraging the Non-Linearity of Magnetic Nano-Oscillators

Time: Wednesday, October 10th, 10:10
Speaker: Damien QUERLIOZ, Integnano – C2N

The brain displays many features typical of non-linear dynamical networks, such as synchronization or chaotic behavior. These observations have inspired a whole class of models that harness the power of complex non-linear dynamical networks for computing. In this framework, neurons are modeled as non-linear oscillators, and synapses as the coupling between oscillators. These abstract models are very good at processing waveforms for pattern recognition or at generating precise time sequences useful for robotic motion. However, there are very few hardware implementations of these systems, because large numbers of interacting non-linear oscillators are needed. In this talk, I will explain why coupled magnetic nano-oscillators are very promising for realizing cognitive computing at the nanometer and nanosecond scale. Then, I will present our experimental and theoretical results. In particular, I will show how we can perform speech recognition using the transient dynamics and the synchronization of a few oscillators. I will also show how this line of research can take inspiration from both neuroscience and the field of machine learning. I will finish with the open questions raised by our research.
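To illustrate the kind of synchronization of coupled non-linear oscillators referred to above, here is a standard Kuramoto-model sketch in Python (a generic phase-oscillator model, not the spin-torque nano-oscillator equations): below a critical coupling the oscillators drift incoherently, while above it they lock and the order parameter r approaches 1. The oscillator count, frequency spread, and coupling values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def kuramoto_order(K, n=100, dt=0.01, steps=5000):
    """Simulate n coupled phase oscillators and return the final
    synchronization order parameter r in [0, 1]."""
    omega = rng.normal(0.0, 1.0, n)            # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)
    for _ in range(steps):
        mean_field = np.mean(np.exp(1j * theta))
        r, psi = np.abs(mean_field), np.angle(mean_field)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return np.abs(np.mean(np.exp(1j * theta)))

for K in (0.5, 1.0, 2.0, 4.0):
    print(f"coupling K={K}: order parameter r = {kuramoto_order(K):.2f}")
```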


Critical brain dynamics, a brief overview

Time: Wednesday, October 10th, 09:00
Speaker: Dante CHIALVO, CEMSC3-UNSAM

For two decades we have proposed that the brain’s most fascinating properties are related to the fact that it always stays close to a second-order phase transition. In such conditions, the collective of neuronal groups can reliably generate robust and flexible behavior, because at the critical point there is the largest number of metastable states to choose from. Here we review the motivation, arguments and recent results, as well as some implications for neuromorphic computing.
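A standard toy model used in the critical-brain literature (not taken from this talk) is a branching process in which each active unit triggers on average sigma others; sigma = 1 is the critical point separating quickly dying activity from runaway activity. The following Python sketch, with an arbitrary size cap, only illustrates that qualitative transition.

```python
import numpy as np

rng = np.random.default_rng(5)

def avalanche_size(sigma, cap=10_000):
    """Size of one avalanche of a branching process: every active unit
    activates a Poisson(sigma) number of units at the next step."""
    active, size = 1, 1
    while active > 0 and size < cap:
        active = rng.poisson(sigma * active)
        size += active
    return min(size, cap)

for sigma in (0.8, 1.0, 1.2):
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    print(f"sigma={sigma}: mean (capped) avalanche size = {np.mean(sizes):.1f}")
```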

MemComputing: leveraging physics to compute efficiently

Time: Tuesday, October 9th, 15:00
Speaker: Massimiliano DI VENTRA, UCSD

It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization, which may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to efficiently compute a wide range of hard problems. In this talk I will discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing [1, 2, 3, 4]. As examples, I will show the polynomial-time solution of prime factorization, the search version of the subset-sum problem [5], and approximations to Max-SAT beyond the inapproximability gap [6] using polynomial resources and self-organizing logic gates, namely gates that self-organize to satisfy their logical proposition [5]. I will also show that these machines are described by a Witten-type topological field theory, and that they compute via an instantonic phase, implying that they are robust against noise and disorder [7]. The digital memcomputing machines we propose can be efficiently simulated, are scalable, and can be easily realized with available nanotechnology components. Work supported in part by MemComputing, Inc. (http://memcpu.com/).
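For context, the search version of the subset-sum problem mentioned above asks for a subset of given integers that adds up to a target. The short Python sketch below solves a tiny instance with ordinary dynamic programming, purely to define the problem; it has nothing to do with the memcomputing machines discussed in the talk, and the example numbers are arbitrary.

```python
def subset_sum(values, target):
    """Search version of subset-sum: return a subset of `values` summing to
    `target`, or None.  Classical dynamic programming, shown only to define
    the problem -- not the memcomputing approach described in the talk."""
    reachable = {0: []}                       # reachable sum -> witnessing subset
    for v in values:
        for s, subset in list(reachable.items()):
            if s + v not in reachable:
                reachable[s + v] = subset + [v]
    return reachable.get(target)

print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # prints [4, 5]
```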
[1] F. L. Traversa and M. Di Ventra, Universal Memcomputing Machines, IEEE Transactions on Neural Networks and Learning Systems 26, 2702 (2015).
[2] M. Di Ventra and Y.V. Pershin, Computing: the Parallel Approach, Nature Physics 9, 200 (2013).
[3] M. Di Ventra and Y.V. Pershin, Just add memory, Scientific American 312, 56 (2015).
[4] M. Di Ventra and F.L. Traversa, Memcomputing: leveraging memory and physics to compute efficiently, J. Appl. Phys. 123, 180901 (2018).
[5] F. L. Traversa and M. Di Ventra, Polynomial-time solution of prime factorization and NP-complete problems with digital memcomputing machines, Chaos: An Interdisciplinary Journal of Nonlinear Science 27, 023107 (2017).
[6] F. L. Traversa, P. Cicotti, F. Sheldon, and M. Di Ventra, Evidence of an exponential speed-up in the solution of hard optimization problems, Complexity 2018, 7982851 (2018).
[7] M. Di Ventra, F. L. Traversa and I.V. Ovchinnikov, Topological field theory and computing with instantons, Annalen der Physik 529, 1700123 (2017).