2018 Abstracts SmN

Tutorial: Reinforcement Learning

Time: Monday, October 8th, 10:20
Speaker: Eleni VASILAKI, University of Sheffield
In this tutorial, I will present the key concepts behind Reinforcement Learning (RL) and give an overview of various reinforcement learning frameworks. I will also discuss its link to Deep Learning and comment on what RL tells us about happiness.

Tutorial: The Landscape of Deep Learning: a Quick Overview

Time: Monday, October 8th, 09:10
Speaker: Teodora PETRISOR, Thales Group
This talk is a walk-through of the basic principles, theoretical concepts, and some of the successful models in Artificial Neural Networks today, with a particular focus on the so-called Deep Learning paradigm, all from an algorithmic viewpoint. We will also highlight some of the challenges to be addressed for the large-scale industrialization of these methods.


All-optical switching and brain-inspired concepts for low energy information processing

Time: Monday, October 8th, 14:00
Speaker: Theo RASING, Radboud University

The explosive growth of big data and artificial intelligence offers huge potential for new digital products and unexplored business models. While data has become an indispensable part of modern society, the sheer amount of data generated every second is breathtaking, both in its scale and in its growth, and the number of devices generating these data is rapidly expanding. This pushes not only current technologies to their limits, but also our energy production: ICT and data centres already consume around 7% of the world's electricity production, and at the current growth rate of ICT technologies this energy consumption is rapidly becoming unsustainable. In stark contrast, the human brain, with its intricate architecture combining the processing and storing of information, consumes only about 10 watts while having a capacity similar to that of a supercomputer consuming around 10 megawatts.
We try to develop materials and concepts that mimic the efficiency of the brain by combining local processing and storage, using adaptable physical interactions that can implement learning algorithms. We demonstrate, by modelling, that a reconfigurable and self-learning structure can be achieved which implements the prototype perceptron model of a neural network based on magneto-optical interactions. Importantly, we show that optimization of the synaptic weights is achieved by a global feedback mechanism, such that learning does not rely on external storage or additional optimization schemes. For the experimental realization of adaptive synaptic structures, we use optically controllable magnetization in a thin Co/Pt film [1], driven by circularly polarized picosecond pulse trains. The combined stochastic/deterministic nature of all-optical switching in this material [2] makes it possible to vary the magneto-optical Faraday rotation continuously with the number of pulses, yielding the necessary ingredient for realizing a perceptron-like structure. First results of such a learning structure will be demonstrated.

1. C.-H. Lambert et al., Science 345, 1337 (2014)
2. R. Medapalli et al., Phys. Rev. B 96, 224421 (2017)
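
To make the perceptron idea in the abstract concrete: the sketch below is a minimal, hypothetical illustration of a standard perceptron whose weights are adjusted only by a global scalar feedback signal (target minus output), loosely mirroring the talk's point that learning can proceed without external storage or additional optimization schemes. It is not the authors' magneto-optical model; all names and parameters here are illustrative.

```python
def train_perceptron(samples, lr=0.1, epochs=50):
    """Train weights and bias on (inputs, target) pairs using only
    a global error signal fed back to every synapse."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # single global feedback signal
            # every weight updates from the same global error
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    """Hard-threshold output of the trained perceptron."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn a linearly separable function (logical AND)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

In the experiment described above, the role of the continuously adjustable weight would be played by the magneto-optical Faraday rotation, tuned by the number of optical pulses; here it is just a floating-point number.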
