Feb 5 – 7, 2024
Universität Salzburg (Paris-Lodron-Universität)
Europe/Berlin timezone
Registration and call for abstracts extended to 5 January

Contribution List

34 out of 34 displayed
  1. Antonin Raffin
    2/5/24, 9:30 AM
    Keynote

    This talk covers the challenges and best practices for designing and running real-world reinforcement learning (RL) experiments.
    The idea is to walk through the different steps of RL experimentation (task design, choosing the right algorithm, implementing safety layers) and also provide practical advice on how to run experiments and troubleshoot common problems.

    Slides are also online.

    Go to contribution page
  2. Felix Berkenkamp
    2/5/24, 11:00 AM
  3. Dr Andrea Santamaria Garcia (KIT)
    2/5/24, 1:00 PM
    Invited Talk

    Reinforcement Learning (RL) has demonstrated its effectiveness in solving control problems in particle accelerators. A challenging application is the control of the microbunching instability (MBI) in synchrotron light sources. Here the interaction of an electron bunch with its emitted coherent synchrotron radiation leads to complex non-linear dynamics and pronounced fluctuations.

    Addressing...

    Go to contribution page
  4. Sabrina Appel (GSI)
    2/5/24, 1:30 PM
    Invited Talk

    The complexity of the GSI/FAIR accelerator facility demands a high level of automation in order to maximize time for physics experiments. This talk will give an overview of different optimization problems at GSI, from transfer lines to synchrotrons to the fragment separator. Starting with a summary of previous successful automation, the talk will focus on the latest developments in recent...

    Go to contribution page
  5. Annika Eichler (DESY)
    2/5/24, 2:00 PM
    Invited Talk

    DESY has many years of experience in the optimization and control of particle accelerators. Over the last three years, reinforcement learning has been explored as well. This talk summarizes the results of this investigation and gives an outlook. Further control and optimization challenges for operation are presented and discussed.

    Go to contribution page
  6. Dr Gregor Hartmann (HZB)
    2/5/24, 2:30 PM
    Invited Talk

    In order to improve BESSY's experimental environment, several ML-based applications are used at HZB. These efforts cover challenges arising at the accelerator, the beamlines and the detectors at the experiment. This talk provides an overview of these activities, focusing on RL and offering insights into the optimization of a beamline, the tuning of an e-gun, as well as electron beam positioning in BESSY's...

    Go to contribution page
  7. Dr Verena Kain (CERN)
    2/5/24, 4:00 PM
    Invited Talk

    CERN has a long tradition of model-based feedforward control with a high level of abstraction. With the recently approved project “Efficient Particle Accelerators”, the CERN management commits to going one step further and investing heavily in automation on all fronts. The initiative will therefore also further push data-driven surrogate models, sample-efficient optimisation and continuous control...

    Go to contribution page
  8. Dr Auralee Edelen (SLAC)
    2/5/24, 4:30 PM
  9. 2/5/24, 5:00 PM
  10. Georg Schäfer (Josef Ressel Centre for Intelligent and Secure Industrial Automation)
    2/6/24, 9:10 AM
    Student Talk

    The Quanser Aero2 system is an advanced laboratory experiment designed for exploring aerospace control systems, featuring two motor-driven fans on a pivot beam for precise control. Its capability to lock axes individually offers both single degree of freedom (DOF) and two DOF operation. The system’s non-linear characteristics and adaptability to multivariable configurations make it especially...

    Go to contribution page
  11. Mr Hannes Waclawek (Josef Ressel Centre for Intelligent and Secure Industrial Automation)
    2/6/24, 9:30 AM
    Student Talk

    The success and fast pace of Machine Learning (ML) in the past decade were also enabled by modern gradient descent optimizers embedded into ML frameworks such as TensorFlow. In the context of a doctoral research project, we investigate how these optimizers can be utilized directly, outside the scope of neural networks. This approach holds the potential of optimizing explainable models with...

    Go to contribution page
  12. Jannis Lübsen (TUHH/DESY)
    2/6/24, 9:50 AM
    Student Talk

    Safety guarantees for Gaussian processes require the assumption that the true hyperparameters are known. However, this assumption usually does not hold in practice. In this talk, a method is introduced to overcome this issue by estimating confidence intervals of the hyperparameters from their posterior distributions. Finally, it can be shown that, with appropriate scaling, safety can be robustly...

    Go to contribution page
  13. Sabrina Pochaba
    2/6/24, 10:10 AM
    Student Talk

    Reinforcement Learning (RL) is a rising subfield of Machine Learning (ML). In particular, Multi-Agent RL (MARL), where more than one agent interacts with an environment by learning to solve a task, can model many real-world problems. Unfortunately, the multi-agent case adds further difficulties to the already challenging field of Reinforcement Learning, such as scalability issues, non-stationarity or...

    Go to contribution page
  14. Alexander Schütt (Helmholtz-Zentrum Berlin)
    2/7/24, 9:00 AM
    Student Talk

    Synchrotron light source storage rings aim to maintain a continuous beam current without observable beam motion during injection. One element that paves the way to this target is the non-linear kicker (NLK). The field distribution it generates poses challenges for optimising the topping-up operation.

    Within this study, a reinforcement learning agent was developed and trained to optimise the...

    Go to contribution page
  15. Juan Montoya Bayardo
    2/7/24, 9:20 AM
    Student Talk

    The Sonobot Unmanned Surface Vehicle (USV), developed by EvoLogics, is a system platform tailored for hydrographic surveying in inland waters. Despite its integrated GPS and autopilot system for autonomous mission execution, the Sonobot lacks a collision avoidance system, necessitating constant operator monitoring and significantly limiting its autonomy.

    Recognizing the untapped potential of...

    Go to contribution page
  16. Antonio Manjavacas Lucas (University of Granada)
    2/7/24, 9:40 AM
    Student Talk

    As a critical radiological facility, the International Fusion Materials Irradiation Facility - DEMO Oriented Neutron Source (IFMIF-DONES) will implement effective measures to ensure the safety of its personnel and the environment. To enable the proper implementation of these measures, the ISO 17873 standard has been adopted throughout the design process of the facility. The proposed dynamic...

    Go to contribution page
  17. Jonathan Edelen (RadiaSoft LLC)
    2/7/24, 11:00 AM
    Contributed Talk

    RadiaSoft is developing machine learning methods to improve the operation and control of industrial accelerators. Because industrial systems typically suffer from a lack of instrumentation and a noisier environment, advancements in control methods are critical for optimizing their performance. In particular, our recent work has focused on the development of pulse-to-pulse feedback algorithms...

    Go to contribution page
  18. Niky Bruchon (CERN)
    2/7/24, 11:30 AM
    Contributed Talk

    Despite the spread of Reinforcement Learning (RL) applications for optimizing the performance of particle accelerators, this approach is not always the best choice. Indeed, not all problems are suited to being solved via RL. Before diving into such techniques, a good knowledge of the problem, the available resources, and the existing solutions is recommended. An example of the complexities...

    Go to contribution page
  19. Luca Scomparin
    2/7/24, 1:00 PM
    Contributed Talk

    Reinforcement Learning (RL) has been successfully applied to a wide range of problems. When the environment to control does not exhibit stringent real-time constraints, currently available techniques and computational infrastructures are sufficient. At particle accelerators, however, it is often possible to encounter stringent requirements on the time necessary for an action to be chosen, that...

    Go to contribution page
  20. Daniel Ratner (SLAC)
    2/7/24, 1:30 PM
    Contributed Talk
  21. Catherine Laflamme (Fraunhofer Austria Research GmbH)
    2/7/24, 2:00 PM
    Contributed Talk

    Reinforcement learning (RL), a subfield of machine learning, has gained recognition for its astonishing success in complex games; however, it has yet to show similar success in more real-world scenarios. In principle, RL's ability to generalise from past experience, act in real time, and remain resilient to new states makes it particularly attractive as robust decision-making support for...

    Go to contribution page
  22. Michael Schenk (CERN)
    2/7/24, 2:30 PM
    Contributed Talk

    Free energy-based reinforcement learning (FERL) using clamped quantum Boltzmann machines (QBM) has demonstrated remarkable improvements in learning efficiency, surpassing classical Q-learning algorithms by orders of magnitude. This work extends the FERL approach to multi-dimensional optimisation problems and eliminates the restriction to discrete action-space environments, opening doors for a...

    Go to contribution page
  23. Antonio Manjavacas (University of Granada)
    Poster

    As a critical radiological facility, the International Fusion Materials Irradiation Facility - DEMO Oriented Neutron Source (IFMIF-DONES) will implement effective measures to ensure the safety of its personnel and the environment. To enable the proper implementation of these measures, the ISO 17873 standard has been adopted throughout the design process of the facility. The proposed dynamic...

    Go to contribution page
  24. Chenran Xu (IBPT)
    Poster

    In recent work, it has been shown that reinforcement learning (RL) is capable of outperforming existing methods on accelerator tuning tasks. However, RL algorithms are difficult and time-consuming to train, and currently need to be retrained for every single task. This makes fast deployment in operation difficult and hinders collaborative efforts in this research area. At the same time, modern...

    Go to contribution page
  25. Jan Kaiser (DESY), Chenran Xu (IBPT)
    Poster

    The optimisation and control of particle accelerators present significant challenges due to the limited availability of beam time, high computational costs, and the complexity of the underlying physics. Machine learning has emerged as a powerful tool to address these challenges, but its application is hindered by the scarcity of high-quality data and the computational intensity of traditional...

    Go to contribution page
  26. Sabrina Appel (GSI)
    Poster

    In accelerator labs like GSI/FAIR, automating complex systems is key for maximising physics experiment time. This study explores the application of a data-driven model predictive control (MPC) to refine the multi-turn injection (MTI) process into the SIS18 synchrotron, departing from conventional numerical optimisation methods. MPC is distinguished by its reduced number of optimisation steps...

    Go to contribution page
  27. Jan Kaiser (DESY), Chenran Xu (IBPT)
    Poster

    In the pursuit of optimising particle accelerators, the choice of method for autonomous tuning is critical for enhancing performance and operational efficiency. This study delves into comparing deep reinforcement learning-trained optimisers (RLO) and Bayesian optimisation (BO) for this purpose, motivated by the need to address the complex, dynamic nature of accelerators. Through simulation and...

    Go to contribution page
  28. Dr Leander Grech (University of Malta)
    Poster

    Noisy intermediate-scale quantum (NISQ) computers work by applying a set of quantum gates to an initial ground state, to transform it into a final state that represents the solution to complex computational problems, such as molecular energy evaluation or optimising for the shortest routes in the travelling salesman problem. The effective realisation of NISQ computers requires the...

    Go to contribution page
  29. Simon Hirlaender (PLUS University Salzburg), Sabrina Pochaba
    Poster

    In typical reinforcement learning applications for accelerators, system dynamics often vary, leading to decreased performance in trained agents. In certain scenarios, this performance degradation is severe, necessitating retraining. However, employing meta-reinforcement learning in conjunction with an appropriate simulation can enable an agent to rapidly adapt to environmental changes. This...

    Go to contribution page
  30. Simon Hirlaender (PLUS University Salzburg)
    Poster

    Reinforcement Learning (RL) is emerging as a valuable method for controlling and optimizing particle accelerators, learning through direct experience without a pre-existing model. However, its low sample efficiency limits its application in real-world scenarios. This paper introduces a model-based RL approach using Gaussian processes to address this efficiency challenge. The proposed RL agent...

    Go to contribution page
  31. Jonathan Edelen (RadiaSoft LLC)
    Poster

    RadiaSoft has been developing machine learning (ML) methods for automating processes within the accelerator landscape for the past five years. One critical area of this work has been the full automation of sample alignment at neutron and x-ray beamlines to ensure both high quality experimental data and efficient use of operator hours. Historically, sample alignment has been a manual or a...

    Go to contribution page
  32. Nikola Milosevic (Max Planck Institute for Human Cognitive and Brain Sciences)
    Student Talk

    Reinforcement Learning (RL) has become a cornerstone of machine learning, showcasing remarkable success in addressing real-world control problems and providing insights into cognitive processes in the brain. However, navigating the intricacies of modern RL proves challenging due to its numerous moving parts, escalating agent complexity, and the application of deep learning in a non-i.i.d....

    Go to contribution page
  33. Dr Andrea Santamaria Garcia (KIT)
    Poster

    Reinforcement Learning (RL) is a unique learning paradigm that is particularly well-suited to tackle complex control tasks, can deal with delayed consequences, and learns from experience without an explicit model of the dynamics of the problem. These properties make RL methods extremely promising for applications in particle accelerators, where the dynamically evolving conditions of both the...

    Go to contribution page
  34. Jan Kaiser (DESY)
    Poster

    In the quest to harness the full potential of particle accelerators for scientific research, the need for precision and efficiency in their operation is paramount. Traditional tuning methods, while effective, fall short in optimising performance swiftly and accurately, leading to underutilisation of valuable beam time. This study applies deep reinforcement learning to autonomously tune...

    Go to contribution page