Conveners
Talks: RL for particle accelerators I
- Georg Hoffstaetter (Cornell Univ.)
Talks: RL for particle accelerators II
- Auralee Edelen (SLAC)
Talks: General applications for particle accelerators
- Verena Kain (CERN)
Talks: General applications of RL
- Jason St. John (Fermilab)
- Jonathan Edelen (RadiaSoft LLC) | 4/3/25, 2:00 PM | Talk
For more than half a decade, RadiaSoft has developed machine learning (ML) solutions to problems of immediate, practical interest in particle accelerator operations. These solutions include machine vision through convolutional neural networks for automating neutron scattering experiments and several classes of autoencoder networks for de-noising signals from beam position monitors and...
Go to contribution page
- Jason St. John (Fermilab) | 4/3/25, 2:20 PM | Poster + Talk
We present design considerations and challenges for the fast machine learning component of a third-order resonant beam extraction regulation system being commissioned to deliver steady beam rates to the mu2e experiment at Fermilab. Dedicated quadrupoles drive the tune toward the 29/3 resonance each spill, extracting beam at kV multiwire septa. The overall Spill Regulation System consists of...
Go to contribution page
- Adrian Menor de Onate (CERN) | 4/3/25, 2:40 PM | Poster
The slow extracted beams at the CERN Super Proton Synchrotron (SPS) are transported over transfer lines several hundred metres long to three targets in the North Area Experimental Hall. The experiments require eliminating intensity fluctuations over the roughly 5 s particle spill, and hence the extracted beams must be debunched. In this environment, secondary emission monitors (SEMs) have to replace the...
Go to contribution page
- Georg Hoffstaetter (Cornell Univ.) | 4/3/25, 3:30 PM | Poster + Talk
In BNL’s Booster, the beam bunches can be split into two or three smaller bunches to reduce their space-charge forces. They are then merged back after acceleration in the Alternating Gradient Synchrotron (AGS). This acceleration with decreased space-charge forces can reduce the final emittance, increasing the luminosity in RHIC and improving proton polarization. Parts of this procedure have...
Go to contribution page
- Mr Borja Rodriguez Mateos (CERN) | 4/3/25, 3:50 PM | Talk
Aging of the stripper foil and unexpected machine shutdowns are the primary causes for reduction of the injected intensity from CERN’s Linac3 into the Low Energy Ion Ring (LEIR). As a result, the set of optimal control parameters that maximizes beam intensity in the ring tends to drift, requiring daily adjustments to the machine control settings. This paper explores the design of a...
Go to contribution page
- Dr Penny Madysa (GSI Helmholtzzentrum für Schwerionenforschung) | 4/4/25, 9:00 AM | Talk
The complexity of the GSI/FAIR accelerator facility demands a high level of automation in order to maximize time for physics experiments. Accelerator laboratories world-wide are exploring a variety of techniques to achieve this, from classical optimization to reinforcement learning.
Geoff, the Generic Optimization Framework & Frontend, is an open-source framework that harmonizes access to...
Go to contribution page
- Matthew Schwab (FSU Jena) | 4/4/25, 9:20 AM | Poster + Talk
Manual alignment of optical systems can be time-consuming, and the achieved performance varies depending on the operator doing the alignment. A reinforcement learning approach using the PPO algorithm was used to train agents to align simple two-mirror optical setups, as well as a full regenerative laser amplifier. The goal is to produce agents that can reproducibly align the setup...
Go to contribution page
- Antonin Sulc (LBNL) | 4/4/25, 9:40 AM | Talk
Recent advances in fine-tuning large language models (LLMs) with reinforcement learning (RL) techniques have demonstrated their ability to generalize, unlike the often-used Supervised Fine-Tuning (SFT).
Many aspects of particle accelerators, such as beam parameters, have well-defined objectives, making them ideal candidates for RL-driven optimization. In this work, we explore the...
Go to contribution page
- Hayg Guler (IJCLab) | 4/4/25, 10:00 AM
- Kai Dresia | 4/4/25, 10:50 AM | Poster + Talk
Deep reinforcement learning (DRL) has demonstrated great potential for controlling and regulating complex real-world systems such as tokamak fusion reactors and particle accelerators. Another promising application is DRL-based control of liquid-propellant rocket engines (LPREs), which has been a focus of research at the German Aerospace Center (DLR) for the past six...
Go to contribution page
- Sabrina Pochaba | 4/4/25, 11:10 AM | Poster + Talk
Reinforcement learning (RL) is gaining importance in the field of machine learning (ML). One subfield of RL is multi-agent RL (MARL), in which several agents learn to solve a problem simultaneously rather than a single agent. For this reason, the approach suits many real-world problems. Since learning in a multi-agent scenario is highly complex, further conflicts can...
Go to contribution page
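As a generic illustration of the multi-agent setting mentioned in the abstract above (not the contribution's own method), a minimal sketch of the simplest MARL baseline, independent Q-learning, in a two-action coordination game; all names and hyperparameters here are illustrative:

```python
import random

def train_independent_q(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Two independent Q-learners in a 2-action coordination game:
    both agents receive reward 1 when they pick the same action, 0 otherwise.
    Each agent treats the other as part of the environment (independent
    Q-learning), which is the simplest multi-agent RL baseline."""
    rng = random.Random(seed)
    q = [[0.0, 0.0], [0.0, 0.0]]  # q[agent][action]
    for _ in range(episodes):
        acts = []
        for a in range(2):
            if rng.random() < eps:          # explore
                acts.append(rng.randrange(2))
            else:                           # exploit current estimate
                acts.append(0 if q[a][0] >= q[a][1] else 1)
        r = 1.0 if acts[0] == acts[1] else 0.0
        for a in range(2):                  # single-state game: plain TD update
            q[a][acts[a]] += alpha * (r - q[a][acts[a]])
    return q

q = train_independent_q()
# both agents should converge to preferring the same action
best = [max(range(2), key=lambda act: q[a][act]) for a in range(2)]
```

Even in this trivial game the abstract's point is visible: each agent's reward depends on the other agent's (changing) behaviour, so the environment is non-stationary from each learner's perspective, which is the source of the complexity and conflicts MARL methods must handle.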
- Leander Grech (University of Malta) | 4/4/25, 11:30 AM | Poster + Talk
Noisy intermediate-scale quantum (NISQ) computers promise a new paradigm for what is possible in information processing, with the ability to tackle complex and otherwise intractable computational challenges, by harnessing the massive intrinsic parallelism of qubits. Central to realising the potential of quantum computing are perfect entangling (PE) two-qubit gates, which serve as a critical...
Go to contribution page
- Auralee Edelen (SLAC) | 4/4/25, 11:50 AM | Talk
Beams at LCLS require precise shaping in position-momentum phase space to meet the needs of different users. In particular, the shape of the longitudinal phase space needs to be customized, while ensuring the transverse phase space meets the requirements for Free Electron Laser (FEL) lasing. We present results of using RL for longitudinal phase space shaping, and compare these with approaches...
Go to contribution page