- 4/2/25, 9:00 AM
- Andrea Santamaria Garcia (University of Liverpool) – 4/2/25, 9:30 AM
- 4/3/25, 9:00 AM
- Holger Schlarb (DESY) – 4/3/25, 9:05 AM
- Phillip Neumann (DESY) – 4/3/25, 9:15 AM
- Heike Hufnagel Martinez (DESY) – 4/3/25, 9:25 AM
- Jonathan Edelen (RadiaSoft LLC) – 4/3/25, 2:00 PM – Talk
For more than half a decade, RadiaSoft has developed machine learning (ML) solutions to problems of immediate, practical interest in particle accelerator operations. These solutions include machine vision through convolutional neural networks for automating neutron scattering experiments and several classes of autoencoder networks for de-noising signals from beam position monitors and...
Go to contribution page
- Jason St. John (Fermilab) – 4/3/25, 2:20 PM – Poster + Talk
We present design considerations and challenges for the fast machine learning component of a third-order resonant beam extraction regulation system being commissioned to deliver steady beam rates to the mu2e experiment at Fermilab. Dedicated quadrupoles drive the tune toward the 29/3 resonance each spill, extracting beam at kV multiwire septa. The overall Spill Regulation System consists of...
Go to contribution page
- Adrian Menor de Onate (CERN) – 4/3/25, 2:40 PM – Poster
The slow extracted beams at the CERN Super Proton Synchrotron (SPS) are transported over transfer lines several hundred metres long to three targets in the North Area Experimental Hall. The experiments require intensity fluctuations over the roughly 5 s particle spill to be eliminated, and hence the extracted beams to be debunched. In this environment, secondary emission monitors (SEMs) have to replace the...
Go to contribution page
- Georg Hoffstaetter (Cornell Univ.) – 4/3/25, 3:30 PM – Poster + Talk
In BNL’s Booster, the beam bunches can be split into two or three smaller bunches to reduce their space-charge forces. They are then merged back after acceleration in the Alternating Gradient Synchrotron (AGS). This acceleration with decreased space-charge forces can reduce the final emittance, increasing the luminosity in RHIC and improving proton polarization. Parts of this procedure have...
Go to contribution page
- Mr Borja Rodriguez Mateos (CERN) – 4/3/25, 3:50 PM – Talk
Aging of the stripper foil and unexpected machine shutdowns are the primary causes for reduction of the injected intensity from CERN’s Linac3 into the Low Energy Ion Ring (LEIR). As a result, the set of optimal control parameters that maximizes beam intensity in the ring tends to drift, requiring daily adjustments to the machine control settings. This paper explores the design of a...
Go to contribution page
- Dr Penny Madysa (GSI Helmholtzzentrum für Schwerionenforschung) – 4/4/25, 9:00 AM – Talk
The complexity of the GSI/FAIR accelerator facility demands a high level of automation in order to maximize time for physics experiments. Accelerator laboratories world-wide are exploring a variety of techniques to achieve this, from classical optimization to reinforcement learning.
Geoff, the Generic Optimization Framework & Frontend, is an open-source framework that harmonizes access to...
Go to contribution page
- Matthew Schwab (FSU Jena) – 4/4/25, 9:20 AM – Poster + Talk
Manual alignment of optical systems can be time-consuming, and the achieved performance of the system varies depending on the operator doing the alignment. A reinforcement learning approach using the PPO algorithm was used to train agents to align simple two-mirror optical setups, as well as a full regenerative laser amplifier. The goal is to produce agents that can reproducibly align the setup...
Go to contribution page
- Antonin Sulc (LBNL) – 4/4/25, 9:40 AM – Talk
Recent advances in fine-tuning large language models (LLMs) with reinforcement learning (RL) techniques have demonstrated their ability to generalize, unlike the often-used Supervised Fine-Tuning (SFT).
Many aspects of particle accelerators, such as beam parameters, have well-defined objectives, making them ideal candidates for RL-driven optimization. In this work, we explore the...
Go to contribution page
- Hayg Guler (IJCLab) – 4/4/25, 10:00 AM
- Kai Dresia – 4/4/25, 10:50 AM – Poster + Talk
Deep reinforcement learning (DRL) has demonstrated great potential for controlling and regulating complex real-world systems such as tokamak fusion reactors and particle accelerators. Another promising application is the DRL-based control of liquid-propellant rocket engines (LPREs), which have been a focus of research at the German Aerospace Center (DLR) for the past six...
Go to contribution page
- Sabrina Pochaba – 4/4/25, 11:10 AM – Poster + Talk
Reinforcement learning (RL) is gaining more and more importance in the field of machine learning (ML). One subfield of RL is Multi-Agent RL (MARL), in which several agents, rather than a single one, learn to solve a problem simultaneously. For this reason, the approach is suitable for many real-world problems.
Since learning in a multi-agent scenario is highly complex, further conflicts can...
Go to contribution page
- Leander Grech (University of Malta) – 4/4/25, 11:30 AM – Poster + Talk
Noisy intermediate-scale quantum (NISQ) computers promise a new paradigm for what is possible in information processing, with the ability to tackle complex and otherwise intractable computational challenges, by harnessing the massive intrinsic parallelism of qubits. Central to realising the potential of quantum computing are perfect entangling (PE) two-qubit gates, which serve as a critical...
Go to contribution page
- Auralee Edelen (SLAC) – 4/4/25, 11:50 AM – Talk
Beams at LCLS require precise shaping in position-momentum phase space to meet the needs of different users. In particular, the shape of the longitudinal phase space needs to be customized, while ensuring the transverse phase space meets the requirements for Free Electron Laser (FEL) lasing. We present results of using RL for longitudinal phase space shaping, and compare these with approaches...
Go to contribution page
- Hayg Guler (IJCLab) – Talk
6D Phase Space Beam Modelling Using Point Cloud Approach
Go to contribution page
- Ibon Bustinduy (ESS-BILBAO), Juan Luis Muñoz (ESS-Bilbao), Konrad Altenmüller (ESS-Bilbao) – Poster
The ESS-Bilbao injector is a multipurpose machine that will accelerate protons up to 3 MeV. It will be used to produce neutrons by means of a beryllium target. The first part of the injector has been running smoothly for more than a decade. It consists of a proton source of the Electron Cyclotron Resonance (ECR) type that possesses unique characteristics. The subsequent Low Energy Transport...
Go to contribution page
- Joel Axel Wulff (CERN) – Poster
Achieving precise bunch spacing in the Large Hadron Collider (LHC) relies on advanced RF manipulations in the Proton Synchrotron (PS). Multiple RF systems covering a large range of revolution harmonics (7 to 21, 42, 84) allow performing bunch splitting manipulations. To minimize bunch-by-bunch variations in intensity, longitudinal emittance, and shape, precise tuning of relative RF amplitude...
Go to contribution page
- Cheetah – A High-speed Differentiable Beam Dynamics Simulation for Machine Learning Applications – Jan Kaiser (DESY), Chenran Xu (IBPT) – Poster
Machine learning has emerged as a powerful solution to the modern challenges in accelerator physics. However, the limited availability of beam time and the high computational cost of simulation codes pose significant hurdles in generating the necessary data for training state-of-the-art machine learning models. Furthermore, optimisation methods can be used to tune accelerators and perform...
Go to contribution page
- Mahule Roy (National Institute of Technology Karnataka Surathkal) – Poster
Machine unlearning is an emerging field in machine learning that focuses on efficiently removing the influence of specific data from a trained model. This capability is critical in scenarios requiring compliance with data privacy regulations or when erroneous data needs to be removed without retraining from scratch. In this study, I explore the importance of machine unlearning as a way to...
Go to contribution page
- Olga Mironova (PLUS University Salzburg) – Poster
This study explores advanced strategies for optimal control in systems with delayed consequences, using beam steering in the AWAKE electron line at CERN as a benchmark. We formulate the task as a constrained optimization problem within a continuous, primarily linear Markov Decision Process (MDP), incorporating measured system parameters and realistic termination criteria. A wide range of...
Go to contribution page
- Evangelos Matzoukas – Poster
Tuning injectors is a challenging task for the operation of accelerator facilities and synchrotron light sources, particularly during the commissioning phase. Efficient tuning of the transfer line is essential for ensuring optimal beam transport and injection efficiency. This process is further complicated by challenges such as beam misalignment in quadrupole magnets, which can degrade beam...
Go to contribution page
- Chenran Xu (IBPT) – Poster
Reinforcement learning (RL) is a promising approach for the online control of complex, real-world systems, with recent success demonstrated in applications such as particle accelerator control. However, model-free RL algorithms often suffer from sample inefficiency, making training infeasible without access to high-fidelity simulations or extensive measurement data. This limitation poses a...
Go to contribution page
- Christian Hespe (DESY) – Poster
At the European XFEL, the main beam dump serves to absorb all electron bunches that are not required for the downstream scientific experiments. Due to the large beam power of the accelerator, controlling the dump temperature is a crucial component in its operation. Currently, this is done in an open-loop feed-forward manner. However, due to unforeseen drifts and changes in the setup of the...
Go to contribution page
- Jan Kaiser (DESY) – Poster
The photon pulse intensity is one of the key performance metrics of Free Electron Laser (FEL) facilities and has a direct impact on their experimental yield. To date, FEL intensity tuning is a time-consuming manual task that requires expert human operators to have significant skill and experience. Autonomous tuning methods have been demonstrated to reduce setup times and improve the attained...
Go to contribution page
- André Dehne (HAW Hamburg) – Poster
The integration of mobile autonomous robots in accelerators introduces potential risks to the facility itself, including collisions with critical components, cables, and infrastructure. Such incidents could compromise the functionality and safety of the accelerator, necessitating robust solutions to mitigate these risks. This paper explores how Reinforcement Learning (RL) can be leveraged to...
Go to contribution page
- Ferdinand Ferber (CERN) – Poster
Classical, model-free Reinforcement Learning (RL) has achieved impressive results in areas where interactions with the environment are inexpensive, such as computer games or simulations. However, in many real-world applications, such as robotics or autonomous particle accelerators, interactions with the system are costly, which creates a need for sample-efficient RL algorithms. In addition,...
Go to contribution page
- Finn O'Shea (SLAC National Accelerator Laboratory) – Poster
Reinforcement Learning methods typically require a large number of interactions with the environment to learn anything useful. This makes learning with sophisticated accelerator simulations difficult because of the total time required to train. On the other hand, learning with environments based on these accelerator codes is potentially very useful because they contain a lot of knowledge...
Go to contribution page
- Benjamin Halilovic – Poster
In advanced accelerator facilities like the heavy-ion synchrotron SIS18 at GSI in Darmstadt, ensuring stable and efficient multi-turn injection is crucial for achieving high-intensity beams. However, conventional control methods often lack the adaptability needed to handle rapidly changing beam dynamics, leading to suboptimal performance. To address this limitation, a data-driven Gaussian...
Go to contribution page
- Christiane Ehrt (CFEL), Heike Hufnagel Martinez (DESY) – Poster
The increasing automation and the surging number and resolution of sensors in scientific experiments result in large, heterogeneous, and complex data collections. Data science is, therefore, a key technology in the modern natural and materials sciences. Data-intensive research at the Science City Hamburg Bahrenfeld that centers around several large-scale user facilities for research in...
Go to contribution page
- Xueting Wu (USTC) – Poster
The autonomous alignment and optimization of synchrotron beamlines pose significant challenges. Traditional manual alignment is a time-consuming and experience-dependent process, often requiring extensive diagnostic efforts and data collection. With the construction of the Hefei Advanced Light Facility (HALF) underway, the development of a virtual platform for beamlines will be an invaluable...
Go to contribution page
- Georg Schäfer – Poster
Recent advances in reinforcement learning (RL) have shown great potential for managing complex systems in robotics, manufacturing, and beyond. However, translating RL successes from controlled experiments to real-world scenarios remains a significant challenge due to the absence of a standardized engineering pipeline that prioritizes thorough problem formulation. While data science and control...
Go to contribution page
- Simon Hirlaender (PLUS University Salzburg) – Poster
This paper investigates the automation of particle accelerator control using few-shot reinforcement learning (RL), a promising approach to rapidly adapt control strategies with minimal training data. With the advent of advanced diagnostic tools and increasingly complex accelerator schedules, ensuring reliable performance has become critical. We focus on the physics simulation of the AWAKE...
Go to contribution page
- Jan Kaiser (DESY), Chenran Xu (IBPT) – Poster
Reinforcement learning (RL) has been successfully applied to various online tuning tasks, often outperforming traditional optimization methods. However, model-free RL algorithms typically require a high number of samples, with training processes often involving millions of interactions. As this time-consuming process needs to be repeated to train RL-based controllers for each new task, it...
Go to contribution page
- Hannes Voß (University of Applied Sciences Hamburg) – Poster
The use of autonomous mobile robots in dynamic and uncertain environments requires adaptive and robust decision-making. Synchronized digital twins — real-time virtual counterparts of physical systems — offer a promising approach to improving planning, increasing robustness, and enhancing adaptability. However, developing such systems presents significant challenges, including balancing...
Go to contribution page