Contribution List

39 out of 39 displayed
  1. 4/2/25, 9:00 AM
  2. Andrea Santamaria Garcia (University of Liverpool)
    4/2/25, 9:30 AM
  3. 4/3/25, 9:00 AM
  4. Holger Schlarb (DESY)
    4/3/25, 9:05 AM
  5. Phillip Neumann (DESY)
    4/3/25, 9:15 AM
  6. Heike Hufnagel Martinez (DESY)
    4/3/25, 9:25 AM
  7. Jonathan Edelen (RadiaSoft LLC)
    4/3/25, 2:00 PM
    Talk

    For more than half a decade, RadiaSoft has developed machine learning (ML) solutions to problems of immediate, practical interest in particle accelerator operations. These solutions include machine vision through convolutional neural networks for automating neutron scattering experiments and several classes of autoencoder networks for de-noising signals from beam position monitors and...

  8. Jason St. John (Fermilab)
    4/3/25, 2:20 PM
    Poster + Talk

    We present design considerations and challenges for the fast machine learning component of a third-order resonant beam extraction regulation system being commissioned to deliver steady beam rates to the mu2e experiment at Fermilab. Dedicated quadrupoles drive the tune toward the 29/3 resonance each spill, extracting beam at kV multiwire septa. The overall Spill Regulation System consists of...

  9. Adrian Menor de Onate (CERN)
    4/3/25, 2:40 PM
    Poster

    The slow extracted beams at the CERN Super Proton Synchrotron (SPS) are transported over transfer lines several hundred metres long to three targets in the North Area Experimental Hall. The experiments require intensity fluctuations over the roughly 5 s particle spill to be eliminated, and hence the extracted beams to be debunched. In this environment, secondary emission monitors (SEMs) have to replace the...

  10. Georg Hoffstaetter (Cornell Univ.)
    4/3/25, 3:30 PM
    Poster + Talk

    In BNL’s Booster, the beam bunches can be split into two or three smaller bunches to reduce their space-charge forces. They are then merged back after acceleration in the Alternating Gradient Synchrotron (AGS). This acceleration with decreased space-charge forces can reduce the final emittance, increasing the luminosity in RHIC and improving proton polarization. Parts of this procedure have...

  11. Mr Borja Rodriguez Mateos (CERN)
    4/3/25, 3:50 PM
    Talk

    Aging of the stripper foil and unexpected machine shutdowns are the primary causes for reduction of the injected intensity from CERN’s Linac3 into the Low Energy Ion Ring (LEIR). As a result, the set of optimal control parameters that maximizes beam intensity in the ring tends to drift, requiring daily adjustments to the machine control settings. This paper explores the design of a...

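As a rough sketch of the kind of numerical optimisation such a system needs, the toy loop below re-finds a drifting optimum by random-search hill climbing. The intensity function, settings, and drift values are invented for illustration and are not a model of Linac3 or LEIR.

```python
import random

def injected_intensity(settings, drift):
    """Toy stand-in for the measured injected intensity: a smooth peak whose
    optimum location 'drift' moves as the stripper foil ages. Purely
    illustrative -- not a model of the real machine."""
    return 1.0 - sum((s - d) ** 2 for s, d in zip(settings, drift))

def hill_climb(settings, drift, rng, steps=200, sigma=0.05):
    """Random-search hill climbing: keep a perturbation only if it improves
    the measured intensity. A generic stand-in for the control optimiser."""
    best = injected_intensity(settings, drift)
    for _ in range(steps):
        trial = [s + rng.gauss(0.0, sigma) for s in settings]
        val = injected_intensity(trial, drift)
        if val > best:
            settings, best = trial, val
    return settings, best

rng = random.Random(42)
settings = [0.0, 0.0]   # yesterday's control settings
drift = [0.3, -0.2]     # today's (unknown) optimum, shifted by foil aging
settings, best = hill_climb(settings, drift, rng)
```

Re-running this loop each day would track the drifting optimum without manual retuning.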
  12. Dr Penny Madysa (GSI Helmholtzzentrum für Schwerionenforschung)
    4/4/25, 9:00 AM
    Talk

    The complexity of the GSI/FAIR accelerator facility demands a high level of automation in order to maximize time for physics experiments. Accelerator laboratories world-wide are exploring a variety of techniques to achieve this, from classical optimization to reinforcement learning.

    Geoff, the Generic Optimization Framework & Frontend, is an open-source framework that harmonizes access to...

  13. Matthew Schwab (FSU Jena)
    4/4/25, 9:20 AM
    Poster + Talk

    Manual alignment of optical systems can be time-consuming, and the achieved performance of the system varies depending on the operator doing the alignment. A reinforcement learning approach using the PPO algorithm was used to train agents to align simple two-mirror optical setups, as well as a full regenerative laser amplifier. The goal is to produce agents that can reproducibly align the setup...

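To make the problem framing concrete, here is a minimal Gym-style environment for a two-mirror alignment task. The class, ranges, and tolerance are hypothetical, not the FSU Jena setup; a PPO agent would learn the corrective policy that a hand-written proportional rule plays here.

```python
import random

class TwoMirrorAlignEnv:
    """Toy two-mirror alignment environment (hypothetical numbers).

    State: angular offsets of two mirrors from their ideal positions.
    Action: a small corrective nudge for each mirror.
    Reward: negative total misalignment, so perfect alignment scores 0.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.state = None

    def reset(self):
        # Start with random misalignments in [-1, 1] (arbitrary units).
        self.state = [self.rng.uniform(-1.0, 1.0) for _ in range(2)]
        return list(self.state)

    def step(self, action):
        # Apply the corrective nudges and clip to the actuator range.
        self.state = [max(-1.0, min(1.0, s + a))
                      for s, a in zip(self.state, action)]
        reward = -sum(abs(s) for s in self.state)
        done = reward > -1e-3  # aligned to within tolerance
        return list(self.state), reward, done

# A trivial proportional "policy" that counteracts the observed offset;
# an RL agent would have to learn such a mapping from reward alone.
env = TwoMirrorAlignEnv()
obs = env.reset()
for _ in range(50):
    obs, reward, done = env.step([-0.5 * s for s in obs])
    if done:
        break
```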
  14. Antonin Sulc (LBNL)
    4/4/25, 9:40 AM
    Talk

    Recent advances in fine-tuning large language models (LLMs) with reinforcement learning (RL) techniques have demonstrated their ability to generalize, unlike the often-used Supervised Fine-Tuning (SFT).
    Many aspects of particle accelerators, such as beam parameters, have well-defined objectives, making them ideal candidates for RL-driven optimization.

    In this work, we explore the...

  15. Hayg Guler (IJCLab)
    4/4/25, 10:00 AM
  16. Kai Dresia
    4/4/25, 10:50 AM
    Poster + Talk

    Deep reinforcement learning (DRL) has demonstrated great potential for controlling and regulating complex real-world systems such as nuclear fusion reactors (e.g. tokamaks) and particle accelerators. Another promising application is the DRL-based control of liquid-propellant rocket engines (LPREs), which have been a focus of research at the German Aerospace Center (DLR) for the past six...

  17. Sabrina Pochaba
    4/4/25, 11:10 AM
    Poster + Talk

    Reinforcement learning (RL) is gaining more and more importance in the field of machine learning (ML). One subfield of RL is multi-agent RL (MARL), in which several agents, rather than a single one, learn to solve a problem simultaneously. This makes the approach suitable for many real-world problems.
    Since learning in a multi-agent scenario is highly complex, further conflicts can...

  18. Leander Grech (University of Malta)
    4/4/25, 11:30 AM
    Poster + Talk

    Noisy intermediate-scale quantum (NISQ) computers promise a new paradigm for what is possible in information processing, with the ability to tackle complex and otherwise intractable computational challenges, by harnessing the massive intrinsic parallelism of qubits. Central to realising the potential of quantum computing are perfect entangling (PE) two-qubit gates, which serve as a critical...

  19. Auralee Edelen (SLAC)
    4/4/25, 11:50 AM
    Talk

    Beams at LCLS require precise shaping in position-momentum phase space to meet the needs of different users. In particular, the shape of the longitudinal phase space needs to be customized, while ensuring the transverse phase space meets the requirements for Free Electron Laser (FEL) lasing. We present results of using RL for longitudinal phase space shaping, and compare these with approaches...

  20. Hayg Guler (IJCLab)
    Talk

    6D Phase Space Beam Modelling Using Point Cloud Approach

  21. Ibon Bustinduy (ESS-BILBAO), Juan Luis Muñoz (ESS-Bilbao), Konrad Altenmüller (ESS-Bilbao)

    The ESS-Bilbao injector is a multipurpose machine that will accelerate protons up to 3 MeV. It will be used to produce neutrons by means of a beryllium target. The first part of the injector has been running smoothly for more than a decade. It consists of a proton source of the Electron Cyclotron Resonance (ECR) type that possesses unique characteristics. The subsequent Low Energy Transport...

  22. Joel Axel Wulff (CERN)

    Achieving precise bunch spacing in the Large Hadron Collider (LHC) relies on advanced RF manipulations in the Proton Synchrotron (PS). Multiple RF systems covering a large range of revolution harmonics (7 to 21, 42, 84) allow performing bunch splitting manipulations. To minimize bunch-by-bunch variations in intensity, longitudinal emittance, and shape, precise tuning of relative RF amplitude...

  23. Jan Kaiser (DESY), Chenran Xu (IBPT)

    Machine learning has emerged as a powerful solution to the modern challenges in accelerator physics. However, the limited availability of beam time and the high computational cost of simulation codes pose significant hurdles in generating the necessary data for training state-of-the-art machine learning models. Furthermore, optimisation methods can be used to tune accelerators and perform...

  24. Mahule Roy (National Institute of Technology Karnataka Surathkal)

    Machine unlearning is an emerging field in machine learning that focuses on efficiently removing the influence of specific data from a trained model. This capability is critical in scenarios requiring compliance with data privacy regulations or when erroneous data needs to be removed without retraining from scratch. In this study, I explore the importance of machine unlearning as a way to...

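A minimal illustration of exact unlearning, assuming a model simple enough to be kept as sufficient statistics: a 1-D least-squares fit from which a single point's influence can be subtracted without retraining from scratch. This is a generic textbook construction, not the specific method of the contribution.

```python
class UnlearnableLinearModel:
    """1-D least-squares model y ~ a*x + b stored as running sufficient
    statistics, so a training point's influence can be removed exactly
    without retraining from scratch."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def forget(self, x, y):
        # Exact unlearning: subtract the point's contribution to every statistic.
        self.n -= 1
        self.sx -= x; self.sy -= y
        self.sxx -= x * x; self.sxy -= x * y

    def fit(self):
        # Ordinary least squares from the accumulated statistics.
        denom = self.n * self.sxx - self.sx ** 2
        a = (self.n * self.sxy - self.sx * self.sy) / denom
        b = (self.sy - a * self.sx) / self.n
        return a, b

model = UnlearnableLinearModel()
for x, y in [(0, 0.1), (1, 2.0), (2, 3.9), (3, 9.0)]:  # last point is erroneous
    model.add(x, y)
model.forget(3, 9.0)   # remove the bad point's influence exactly
a, b = model.fit()     # fit over the three remaining points
```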
  25. Olga Mironova (PLUS University Salzburg)

    This study explores advanced strategies for optimal control in systems with delayed consequences, using beam steering in the AWAKE electron line at CERN as a benchmark. We formulate the task as a constrained optimization problem within a continuous, primarily linear Markov Decision Process (MDP), incorporating measured system parameters and realistic termination criteria. A wide range of...

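The problem setup can be sketched as a small linear MDP. The response matrix, noise level, and termination tolerance below are invented for illustration; the hand-coded inverse-response policy merely stands in for the learned or optimised controllers such a study would compare.

```python
import random

# Toy linear beam-steering MDP: the state is the beam offset at two BPMs,
# an action is a pair of corrector kicks, and the next state responds
# linearly through a (made-up) response matrix plus measurement noise.
RESPONSE = [[1.0, 0.4],
            [0.3, 1.0]]  # hypothetical corrector-to-BPM response

def step(state, kicks, rng):
    new = [s + sum(row[j] * kicks[j] for j in range(2)) + rng.gauss(0, 0.001)
           for s, row in zip(state, RESPONSE)]
    rms = (sum(x * x for x in new) / len(new)) ** 0.5
    done = rms < 0.01          # termination criterion: beam centred
    return new, -rms, done     # reward = negative RMS offset

def inverse_policy(state):
    # Ideal controller for this toy model: kicks = -R^{-1} * state.
    det = RESPONSE[0][0] * RESPONSE[1][1] - RESPONSE[0][1] * RESPONSE[1][0]
    inv = [[RESPONSE[1][1] / det, -RESPONSE[0][1] / det],
           [-RESPONSE[1][0] / det, RESPONSE[0][0] / det]]
    return [-(inv[i][0] * state[0] + inv[i][1] * state[1]) for i in range(2)]

rng = random.Random(1)
state = [0.5, -0.3]            # initial BPM readings
for _ in range(5):
    state, reward, done = step(state, inverse_policy(state), rng)
    if done:
        break
```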
  26. Evangelos Matzoukas

    Tuning injectors is a challenging task for the operation of accelerator facilities and synchrotron light sources, particularly during the commissioning phase. Efficient tuning of the transfer line is essential for ensuring optimal beam transport and injection efficiency. This process is further complicated by challenges such as beam misalignment in quadrupole magnets, which can degrade beam...

  27. Chenran Xu (IBPT)

    Reinforcement learning (RL) is a promising approach for the online control of complex, real-world systems, with recent success demonstrated in applications such as particle accelerator control. However, model-free RL algorithms often suffer from sample inefficiency, making training infeasible without access to high-fidelity simulations or extensive measurement data. This limitation poses a...

  28. Christian Hespe (DESY)

    At the European XFEL, the main beam dump serves to absorb all electron bunches that are not required for the downstream scientific experiments. Due to the large beam power of the accelerator, controlling the dump temperature is a crucial component in its operation. Currently, this is done in an open-loop feed-forward manner. However, due to unforeseen drifts and changes in the setup of the...

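As a sketch of what closing the loop could look like, the snippet below runs a discrete PI controller on a toy first-order thermal model. All plant constants, gains, and the setpoint are invented; this is not a model of the European XFEL dump.

```python
def simulate_pi(setpoint=40.0, steps=300, kp=2.0, ki=0.4, dt=1.0):
    """Discrete PI loop regulating a toy first-order thermal plant.

    The integral term removes the steady-state error that a pure
    feed-forward scheme leaves behind when the heat load drifts.
    """
    temp, integral = 20.0, 0.0   # ambient start, empty integrator
    beam_power = 5.0             # heat load the feed-forward might misjudge
    for _ in range(steps):
        error = setpoint - temp
        integral += error * dt
        u = kp * error + ki * integral   # actuator: net heating (<0 = cooling)
        # First-order plant: beam heating, control action, relaxation to ambient.
        temp += dt * (beam_power + u - 0.1 * (temp - 20.0)) / 10.0
    return temp

final = simulate_pi()   # settles at the setpoint despite the unmodelled load
```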
  29. Jan Kaiser (DESY)

    The photon pulse intensity is one of the key performance metrics of Free Electron Laser (FEL) facilities and has a direct impact on their experimental yield. To date, FEL intensity tuning is a time-consuming manual task that requires expert human operators to have significant skill and experience. Autonomous tuning methods have been demonstrated to reduce setup times and improve the attained...

  30. André Dehne (HAW Hamburg)

    The integration of mobile autonomous robots in accelerators introduces potential risks to the facility itself, including collisions with critical components, cables, and infrastructure. Such incidents could compromise the functionality and safety of the accelerator, necessitating robust solutions to mitigate these risks. This paper explores how Reinforcement Learning (RL) can be leveraged to...

  31. Ferdinand Ferber (CERN)

    Classical, model-free Reinforcement Learning (RL) has achieved impressive results in areas where interactions with the environment are inexpensive, such as computer games or simulations. However, in many real-world applications, such as robotics or autonomous particle accelerators, interactions with the system are costly, which creates a need for sample-efficient RL algorithms. In addition,...

  32. Finn O'Shea (SLAC National Accelerator Laboratory)

    Reinforcement Learning methods typically require a large number of interactions with the environment to learn anything useful. This makes learning with sophisticated accelerator simulations difficult because of the total time required to train. On the other hand, learning with environments based on these accelerator codes is potentially very useful because they contain a lot of knowledge...

  33. Benjamin Halilovic

    In advanced accelerator facilities like the heavy-ion synchrotron SIS18 at GSI in Darmstadt, ensuring stable and efficient multi-turn injection is crucial for achieving high-intensity beams. However, conventional control methods often lack the adaptability needed to handle rapidly changing beam dynamics, leading to suboptimal performance. To address this limitation, a data-driven Gaussian...

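A minimal sketch of the Gaussian-process surrogate idea, assuming a 1-D control setting and an RBF kernel: fit a GP to a few measured efficiencies and pick the next setting by maximising the posterior mean (a full Bayesian-optimisation loop would also use the posterior variance). All numbers are illustrative, not SIS18 data.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf(a, b, length=0.5):
    return math.exp(-0.5 * ((a - b) / length) ** 2)

def gp_mean(x_train, y_train, x_query, noise=1e-6):
    """Posterior mean of a zero-mean GP with an RBF kernel: k*^T K^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(x_train)] for i, a in enumerate(x_train)]
    alpha = solve(K, y_train)
    return sum(rbf(x_query, a) * w for a, w in zip(x_train, alpha))

# Toy "injection efficiency" measurements at three control settings
x_obs = [0.0, 0.5, 1.0]
y_obs = [0.2, 0.9, 0.3]
# Pick the next setting to try by maximising the surrogate mean on a grid
grid = [i / 100 for i in range(101)]
best = max(grid, key=lambda x: gp_mean(x_obs, y_obs, x))
```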
  34. Christiane Ehrt (CFEL), Heike Hufnagel Martinez (DESY)
    Poster

    The increasing automation and the surging number and resolution of sensors in scientific experiments result in large, heterogeneous, and complex data collections. Data science is, therefore, a key technology in the modern natural sciences and materials science. Data-intensive research at the Science City Hamburg Bahrenfeld that centers around several large-scale user facilities for research in...

  35. Xueting Wu (USTC)

    The autonomous alignment and optimization of synchrotron beamlines pose significant challenges. Traditionally, manual alignment is a time-consuming and experience-dependent process, often requiring extensive diagnostic efforts and data collection. With the construction of the Hefei Advanced Light Facility (HALF) underway, the development of a virtual platform for beamlines will be an invaluable...

  36. Georg Schäfer

    Recent advances in reinforcement learning (RL) have shown great potential for managing complex systems in robotics, manufacturing, and beyond. However, translating RL successes from controlled experiments to real-world scenarios remains a significant challenge due to the absence of a standardized engineering pipeline that prioritizes thorough problem formulation. While data science and control...

  37. Simon Hirlaender (PLUS University Salzburg)

    This paper investigates the automation of particle accelerator control using few-shot reinforcement learning (RL), a promising approach to rapidly adapt control strategies with minimal training data. With the advent of advanced diagnostic tools and increasingly complex accelerator schedules, ensuring reliable performance has become critical. We focus on the physics simulation of the AWAKE...

  38. Jan Kaiser (DESY), Chenran Xu (IBPT)

    Reinforcement learning (RL) has been successfully applied to various online tuning tasks, often outperforming traditional optimization methods. However, model-free RL algorithms typically require a high number of samples, with training processes often involving millions of interactions. As this time-consuming process needs to be repeated to train RL-based controllers for each new task, it...

  39. Hannes Voß (University of Applied Sciences Hamburg)

    The use of autonomous mobile robots in dynamic and uncertain environments requires adaptive and robust decision-making. Synchronized digital twins — real-time virtual counterparts of physical systems — offer a promising approach to improving planning, increasing robustness, and enhancing adaptability. However, developing such systems presents significant challenges, including balancing...
