Harnessing the power of gradient-based simulations for multi-objective optimization in particle accelerators
Particle accelerator operation requires simultaneous optimization of multiple objectives. Multi-objective optimization (MOO) is particularly challenging due to trade-offs between the objectives. Evolutionary algorithms, such as genetic algorithms (GAs), have been leveraged for many optimization problems; by design, however, they do not apply to complex control problems. This paper demonstrates the power of differentiability for solving MOO problems in particle accelerators using a deep differentiable reinforcement learning (DDRL) algorithm.
Saved in:
| Main Authors: | Kishansingh Rajput, Malachi Schram, Auralee Edelen, Jonathan Colen, Armen Kasparian, Ryan Roussel, Adam Carpenter, He Zhang, Jay Benesch |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IOP Publishing, 2025-01-01 |
| Series: | Machine Learning: Science and Technology |
| Subjects: | reinforcement learning; Bayesian optimization; genetic algorithm; multi objective; MORL; MOGA |
| Online Access: | https://doi.org/10.1088/2632-2153/adc221 |
| _version_ | 1849726010612776960 |
|---|---|
| author | Kishansingh Rajput; Malachi Schram; Auralee Edelen; Jonathan Colen; Armen Kasparian; Ryan Roussel; Adam Carpenter; He Zhang; Jay Benesch |
| author_facet | Kishansingh Rajput; Malachi Schram; Auralee Edelen; Jonathan Colen; Armen Kasparian; Ryan Roussel; Adam Carpenter; He Zhang; Jay Benesch |
| author_sort | Kishansingh Rajput |
| collection | DOAJ |
| description | Particle accelerator operation requires simultaneous optimization of multiple objectives. Multi-objective optimization (MOO) is particularly challenging due to trade-offs between the objectives. Evolutionary algorithms, such as genetic algorithms (GAs), have been leveraged for many optimization problems; by design, however, they do not apply to complex control problems. This paper demonstrates the power of differentiability for solving MOO problems in particle accelerators using a deep differentiable reinforcement learning (DDRL) algorithm. We compare the DDRL algorithm with model-free reinforcement learning (MFRL), GA, and Bayesian optimization (BO) for simultaneous optimization of heat load and trip rates at the Continuous Electron Beam Accelerator Facility (CEBAF). The underlying problem enforces strict constraints on both individual states and actions, as well as cumulative (global) constraints on the energy requirements of the beam. Using historical accelerator data, we develop a physics-based surrogate model that is differentiable and allows for back-propagation of gradients. The results are evaluated in the form of a Pareto front with two objectives. We show that DDRL outperforms MFRL, BO, and GA on high-dimensional problems. |
| format | Article |
| id | doaj-art-e80606d10b8948ffb74c8d249de2d5ef |
| institution | DOAJ |
| issn | 2632-2153 |
| language | English |
| publishDate | 2025-01-01 |
| publisher | IOP Publishing |
| record_format | Article |
| series | Machine Learning: Science and Technology |
| spelling | Record doaj-art-e80606d10b8948ffb74c8d249de2d5ef (harvested 2025-08-20T03:10:20Z, English). Machine Learning: Science and Technology (IOP Publishing, ISSN 2632-2153), vol. 6, no. 2, 025018, 2025-01-01; https://doi.org/10.1088/2632-2153/adc221. Title: Harnessing the power of gradient-based simulations for multi-objective optimization in particle accelerators. Authors: Kishansingh Rajput (ORCID 0000-0002-4430-9937), Thomas Jefferson National Accelerator Facility, Newport News, VA 23606, United States of America, and Department of Computer Science, University of Houston, Houston, TX 77204, United States of America; Malachi Schram (ORCID 0000-0002-3475-2871), Thomas Jefferson National Accelerator Facility, and Department of Computer Science, Old Dominion University, Norfolk, VA 23529, United States of America; Auralee Edelen, SLAC National Laboratory, Menlo Park, CA 94025, United States of America; Jonathan Colen (ORCID 0000-0003-4162-0276), Joint Institute on Advanced Computing for Environmental Studies, Old Dominion University, Norfolk, VA 23539, United States of America, and Hampton Roads Biomedical Research Consortium, Portsmouth, VA 23703, United States of America; Armen Kasparian, Thomas Jefferson National Accelerator Facility; Ryan Roussel, SLAC National Laboratory; Adam Carpenter, He Zhang, and Jay Benesch, Thomas Jefferson National Accelerator Facility. Keywords: reinforcement learning; Bayesian optimization; genetic algorithm; multi objective; MORL; MOGA |
| title | Harnessing the power of gradient-based simulations for multi-objective optimization in particle accelerators |
| topic | reinforcement learning; Bayesian optimization; genetic algorithm; multi objective; MORL; MOGA |
| url | https://doi.org/10.1088/2632-2153/adc221 |
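The abstract describes tracing a two-objective Pareto front (heat load vs. trip rate) by back-propagating gradients through a differentiable surrogate model. The sketch below is purely illustrative and is not the paper's code: it uses a hypothetical toy surrogate with two competing quadratic objectives, scalarizes them with a sweep of weights, minimizes each scalarization by gradient descent (with an analytic gradient standing in for automatic differentiation), and then applies a non-domination filter.

```python
# Illustrative sketch (not the paper's implementation): gradient-based
# multi-objective optimization through a differentiable surrogate.
# The surrogate, objectives, and all names here are hypothetical.

def surrogate(x):
    """Toy differentiable surrogate: two competing quadratic objectives."""
    heat_load = (x - 1.0) ** 2   # minimized at x = +1
    trip_rate = (x + 1.0) ** 2   # minimized at x = -1
    return heat_load, trip_rate

def grad_scalarized(x, w):
    """Analytic gradient of w*heat_load + (1 - w)*trip_rate w.r.t. x."""
    return w * 2.0 * (x - 1.0) + (1.0 - w) * 2.0 * (x + 1.0)

def optimize(w, lr=0.1, steps=200):
    """Gradient descent on the weighted scalarization of the objectives."""
    x = 0.0
    for _ in range(steps):
        x -= lr * grad_scalarized(x, w)
    return x

def pareto_front(points):
    """Keep points not dominated by any other (both objectives minimized)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# Sweep the scalarization weight from 0 to 1 to trace the trade-off curve.
candidates = [surrogate(optimize(w / 10.0)) for w in range(11)]
front = pareto_front(candidates)
```

Sweeping the weight moves the optimum between the two objectives' minima, and the non-domination filter keeps only the trade-off points; a Pareto front of this kind is how the competing heat-load and trip-rate objectives are compared. The actual study optimizes a high-dimensional, constrained accelerator problem with a learned surrogate rather than this one-dimensional toy.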