
CN119474620B - Beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning - Google Patents

Beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning

Info

Publication number
CN119474620B
CN119474620B (application CN202510053213.9A)
Authority
CN
China
Prior art keywords
state
experience
strategy
transition model
extended kalman
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202510053213.9A
Other languages
Chinese (zh)
Other versions
CN119474620A (en)
Inventor
戴圣然
王思宇
蒋建慧
张俊斌
方子君
吴爽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Gusu Laboratory of Materials
Original Assignee
Gusu Laboratory of Materials
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gusu Laboratory of Materials
Priority to CN202510053213.9A
Publication of CN119474620A
Application granted
Publication of CN119474620B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/11 Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/092 Reinforcement learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"


Abstract


The present application relates to the technical field of reinforcement learning, and in particular to a beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning. The method includes: randomly selecting initial states from the environment based on an initial strategy and a preset target state and sampling, collecting multiple pieces of trajectory data each consisting of consecutive experience quadruples; in the first round of sampling, training a probabilistic neural network on the collected trajectory data to obtain a state transition model; for each piece of trajectory data, performing extended Kalman filtering in combination with the state transition model, replacing the next-moment state in each experience quadruple with its filtered value, and saving the new quadruples into an experience replay pool; and using the DDPG algorithm to randomly sample experience quadruples from the replay pool and learn and update the strategy to obtain a new strategy, repeating this cycle until strategy learning is completed. The present application can alleviate the impact of systematic errors and improve the accuracy of state estimation, thereby making strategy learning more accurate.

Description

Beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning
Technical Field
The application relates to the technical field of reinforcement learning, in particular to a beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning.
Background
The beam line station is a key component of a synchrotron radiation light source facility and is responsible for guiding the synchrotron radiation generated by electrons moving at high speed in the storage ring to a specific experimental station. The synchrotron radiation light source is an extremely important scientific research tool: it provides light of high brightness and high resolution and is widely applied in many fields such as materials science, life science, chemistry, and physics. In a beam line station, the beam is conditioned by a series of optical elements for focusing, monochromatization, collimation, and so on, in order to meet the requirements of different experiments. Each beam line station is typically designed for a particular experimental technique or field of investigation, such as X-ray absorption spectroscopy, X-ray diffraction, or photoelectron spectroscopy. In practice, the parameters of the beam line station must be optimized to achieve high-precision regulation of its optical elements so that the beam characteristics meet the experimental requirements; how to optimize beam line station parameters to improve experimental precision and efficiency is therefore one of the key topics of current research.
At present, optimization methods such as reinforcement learning, Bayesian optimization, and particle swarm optimization are widely applied to beam line station parameter optimization. These methods tune the parameter combination of the beam line station through continuous trial and adjustment to achieve the best experimental effect. However, in actual operation, device errors cause deviations between the estimated state and the actual state of the device, making it more difficult for reinforcement learning to learn an effective strategy in sparse-reward scenarios. In particular, in regions close to physical boundaries, the amplification of state-estimation errors makes it difficult for reinforcement learning to obtain a high-accuracy strategy, and this bias is amplified step by step during strategy optimization. Specifically, reinforcement learning relies on accurate perception of the device state to evaluate the correspondence between actions and rewards; when the state estimate is inaccurate, the strategy may be optimized toward erroneous states, the deviation affects the update direction of the strategy, and the learned strategy cannot accurately reflect the real behavior of the device, which easily leads to inaccurate strategy learning.
Disclosure of Invention
The application provides a beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning, which combines Kalman filtering with reinforcement learning to alleviate the influence of systematic errors during device parameter tuning and to improve the accuracy of state estimation, thereby making strategy learning more accurate. The application provides the following technical scheme:
In a first aspect, the present application provides a method for optimizing parameters of a beam line station based on extended kalman filtering and reinforcement learning, the method comprising:
based on an initial strategy and a preset target state, randomly selecting a plurality of initial states from the environment and sampling, and collecting a plurality of pieces of trajectory data consisting of consecutive experience quadruples;
in the first round of sampling, training a probabilistic neural network by using the collected trajectory data to obtain a state transition model;
for each piece of trajectory data, carrying out extended Kalman filtering in combination with the state transition model, replacing the filtered next-moment state into the experience quadruple of each piece of trajectory data, and storing the new experience quadruple into an experience replay pool;
and randomly sampling experience quadruples from the experience replay pool by using the DDPG algorithm, learning and updating the current strategy to obtain a new strategy, and repeating this cycle until strategy learning is completed.
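For illustration only, the four steps above can be organized as the following high-level loop. This is a minimal sketch, not the patented implementation; the helper names (sample_trajectories, train_transition_model, ekf_filter_trajectory, ddpg_learn) are hypothetical placeholders for the steps described above.

```python
# Minimal sketch of the overall optimization loop; all helper functions are
# hypothetical placeholders for steps 1-4 of the first aspect.
def optimize_beamline(env, policy, num_rounds):
    replay_pool = []          # experience replay pool of quadruples (s, a, r, s_next)
    transition_model = None
    for round_idx in range(num_rounds):
        trajectories = sample_trajectories(env, policy)            # step 1: collect trajectories
        if round_idx == 0:                                         # step 2: first round only
            transition_model = train_transition_model(trajectories)
        for traj in trajectories:                                  # step 3: EKF-filter each trajectory
            replay_pool.extend(ekf_filter_trajectory(traj, transition_model))
        policy = ddpg_learn(policy, replay_pool)                   # step 4: DDPG strategy update
    return policy
```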
In a specific embodiment, training the probabilistic neural network using the collected trajectory data to obtain the state transition model includes:
After the first round of sampling is finished and a plurality of pieces of track data are collected, training a preset probabilistic neural network by using the collected plurality of pieces of track data to obtain a state transition model, wherein the state transition model is as follows:
f(s, a; θ) → (μ, σ²)
wherein s and a represent the current state and the current action, respectively, θ denotes the model parameters of the probabilistic neural network, μ is the mean vector, and σ² is the variance vector.
In a specific embodiment, carrying out extended Kalman filtering for each piece of trajectory data in combination with the state transition model, replacing the filtered next-moment state into the experience quadruple of each piece of trajectory data, and saving the new experience quadruple into the experience replay pool comprises:
predicting the state at the current moment through a state transition model and calculating an error covariance matrix of the state;
Introducing an observation value and a Kalman gain, and correcting a predicted state value and an error covariance matrix;
after the correction is completed, updating the error covariance matrix, and replacing the corrected predicted state value, as the state value at the next moment, into the original experience quadruple to form a new experience quadruple.
In a specific embodiment, the predicting the state at the current time by the state transition model and calculating the error covariance matrix of the state includes:
predicting the state at the current moment s_{t+1} using the state transition model, as follows:
ŝ⁻_{t+1} = μ(s_t, a_t; θ)
for the predicted value at the current moment, the prediction formula of the error covariance is as follows:
P⁻_{t+1} = F_{t+1} P_t F_{t+1}^T + Q_{t+1}
wherein F_{t+1} is the Jacobian matrix of the state transition model with respect to the state s at the current moment, P_t is the error covariance matrix of the update stage at the previous moment, P⁻_{t+1} is the covariance matrix of the prediction stage at the current moment, Q_{t+1} is the process noise covariance matrix, and T denotes the transpose operation.
In a specific embodiment, the predicting the state at the current time by the state transition model and calculating the error covariance matrix of the state further includes:
using the N consecutive experience quadruples (s_t, a_t, r_t, s_{t+1}) in the current trajectory, calculating the error w^i_{t+1} of each experience quadruple by using the state transition model, as follows:
ŝ^i_{t+1} = μ(s^i_t, a^i_t; θ)
w^i_{t+1} = s^i_{t+1} − ŝ^i_{t+1}
wherein i ∈ [1, N]; calculating the average error w̄_{t+1} of all experience quadruples, as follows:
w̄_{t+1} = (1/N) Σ_{i=1}^{N} w^i_{t+1}
Q_{t+1} is calculated using the following formula:
Q_{t+1} = (1/N) Σ_{i=1}^{N} (w^i_{t+1} − w̄_{t+1})(w^i_{t+1} − w̄_{t+1})^T
wherein T denotes the transpose operation, i denotes the i-th quadruple sample in the trajectory data, and t and t+1 denote the previous moment and the current moment in the experience quadruple, respectively.
In a specific embodiment, the introducing the observed value and the Kalman gain, and correcting the predicted state value and the error covariance matrix includes:
calculating the difference y_{t+1} between the observed value z_{t+1} and the predicted value ŝ⁻_{t+1}, as follows:
y_{t+1} = z_{t+1} − ŝ⁻_{t+1}
the Kalman gain K_{t+1} is calculated using the following formula:
K_{t+1} = P⁻_{t+1} (P⁻_{t+1} + R)^{-1}
wherein R is a diagonal matrix calculated and constructed by dynamic estimation using consecutive trajectory data in the experience replay pool: N consecutive experience samples are taken from the current trajectory, the observed states z_{t+1} are taken, it is assumed that they have n dimensional components, and the standard deviation σ of each component is calculated; the calculation formula of R is as follows:
R = diag(σ₁², σ₂², σ₃², …, σₙ²)
the correction formula of the predicted state value is as follows:
ŝ_{t+1} = ŝ⁻_{t+1} + K_{t+1} y_{t+1}
the correction formula of the error covariance matrix is as follows:
P_{t+1} = (I − K_{t+1}) P⁻_{t+1}
wherein P_{t+1} is the corrected error covariance matrix.
In a specific embodiment, the predicting the state at the current time by the state transition model and calculating the error covariance matrix of the state further includes:
the error covariance requires an initialization operation using trajectory data: for the N consecutive experience quadruples (s_t, a_t, r_t, s_{t+1}) in the current trajectory, assume that s_t has n dimensional components, s_t = (s_{t1}, s_{t2}, s_{t3}, …, s_{tn}), and calculate the standard deviation σ of each component over the N states s_t; the initial value calculation formula of the error covariance is as follows:
σ₁ = sqrt((1/N) Σ_{i=1}^{N} (s_{i1} − s̄₁)²)
P₀ = diag(σ₁², σ₂², σ₃², …, σₙ²)
wherein s̄₁ denotes the mean of the first component, and s_{i1} denotes the value of the first component of the state s_t in the i-th piece of trajectory data; the covariance matrix of each experience quadruple's observed and predicted values is calculated separately, and the average is taken as the initial value P₀ of the error covariance in the prediction stage.
In a second aspect, the application provides a beam line station parameter optimization system based on extended Kalman filtering and reinforcement learning, which adopts the following technical scheme:
A beam line station parameter optimization system based on extended Kalman filtering and reinforcement learning, comprising:
The trajectory data acquisition module is used for randomly selecting a plurality of initial states from the environment and sampling them based on an initial strategy and a preset target state, and collecting a plurality of pieces of trajectory data consisting of consecutive experience quadruples;
The state transition model generation module is used for training the probabilistic neural network by using the collected trajectory data in the first round of sampling to obtain a state transition model;
The extended Kalman filtering module is used for carrying out extended Kalman filtering on each piece of trajectory data in combination with the state transition model, replacing the filtered next-moment state into the experience quadruple of each piece of trajectory data, and storing the new experience quadruple into an experience replay pool;
and the strategy learning module is used for randomly sampling experience quadruples from the experience replay pool by using the DDPG algorithm, learning and updating the current strategy to obtain a new strategy, and repeating this cycle until strategy learning is completed.
In a third aspect, the application provides an electronic device, which comprises a processor and a memory, wherein a program is stored in the memory, and the program is loaded and executed by the processor to realize the beam line station parameter optimization method based on the extended Kalman filtering and reinforcement learning according to the first aspect.
In a fourth aspect, the present application provides a computer readable storage medium having a program stored therein, which when executed by a processor is configured to implement a beam line station parameter optimization method based on extended kalman filtering and reinforcement learning as described in the first aspect.
In summary, the beneficial effects of the present application at least include:
1) In the process of optimizing beam line station parameters, device errors cause deviations between the state estimate and the actual device state; these errors affect the learning effect of the reinforcement learning strategy, and in a sparse-reward environment in particular, error accumulation makes strategy optimization more difficult. The application predicts and updates the state by introducing extended Kalman filtering (EKF), effectively reducing the influence of device noise on state estimation. When predicting the state, the extended Kalman filter corrects the previous estimate according to the current observation information, making the state estimate more accurate. Through this process, errors are significantly mitigated, so that reinforcement learning can still learn a more accurate strategy under device noise and error interference, further improving the effectiveness and stability of reinforcement learning in practical applications.
2) In reinforcement learning, the experience replay pool stores a number of historical experience quadruples, typically for subsequent strategy updates. In conventional methods, however, inaccurate state estimation means the data in the replay pool may contain noise, limiting its contribution to strategy updates. By combining with extended Kalman filtering, the application accurately estimates and corrects the state in each piece of trajectory data, reducing the influence of noise. The revised state estimates enhance the validity of the data, making the data sampled from the experience replay pool more valuable for strategy optimization. Accurate state estimation accelerates strategy convergence during reinforcement learning, improves data utilization efficiency, shortens learning time, and improves overall optimization performance.
3) Reinforcement learning algorithms are often required to exhibit good adaptability and stability in dynamic, complex environments, yet unavoidable noise and uncertainty in the environment can make algorithm behavior unstable. Introducing extended Kalman filtering not only improves the accuracy of state estimation but also makes the algorithm more adaptable to device errors and external disturbances. In a complex physical environment, the algorithm reduces the influence of noise interference on the optimization process by continuously updating the predicted state and correcting its errors. The method maintains stable performance in environments with higher uncertainty and improves the robustness of the algorithm, so that it can operate efficiently in a wider range of application scenarios, ensuring the robustness and reliability of the optimization process.
A state transition model is established by training the probabilistic neural network, and the state is accurately estimated in combination with extended Kalman filtering, which reduces the influence of device errors on state estimation and improves the accuracy and stability of strategy learning. Finally, the current strategy is optimized using the DDPG algorithm, and a new strategy is obtained gradually through multiple iterations; the combination of Kalman filtering and reinforcement learning alleviates the influence of systematic errors during device parameter tuning and improves the accuracy of state estimation, making strategy learning more accurate.
The foregoing description is only an overview of the present application, and is intended to provide a better understanding of the present application, as it is embodied in the following description, with reference to the preferred embodiments of the present application and the accompanying drawings.
Drawings
Fig. 1 is a flow chart of a beam line station parameter optimization method based on extended kalman filtering and reinforcement learning in an embodiment of the application.
Fig. 2 is an overall flow diagram of a beam line station parameter optimization method based on extended kalman filtering and reinforcement learning in an embodiment of the application.
FIG. 3 is a block diagram of an extended Kalman filtering and reinforcement learning based beam line station parameter optimization system in accordance with an embodiment of the present application.
Fig. 4 is a block diagram of an electronic device based on extended kalman filtering and reinforcement learning for beam-line station parameter optimization in an embodiment of the application.
Detailed Description
The following describes in further detail the embodiments of the present application with reference to the drawings and examples. The following examples are illustrative of the application and are not intended to limit the scope of the application.
Optionally, the beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning provided by each embodiment is described as running on electronic equipment; the electronic equipment is a terminal or a server, and the terminal may be a computer, a tablet computer, or the like. This embodiment does not limit the type of electronic equipment.
Referring to fig. 1, a flow chart of a beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning according to an embodiment of the present application is shown, where the method at least includes the following steps:
step S101, based on an initial strategy and a preset target state, randomly selecting a plurality of initial states from the environment and sampling, and collecting a plurality of pieces of track data consisting of continuous experience quaternions.
Specifically, first, a number of initial states are randomly selected based on a preset target state. Sampling is then performed in the environment based on the initial strategy, and the corresponding trajectories are collected. Each trajectory consists of a plurality of sequentially ordered experience quadruples (s_t, a_t, r_t, s_{t+1}), where t and t+1 denote the previous moment and the current moment in the experience quadruple, respectively; s_t denotes the state at moment t, a_t denotes the action at moment t, r_t denotes the reward at moment t, and s_{t+1} denotes the state at moment t+1.
The environment is the environment where the beam line station system is located.
And step S102, training the probabilistic neural network by using the collected trajectory data in the first round of sampling to obtain a state transition model.
In implementation, after the first round of sampling is finished and a plurality of pieces of track data are collected, training a preset probabilistic neural network by using the collected plurality of pieces of track data to obtain a state transition model, wherein the state transition model is as follows:
f(s, a; θ) → (μ, σ²)
wherein θ denotes the model parameters of the probabilistic neural network, and s and a denote the current state and the current action, respectively. The state transition model has two outputs: the mean vector μ, which is also the expected state vector at the next moment, and the variance vector σ², which represents the randomness present. The input of the probabilistic neural network is the current state s and the action a. In practice, the goal of the probabilistic neural network is to learn the probability distribution relation between the current state s and action a and the next state s_{t+1}; the model requires the next state to obey a Gaussian distribution in each dimension and outputs the mean vector μ and the variance vector σ². In a Gaussian distribution, the mean vector μ represents the center of the distribution and is its most likely value, so after model training is completed, the mean vector output by the final state transition model is used as the predicted value of the next state of the system. The variance vector represents uncertainty and reflects the credibility of the prediction result; for example, when the variance is small, the model has high confidence in the predicted value.
It should be noted that the probabilistic neural network is an existing model designed in advance for the specific reinforcement learning task; its objective is to predict the next state and its uncertainty from the current state and action. The training process is completed using the trajectory data collected during the first round of sampling, including the state of the device, the actions taken, and the resulting states. During training, the trajectory data is divided into inputs and target outputs using the standard methodology for neural networks, and the network parameters are adjusted by optimizing a loss function so that the model can accurately predict the state transition relation. This process ensures that the probabilistic neural network can effectively represent the complex relationship between device states and actions, providing reliable state predictions for the subsequent steps. In practice, training the probabilistic neural network is costly, so the state transition model is not retrained in each round of strategy updating; meanwhile, the initial strategy is randomly generated and has strong exploration capability, fully traversing the states and actions. Although subsequent strategies may be more convergent, the state transition model obtained from the first round is sufficient to describe the entire optimization process without repeated updates.
In summary, step S102 successfully builds a state transition model of the system by training the probabilistic neural network. The mean vector as a predictor for the next state simplifies subsequent calculations while maintaining a description of state transition uncertainty. After the model parameters are fixed, the following steps can more efficiently utilize the model to carry out strategy optimization, so that the overall calculation cost is reduced.
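As a concrete illustration of step S102, the following is a minimal sketch of a probabilistic neural network with mean and variance heads trained by Gaussian negative log-likelihood. PyTorch is assumed, and the dimensions STATE_DIM and ACTION_DIM are hypothetical placeholders, not values from the patent.

```python
# Sketch of the state transition model f(s, a; θ) → (μ, σ²), assuming PyTorch.
# STATE_DIM and ACTION_DIM are hypothetical placeholders for the beamline dimensions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 4

class TransitionModel(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu_head = nn.Linear(hidden, STATE_DIM)       # mean vector μ
        self.logvar_head = nn.Linear(hidden, STATE_DIM)   # log σ² for numerical stability

    def forward(self, s, a):
        h = self.body(torch.cat([s, a], dim=-1))
        return self.mu_head(h), self.logvar_head(h)

def gaussian_nll(mu, logvar, s_next):
    # per-dimension Gaussian negative log-likelihood (constant term dropped)
    return 0.5 * (logvar + (s_next - mu) ** 2 / logvar.exp()).mean()
```

Training would minimize gaussian_nll over the (s_t, a_t) → s_{t+1} pairs extracted from the first-round trajectories; after training, only the mean head is used as the predictor, consistent with the text above.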
And step S103, for each piece of trajectory data, carrying out extended Kalman filtering in combination with the state transition model, replacing the filtered next-moment state into the experience quadruple of each piece of trajectory data, and storing the new experience quadruple into the experience replay pool.
In step S103, the state at the current moment is predicted by the state transition model and the error covariance matrix P of the state is calculated, where the error covariance matrix characterizes the uncertainty of the predicted state. An observation value and the Kalman gain are then introduced to correct the predicted state value and the error covariance matrix, yielding a more accurate predicted state value. After the correction is completed, the error covariance matrix is updated, and the corrected predicted state value, as the state value at the next moment, is replaced into the original experience quadruple to form a new experience quadruple, thereby making the state estimate of each piece of trajectory data more accurate and improving the efficiency and quality of subsequent training.
Specifically, the state transition model obtained by training in step S102 is used to predict the state at the current moment s_{t+1}, as follows:
ŝ⁻_{t+1} = μ(s_t, a_t; θ)
For the prediction of the current moment s_{t+1}, the state transition model outputs two values, but only the mean is used as the predicted value; the variance is not applied to it.
In practice, the error covariance requires an initialization operation. Specifically, using the trajectory data of the N consecutive experience quadruples (s_t, a_t, r_t, s_{t+1}) in the current trajectory, assume that s_t has n dimensional components, s_t = (s_{t1}, s_{t2}, s_{t3}, …, s_{tn}), and calculate the standard deviation σ of each component over the N states s_t; the initial value calculation formula of the error covariance is as follows:
σ₁ = sqrt((1/N) Σ_{i=1}^{N} (s_{i1} − s̄₁)²)
P₀ = diag(σ₁², σ₂², σ₃², …, σₙ²)
wherein s̄₁ denotes the mean of the first component, and s_{i1} denotes the value of the first component of the state s_t in the i-th piece of trajectory data. The covariance matrix of each experience quadruple's observed and predicted values is calculated separately, and the average is then taken as the initial value P₀ of the error covariance in the prediction stage.
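The per-component standard deviations and the resulting diagonal matrix reduce to a few lines of code; the sketch below (NumPy assumed) applies equally to P₀ here and to the observation noise matrix R introduced later, since the text constructs both the same way.

```python
# Sketch of the diagonal covariance initialization P0 = diag(σ1², ..., σn²),
# assuming NumPy; `states` stacks the N consecutive states s_t row-wise.
import numpy as np

def diag_cov(states: np.ndarray) -> np.ndarray:
    sigma = states.std(axis=0)      # per-component standard deviation over N samples (1/N form)
    return np.diag(sigma ** 2)      # diag(σ1², σ2², ..., σn²)

# hypothetical usage:
# P0 = diag_cov(np.stack([s_t for (s_t, a_t, r_t, s_next) in quadruples]))
```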
In practice, the prediction formula for the error covariance is as follows:
P⁻_{t+1} = F_{t+1} P_t F_{t+1}^T + Q_{t+1}
wherein F_{t+1} is the Jacobian matrix of the state transition model with respect to the state s at the current moment; it linearizes the nonlinear process model and propagates the state estimation error of the previous moment to the current moment, reflecting the influence of the previous state on the current state. P_t is the error covariance matrix of the update stage at the previous moment; P⁻_{t+1} is the covariance matrix of the prediction stage at the current moment, representing the uncertainty of the state after prediction; Q_{t+1} is the process noise covariance matrix, used to describe the additional uncertainty introduced in the prediction process; and T denotes the transpose operation.
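A sketch of this prediction step is shown below. It assumes the PyTorch transition model sketched under step S102 and uses automatic differentiation to obtain the Jacobian F_{t+1} of the mean output with respect to the state; this is one reasonable realization, not necessarily the patent's.

```python
# Sketch of the EKF prediction step: ŝ⁻ = μ(s, a; θ), P⁻ = F P Fᵀ + Q.
# Assumes the TransitionModel sketched above; inputs are 1-D (single sample).
import numpy as np
import torch

def ekf_predict(model, s_t, a_t, P_t, Q_t):
    s = torch.as_tensor(s_t, dtype=torch.float32)
    a = torch.as_tensor(a_t, dtype=torch.float32)
    mean_fn = lambda x: model(x.unsqueeze(0), a.unsqueeze(0))[0].squeeze(0)
    F = torch.autograd.functional.jacobian(mean_fn, s).numpy()   # F_{t+1} = ∂μ/∂s
    s_pred = mean_fn(s).detach().numpy()                         # ŝ⁻_{t+1}
    P_pred = F @ P_t @ F.T + Q_t                                 # P⁻_{t+1}
    return s_pred, P_pred
```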
Optionally, the application uses the experience replay pool and adopts an empirical statistical estimation method to calculate Q_{t+1}. Specifically, using the N consecutive experience quadruples (s_t, a_t, r_t, s_{t+1}) in the current trajectory, the error w^i_{t+1} of each experience quadruple is calculated by using the state transition model, as follows:
ŝ^i_{t+1} = μ(s^i_t, a^i_t; θ)
w^i_{t+1} = s^i_{t+1} − ŝ^i_{t+1}
wherein i ∈ [1, N]. The average error w̄_{t+1} of all experience quadruples is then calculated as follows:
w̄_{t+1} = (1/N) Σ_{i=1}^{N} w^i_{t+1}
Finally, Q_{t+1} is calculated using the following formula:
Q_{t+1} = (1/N) Σ_{i=1}^{N} (w^i_{t+1} − w̄_{t+1})(w^i_{t+1} − w̄_{t+1})^T
wherein T denotes the transpose operation, i denotes the i-th quadruple sample in the trajectory data, and t and t+1 denote the previous moment and the current moment in the experience quadruple, respectively.
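The process noise estimate above reduces to a sample covariance of the one-step prediction residuals; a minimal NumPy sketch:

```python
# Sketch of the process-noise estimate Q_{t+1} from N one-step prediction residuals,
# assuming NumPy; `errors` has shape (N, n), rows w_i = s_{t+1}^i − ŝ_{t+1}^i.
import numpy as np

def estimate_Q(errors: np.ndarray) -> np.ndarray:
    w_bar = errors.mean(axis=0)                   # average residual w̄_{t+1}
    centered = errors - w_bar
    return centered.T @ centered / len(errors)    # (1/N) Σ (w_i − w̄)(w_i − w̄)ᵀ
```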
In the implementation, after the state and covariance matrix are obtained through prediction, an observation value and the Kalman gain are introduced, and the predicted state value and error covariance are corrected to obtain a more accurate current state estimate and covariance matrix. Specifically, the difference y_{t+1} between the observed value z_{t+1} and the predicted value ŝ⁻_{t+1} is calculated first, as follows:
y_{t+1} = z_{t+1} − ŝ⁻_{t+1}
The Kalman gain K_{t+1} is then calculated using the following formula:
K_{t+1} = P⁻_{t+1} (P⁻_{t+1} + R)^{-1}
wherein R is a diagonal matrix calculated and constructed by dynamic estimation using consecutive trajectory data in the experience replay pool; its calculation is similar to that of the initial value P₀ of the error covariance. Using the N consecutive experience samples in the current trajectory, the observed states z_{t+1} are taken; assuming there are n dimensional components, the standard deviation σ of each component is calculated, and the calculation formula of R is as follows:
R = diag(σ₁², σ₂², σ₃², …, σₙ²)
In practice, the correction formula for the predicted state value is as follows:
ŝ_{t+1} = ŝ⁻_{t+1} + K_{t+1} y_{t+1}
the correction formula of the error covariance matrix is as follows:
P_{t+1} = (I − K_{t+1}) P⁻_{t+1}
After the correction is completed, for each piece of trajectory data the corrected predicted state value is replaced into the corresponding experience quadruple to form a new experience quadruple, which is put into the experience replay pool, and the corrected error covariance matrix is carried forward to the next filtering step.
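The correction step and the quadruple replacement can be sketched as follows (NumPy assumed; treating the next-moment state recorded in the quadruple as the observation z, per the identity observation model implied by the text):

```python
# Sketch of the EKF update: y = z − ŝ⁻, K = P⁻(P⁻ + R)⁻¹, ŝ = ŝ⁻ + K y, P = (I − K) P⁻.
# The filtered state then replaces s_{t+1} in the experience quadruple.
import numpy as np

def ekf_update(s_pred, P_pred, z, R):
    y = z - s_pred                                    # innovation y_{t+1}
    K = P_pred @ np.linalg.inv(P_pred + R)            # Kalman gain K_{t+1}
    s_new = s_pred + K @ y                            # corrected state ŝ_{t+1}
    P_new = (np.eye(len(s_pred)) - K) @ P_pred        # corrected covariance P_{t+1}
    return s_new, P_new

def filter_quadruple(quad, s_pred, P_pred, R):
    s_t, a_t, r_t, s_next = quad
    s_filt, P_new = ekf_update(s_pred, P_pred, np.asarray(s_next), R)
    return (s_t, a_t, r_t, s_filt), P_new             # new quadruple for the replay pool
```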
Through the extended Kalman filtering process in step S103, the state estimates of the trajectory data become more accurate and of higher quality, and the training efficiency and effect of reinforcement learning are both significantly improved. Meanwhile, the dynamic estimation and correction mechanism improves the robustness and adaptability of the system, enabling the algorithm to better cope with complex physical environments and the challenges of randomness.
Step S104, randomly sampling experience quadruples from the experience replay pool by using the DDPG algorithm, learning and updating the current strategy to obtain a new strategy, and cycling in this way until strategy learning is completed.
In implementation, the deep deterministic policy gradient (DDPG) algorithm is employed to optimize the current strategy. The DDPG algorithm combines the policy gradient method with deep reinforcement learning and enables efficient learning in continuous action spaces. Specifically, several experience quadruples are randomly sampled from the experience replay pool, the gradients of the policy network are calculated using the sampled data, and the policy network parameters are updated by backpropagation; the value network parameters are updated according to the target value. Through multiple iterations, the current strategy is gradually optimized to obtain a better strategy as the new strategy.
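One DDPG learning step from the replay pool, condensed into a sketch; PyTorch is assumed, and the actor/critic networks, their target copies, and the hyperparameters are illustrative assumptions rather than the patent's settings.

```python
# Condensed sketch of one DDPG learning step on a sampled batch (s, a, r, s_next),
# assuming PyTorch networks actor/critic with target copies actor_t/critic_t.
import torch

def ddpg_step(batch, actor, critic, actor_t, critic_t, opt_a, opt_c,
              gamma=0.99, tau=0.005):
    s, a, r, s_next = batch
    with torch.no_grad():                              # bootstrap target Q-value
        q_target = r + gamma * critic_t(s_next, actor_t(s_next))
    critic_loss = ((critic(s, a) - q_target) ** 2).mean()
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    actor_loss = -critic(s, actor(s)).mean()           # deterministic policy gradient
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    for target, online in ((actor_t, actor), (critic_t, critic)):
        for tp, p in zip(target.parameters(), online.parameters()):
            tp.data.mul_(1.0 - tau).add_(tau * p.data)   # soft target update
    return critic_loss.item(), actor_loss.item()
```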
In summary, with reference to fig. 2, the application provides a beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning, which aims to optimize beam line station parameters and to address the difficulty of learning an effective strategy in sparse-reward scenarios when device errors are present. A state transition model is established by training the probabilistic neural network, and the state is accurately estimated in combination with extended Kalman filtering, which reduces the influence of device errors on state estimation and improves the accuracy and stability of strategy learning. Finally, the current strategy is optimized using the DDPG algorithm, and a new strategy is obtained gradually through multiple iterations; the combination of Kalman filtering and reinforcement learning alleviates the influence of systematic errors during device parameter tuning and improves the accuracy of state estimation, making strategy learning more accurate.
FIG. 3 is a block diagram of an extended Kalman filtering and reinforcement learning based beam line station parameter optimization system according to one embodiment of the present application, the system at least includes the following modules:
The trajectory data acquisition module is used for randomly selecting a plurality of initial states from the environment and sampling them based on an initial strategy and a preset target state, and collecting a plurality of pieces of trajectory data consisting of consecutive experience quadruples;
The state transition model generation module is used for training the probabilistic neural network by using the collected trajectory data in the first round of sampling to obtain a state transition model;
The extended Kalman filtering module is used for carrying out extended Kalman filtering on each piece of trajectory data in combination with the state transition model, replacing the filtered next-moment state into the experience quadruple of each piece of trajectory data, and storing the new experience quadruple into the experience replay pool;
And the strategy learning module is used for randomly sampling experience quadruples from the experience replay pool by using the DDPG algorithm, learning and updating the current strategy to obtain a new strategy, and repeating this cycle until strategy learning is completed.
For relevant details reference is made to the method embodiments described above.
Fig. 4 is a block diagram of an electronic device provided in one embodiment of the application. The device comprises at least a processor 401 and a memory 402.
Processor 401 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 401 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor: the main processor is a processor for processing data in the wake-up state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 401 may integrate a GPU (Graphics Processing Unit) for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 401 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices or flash memory storage devices. In some embodiments, a non-transitory computer-readable storage medium in memory 402 is used to store at least one instruction, which is executed by processor 401 to implement the beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning provided by the method embodiments of the present application.
In some embodiments, the electronic device may also optionally include a peripheral interface and at least one peripheral. The processor 401, memory 402, and peripheral interfaces may be connected by buses or signal lines. The individual peripheral devices may be connected to the peripheral device interface via buses, signal lines or circuit boards. Illustratively, the peripheral devices include, but are not limited to, radio frequency circuitry, touch display screens, audio circuitry, and power supplies, among others.
Of course, the electronic device may also include fewer or more components, as the present embodiment is not limited in this regard.
Optionally, the present application further provides a computer readable storage medium, in which a program is stored, and the program is loaded and executed by a processor to implement the beam line station parameter optimization method based on the extended kalman filtering and reinforcement learning in the above method embodiment.
Optionally, the present application further provides a computer product, where the computer product includes a computer readable storage medium, where a program is stored, and the program is loaded and executed by a processor to implement the beam line station parameter optimization method based on extended kalman filtering and reinforcement learning in the above method embodiment.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The foregoing examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. An extended Kalman filtering and reinforcement learning based beam line station parameter optimization method, characterized by comprising the following steps of:
based on an initial strategy and a preset target state, randomly selecting a plurality of initial states from the environment where the beam line station system is located, sampling, and collecting a plurality of pieces of trajectory data consisting of consecutive experience quadruples;
in the first round of sampling, training a probabilistic neural network by using the collected trajectory data to obtain a state transition model;
for each piece of trajectory data, carrying out extended Kalman filtering in combination with the state transition model, replacing the filtered next-moment state into the experience quadruple of each piece of trajectory data, and storing the new experience quadruple into an experience replay pool;
and randomly sampling experience quadruples from the experience replay pool by using the DDPG algorithm, learning and updating the current strategy to obtain a new strategy, and repeating this cycle until strategy learning is completed.
2. The extended Kalman filtering and reinforcement learning based beam line station parameter optimization method according to claim 1, wherein training the probabilistic neural network using the collected trajectory data to obtain a state transition model comprises:
after the first round of sampling is finished and a plurality of pieces of trajectory data are collected, training a preset probabilistic neural network by using the collected plurality of pieces of trajectory data to obtain a state transition model, wherein the state transition model is as follows:
f(s, a; θ) → (μ, σ²);
wherein s and a represent the current state and the current action respectively, θ is a model parameter of the probabilistic neural network, μ is the mean vector, and σ² is the variance vector.
3. The method for optimizing parameters of a beam line station based on extended Kalman filtering and reinforcement learning according to claim 2, wherein carrying out extended Kalman filtering in combination with the state transition model for each piece of trajectory data, replacing the filtered next-moment state into the experience quadruple of each piece of trajectory data and saving the new experience quadruple into the experience replay pool comprises:
predicting the state at the current moment through the state transition model and calculating an error covariance matrix of the state;
introducing an observation value and a Kalman gain, and correcting the predicted state value and the error covariance matrix;
after the correction is completed, updating the error covariance matrix, and replacing the corrected predicted state value, as the state value at the next moment, into the original experience quadruple to form a new experience quadruple.
4. The extended Kalman filtering and reinforcement learning based beam line station parameter optimization method according to claim 3, wherein predicting the state at the current moment by the state transition model and calculating an error covariance matrix of the state comprises:
the state at the current moment s_{t+1} is predicted using the state transition model, as follows: ŝ⁻_{t+1} = μ(s_t, a_t; θ);
for the predicted value at the current moment, the prediction formula of the error covariance is as follows: P⁻_{t+1} = F_{t+1} P_t F_{t+1}^T + Q_{t+1};
wherein F_{t+1} is the Jacobian matrix of the state transition model with respect to the state s at the current moment, P_t is the error covariance matrix of the update stage at the previous moment, P⁻_{t+1} is the covariance matrix of the prediction stage at the current moment, Q_{t+1} is the process noise covariance matrix, and T represents the transpose operation.
5. The extended Kalman filtering and reinforcement learning based beam line station parameter optimization method according to claim 4, wherein predicting the state at the current moment by the state transition model and calculating an error covariance matrix of the state further comprises:
the error w^i_{t+1} of each experience quadruple is calculated using the N consecutive experience quadruples (s_t, a_t, r_t, s_{t+1}) in the current trajectory and the state transition model, as follows:
ŝ^i_{t+1} = μ(s^i_t, a^i_t; θ);
w^i_{t+1} = s^i_{t+1} − ŝ^i_{t+1};
wherein i ∈ [1, N]; the average error w̄_{t+1} of all experience quadruples is calculated as follows: w̄_{t+1} = (1/N) Σ_{i=1}^{N} w^i_{t+1};
Q_{t+1} is calculated using the following formula: Q_{t+1} = (1/N) Σ_{i=1}^{N} (w^i_{t+1} − w̄_{t+1})(w^i_{t+1} − w̄_{t+1})^T;
wherein T represents the transpose operation, i represents the i-th quadruple sample in the trajectory data, and t and t+1 represent the previous moment and the current moment in the experience quadruple, respectively.
6. The extended Kalman filtering and reinforcement learning based beam line station parameter optimization method according to claim 4, wherein the introducing the observed value and the Kalman gain, and correcting the predicted state value and the error covariance matrix comprises:
the difference y_{t+1} between the observed value z_{t+1} and the predicted value ŝ⁻_{t+1} is calculated as follows:
y_{t+1} = z_{t+1} − ŝ⁻_{t+1};
the Kalman gain K_{t+1} is calculated using the following formula:
K_{t+1} = P⁻_{t+1}(P⁻_{t+1} + R)^{-1};
wherein R is a diagonal matrix calculated and constructed by dynamic estimation using consecutive trajectory data in the experience replay pool; N consecutive experience samples are taken from the current trajectory, the observed states z_{t+1} are taken, it is assumed that there are n dimensional components, and the standard deviation σ of each component is calculated, as follows:
R = diag(σ₁², σ₂², σ₃², …, σₙ²);
the correction formula of the predicted state value is as follows: ŝ_{t+1} = ŝ⁻_{t+1} + K_{t+1} y_{t+1};
the correction formula of the error covariance matrix is as follows: P_{t+1} = (I − K_{t+1}) P⁻_{t+1};
wherein P_{t+1} is the corrected error covariance matrix.
7. The extended Kalman filtering and reinforcement learning based beam line station parameter optimization method according to claim 3, wherein predicting the state at the current moment by the state transition model and calculating an error covariance matrix of the state further comprises:
the error covariance requires an initialization operation using the trajectory data (s_t, a_t, r_t, s_{t+1}) of the N consecutive experience quadruples in the current trajectory; assuming that s_t has n dimensional components, s_t = (s_{t1}, s_{t2}, s_{t3}, …, s_{tn}), the standard deviation σ of each component of the N states s_t is calculated, and the initial value of the error covariance is calculated as follows:
σ₁ = sqrt((1/N) Σ_{i=1}^{N} (s_{i1} − s̄₁)²);
P₀ = diag(σ₁², σ₂², σ₃², …, σₙ²);
wherein s̄₁ represents the mean of the first component, and s_{i1} represents the value of the first component of the state s_t in the i-th piece of trajectory data; the covariance matrix of each experience quadruple's observed and predicted values is calculated, and the mean is taken as the initial value P₀ of the error covariance in the prediction stage.
8. An extended Kalman filtering and reinforcement learning based beam line station parameter optimization system, characterized by comprising:
The trajectory data acquisition module is used for randomly selecting a plurality of initial states from the environment where the beam line station system is located based on an initial strategy and a preset target state, sampling them, and collecting a plurality of pieces of trajectory data consisting of consecutive experience quadruples;
The state transition model generation module is used for training the probabilistic neural network by using the collected trajectory data in the first round of sampling to obtain a state transition model;
The extended Kalman filtering module is used for carrying out extended Kalman filtering on each piece of trajectory data in combination with the state transition model, replacing the filtered next-moment state into the experience quadruple of each piece of trajectory data, and storing the new experience quadruple into an experience replay pool;
and the strategy learning module is used for randomly sampling experience quadruples from the experience replay pool by using the DDPG algorithm, learning and updating the current strategy to obtain a new strategy, and repeating this cycle until strategy learning is completed.
9. An electronic device, characterized in that the device comprises a processor and a memory, wherein the memory stores a program, and the program is loaded and executed by the processor to implement the beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a program is stored in the storage medium, and when the program is executed by a processor, it is used to implement the beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning as set forth in any one of claims 1 to 7.
CN202510053213.9A 2025-01-14 2025-01-14 Beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning Active CN119474620B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510053213.9A CN119474620B (en) 2025-01-14 2025-01-14 Beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202510053213.9A CN119474620B (en) 2025-01-14 2025-01-14 Beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning

Publications (2)

Publication Number Publication Date
CN119474620A (en) 2025-02-18
CN119474620B (en) 2025-05-16

Family

ID=94588848

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510053213.9A Active CN119474620B (en) 2025-01-14 2025-01-14 Beam line station parameter optimization method based on extended Kalman filtering and reinforcement learning

Country Status (1)

Country Link
CN (1) CN119474620B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223141A (en) * 2019-12-31 2020-06-02 东华大学 System and method for optimizing the efficiency of automated assembly line operations based on reinforcement learning
CN114879738A (en) * 2022-05-30 2022-08-09 太原理工大学 A model-enhanced UAV flight trajectory reinforcement learning optimization method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11395149B2 (en) * 2020-05-01 2022-07-19 Digital Global Systems, Inc. System, method, and apparatus for providing dynamic, prioritized spectrum management and utilization
US11787419B1 (en) * 2021-10-22 2023-10-17 Zoox, Inc. Robust numerically stable Kalman filter for autonomous vehicles
CN114596553B (en) * 2022-03-11 2023-01-24 阿波罗智能技术(北京)有限公司 Model training method, trajectory prediction method and device and automatic driving vehicle
CN116743112A (en) * 2023-04-14 2023-09-12 同济大学 An extended Kalman filter target tracking method based on reinforcement learning
CN118004478A (en) * 2024-01-29 2024-05-10 贵州电网有限责任公司 Unmanned aerial vehicle-based laser charging energy transmission monitoring method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111223141A (en) * 2019-12-31 2020-06-02 东华大学 System and method for optimizing the efficiency of automated assembly line operations based on reinforcement learning
CN114879738A (en) * 2022-05-30 2022-08-09 太原理工大学 A model-enhanced UAV flight trajectory reinforcement learning optimization method

Also Published As

Publication number Publication date
CN119474620A (en) 2025-02-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant