
CN112905213B - Method and system for realizing ECU (electronic control Unit) refreshing parameter optimization based on convolutional neural network - Google Patents

Method and system for realizing ECU (electronic control Unit) refreshing parameter optimization based on convolutional neural network

Info

Publication number
CN112905213B
CN112905213B
Authority
CN
China
Prior art keywords
neural network
convolutional neural
brushing
occupancy rate
refreshing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110327752.9A
Other languages
Chinese (zh)
Other versions
CN112905213A (en)
Inventor
刘杰
朱磊磊
朱雪岩
王殿辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China National Heavy Duty Truck Group Jinan Power Co Ltd
Original Assignee
China National Heavy Duty Truck Group Jinan Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China National Heavy Duty Truck Group Jinan Power Co Ltd filed Critical China National Heavy Duty Truck Group Jinan Power Co Ltd
Priority to CN202110327752.9A priority Critical patent/CN112905213B/en
Publication of CN112905213A publication Critical patent/CN112905213A/en
Application granted granted Critical
Publication of CN112905213B publication Critical patent/CN112905213B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/65Updates
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/61Installation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Testing And Monitoring For Control Systems (AREA)

Abstract


The present invention provides a method and system for optimizing ECU flashing parameters based on a convolutional neural network. Both perform the following steps: D1, collect hardware configuration parameters and flashing timing parameters, where the hardware configuration parameters include the pre-flash CPU occupancy rate z; D2, use the trained first convolutional neural network model to obtain a predicted CPU occupancy rate y; D3, compute y+z; D4, judge whether y+z is less than m; if so, execute step D5, otherwise execute step D7; D5, decrement the current value of the flashing timing parameters and use the trained second convolutional neural network to predict the flashing duration t; D6, judge whether the flashing duration t is less than n; if so, execute step D8, otherwise execute step D7; D7, increment the current value of the flashing timing parameters, then return to step D1; D8, control the ECU flashing program to flash according to the latest values of the flashing timing parameters. The invention reduces the impact of the hardware and software environment of the flashing terminal device on the flashing rate.

Description

Method and system for realizing ECU (electronic control Unit) refreshing parameter optimization based on convolutional neural network
Technical Field
The invention relates to the technical field of vehicle control, and in particular to a method and a system for optimizing ECU (electronic control unit) flashing parameters based on a convolutional neural network.
Background
The ECU is an important component of an automobile engine; it regulates the engine's power output based on internally written data.
ECU data flashing arises in three situations: first, data packaging when the engine or vehicle comes off the production line; second, ECU updating and maintenance at automobile service stations; third, engine retrofitting. ECU updating is used to optimize the parameters of the automobile engine and the ECU. Its working principle is to refine fuel supply and ignition on the basis of the original data through an optimization program, tuning the parameters to increase output power, increase torque, or reduce fuel consumption: raising horsepower and torque increases fuel consumption, while lowering them reduces it. Broadly speaking, flashing the ECU means changing the program in the on-board computer, for example rewriting the MAP in the ECU, overwriting the original MAP, or dynamically intervening in it, the ultimate goal being to re-match ignition and fuel-injection timing and to control the opening and closing timing and speed of the intake and exhaust valves.
In recent years, with the development of electronic fuel-injection technology, flashing ECU data has become a necessary link in the automobile industry. ECU flashing is mainly applied in the engine and production stages, and at present a large number of official and independent service stations provide ECU flashing services, so market demand for ECU flashing tools is large. However, the current suppliers of ECU flashing tools, including Bosch and Vector of Germany and AVL of Austria, offer products that are relatively similar and difficult to match to the actual demands of the market.
At present, ECU flashing tools provide fault diagnosis, fault-code clearing, data calibration, and other functions in addition to ECU data flashing. A flashing tool typically connects a computer to the ECU controller through ECU diagnostic flashing equipment, with the flashing function controlled by software on the computer. As the core function of the tool, the efficiency of ECU data flashing is the most important concern in the tool's software design. When the ECU program is flashed through computer-side software, besides the influence of the ECU itself and the CAN bus, different computer configurations also affect the flashing speed and result. Experiments show that this flashing approach has the following disadvantages:
when the computer's CPU occupancy rate is too high, the flashing process stalls, easily causing a timeout failure; and when the computer's hardware configuration is low, flashing takes comparatively long under the same flashing timing.
Therefore, the present invention provides a convolutional neural network (CNN) based ECU program flashing optimization method and system to solve the above problems.
Disclosure of Invention
To address the defects of the prior art, the present invention provides a convolutional neural network (CNN) based ECU program flashing optimization method and system, which reduce the influence of the hardware and software environment of the flashing terminal device on the flashing speed and improve the flashing speed and success rate.
In a first aspect, the present invention provides a method for optimizing ECU flashing parameters based on a convolutional neural network, applied to a flashing terminal device and comprising the steps of:
D1, collecting hardware configuration parameters of the flashing terminal device, and collecting the current flashing timing parameters of the ECU flashing program to be executed on the flashing terminal device; the hardware configuration parameters include the CPU occupancy rate of the flashing terminal device, recorded as the pre-flash CPU occupancy rate z;
D2, taking the most recently collected hardware configuration parameters and flashing timing parameters as input, predicting the CPU occupancy rate of the flashing terminal device with the trained first convolutional neural network model to obtain a predicted CPU occupancy rate y;
D3, calculating the sum y+z of the pre-flash CPU occupancy rate z and the predicted CPU occupancy rate y;
D4, judging whether y+z is smaller than m; if so, continuing with step D5, otherwise executing step D7; where m is a preset total CPU occupancy threshold;
D5, decrementing the current value of the flashing timing parameters in a preset manner, taking the decremented flashing timing parameters and the hardware configuration parameters most recently collected in step D1 as input, predicting and outputting a predicted flashing duration t with the trained second convolutional neural network, and then executing step D6;
D6, judging whether the flashing duration t is smaller than n; if so, proceeding to step D8, otherwise continuing with step D7; where n is a preset flashing-duration threshold;
D7, incrementing the current value of the flashing timing parameters in a preset manner, and then returning to step D1;
and D8, controlling the ECU flashing program to flash according to the latest values of the flashing timing parameters.
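The decision loop in steps D1–D8 can be sketched in Python. This is a minimal illustration, not the patent's implementation: the two trained CNNs are passed in as stand-in prediction functions, and the thresholds m and n, the step size, the iteration cap, and all names are illustrative assumptions.

```python
def optimize_flash_timing(timing, get_hardware, predict_cpu, predict_duration,
                          m=80.0, n=300.0, step=1, max_iters=50):
    """Tune ECU flashing timing parameters before flashing (steps D1-D8).

    timing           -- dict of current timing-parameter values (e.g. CFMS/FFMS/RTMS)
    get_hardware     -- returns a hardware snapshot incl. pre-flash CPU occupancy z
    predict_cpu      -- stand-in for the trained first CNN (predicts y)
    predict_duration -- stand-in for the trained second CNN (predicts t)
    m, n             -- CPU-occupancy and flash-duration thresholds (illustrative)
    """
    for _ in range(max_iters):
        hw = get_hardware()                                    # D1: collect parameters
        z = hw["cpu_before_flash"]
        y = predict_cpu(hw, timing)                            # D2: predicted occupancy
        if y + z < m:                                          # D3/D4: enough headroom?
            timing = {k: v - step for k, v in timing.items()}  # D5: decrement, then
            t = predict_duration(hw, timing)                   #     predict duration
            if t < n:                                          # D6: fast enough?
                return timing                                  # D8: flash with these
        timing = {k: v + step for k, v in timing.items()}      # D7: increment, retry
    raise RuntimeError("no acceptable flashing timing parameters found")
```

In this reading, a failed duration check (D6) falls through to the same increment step (D7) as a failed CPU check, after which the loop re-collects the hardware state, matching the D7→D1 edge in the flow above.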
Further, the construction method of the first convolutional neural network model and the second convolutional neural network model comprises the following steps:
q1: acquiring a sample data set;
q2: establishing a first convolutional neural network and a second convolutional neural network;
the first convolutional neural network and the second convolutional neural network each comprise five convolutional layers (conv), three pooling layers (pooling), two normalization layers (LRN), and two fully connected layers (fc), and the two networks have the same network structure (shown in FIG. 2);
in this network structure, each ReLU is an activation function and MSE is the loss function, where: the loss function of the first convolutional neural network is MSE = (1/n1) Σ (y′ − y)², in which y′ is the actually collected CPU occupancy rate and y is the CPU occupancy rate predicted by the first convolutional neural network; the loss function of the second convolutional neural network is MSE = (1/n2) Σ (t′ − t)², in which t′ is the actual flashing duration and t is the flashing duration predicted by the second convolutional neural network; n1 is the number of samples used to train the first convolutional neural network, and n2 is the number of samples used to train the second convolutional neural network;
Q3: training the first convolutional neural network and the second convolutional neural network with the constructed sample data set to obtain a trained first convolutional neural network model and a trained second convolutional neural network model.
Further, step Q3 is implemented as follows:
dividing the sample data set into a training set and a test set at a ratio of 4:1, and dividing the training set into k parts according to the k-fold cross-validation method, with k−1 parts used for training and the remaining part for validation;
training the established first convolutional neural network and second convolutional neural network with their respective training sets;
validating the established first convolutional neural network and second convolutional neural network with the test set;
in each training round, computing the difference between the predicted result and the actual result through the MSE and comparing it against a predetermined minimum validation error, adjusting the parameters and retraining the model over repeated training cycles until the computed MSE is smaller than or equal to the predetermined minimum validation error, thereby obtaining the corresponding trained convolutional neural network model.
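The data-handling side of step Q3 can be sketched in plain Python. This is illustrative only: the 4:1 split is assumed to be random, "validation" is taken to mean a held-out fold, the function names and seed are invented for the sketch, and the actual network training is omitted.

```python
import random


def make_splits(samples, test_ratio=0.2, k=5, seed=0):
    """Split samples 4:1 into train/test, then partition the training set
    into k folds: each (train, validation) pair uses k-1 folds to train
    and the remaining fold to validate, as in step Q3."""
    rng = random.Random(seed)
    data = list(samples)
    rng.shuffle(data)
    n_test = int(len(data) * test_ratio)
    test, train = data[:n_test], data[n_test:]
    folds = [train[i::k] for i in range(k)]
    splits = [([s for j, f in enumerate(folds) if j != i for s in f], folds[i])
              for i in range(k)]
    return splits, test


def mse(predicted, actual):
    """Mean squared error, the loss used for both networks."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
```

Training would then loop over the k (train, validation) pairs, retraining until `mse` on the validation data drops to the predetermined minimum validation error.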
Further, step Q1 is implemented as follows:
collecting samples;
preprocessing each collected sample to obtain a corresponding preprocessed sample;
gathering all the preprocessed samples to form the sample data set;
each collected sample is preprocessed as follows:
calculating the average value (average) of all data in the sample;
calculating the standard deviation of each datum in the sample;
preprocessing each datum x in the sample with the formula H = |x − average| / σ, using the average value and the calculated standard deviation, to obtain a preprocessed sample; here H is the preprocessed datum corresponding to x, and σ is the standard deviation corresponding to x.
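One plausible reading of the preprocessing formula H = |x − average| / σ is an absolute z-score over the sample. The sketch below makes the interpretive assumption that σ is the standard deviation of the whole sample, since the text's "standard deviation corresponding to the data x" is ambiguous; the function name is invented for illustration.

```python
import statistics


def preprocess_sample(sample):
    """Map each datum x to H = |x - average| / sigma (step Q1 preprocessing).

    sigma is taken here as the population standard deviation of the whole
    sample -- an interpretive assumption, since the text speaks of a
    standard deviation "corresponding to" each datum.
    """
    avg = statistics.fmean(sample)
    sigma = statistics.pstdev(sample)
    return [abs(x - avg) / sigma for x in sample]
```

Under this reading, a datum equal to the sample mean maps to 0, and values farther from the mean map to larger H.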
Further, the flashing timing parameters include the frame-to-frame flashing interval CFMS, the packet-to-packet flashing interval FFMS, and the timeout setting parameter RTMS of the ECU flashing program;
the hardware configuration parameters further include the CPU processing frequency, memory size, currently remaining memory size, flash-time CPU occupancy rate, and flash duration of the flashing terminal device;
the flash-time CPU occupancy rate is a preset CPU-occupancy difference, used to represent the difference between the highest CPU occupancy during flashing and the latest value of the pre-flash CPU occupancy;
the flash duration is a duration preset from experience, used to represent the time the ECU flashing program takes to perform the flash.
In a second aspect, the present invention provides a system for optimizing ECU flashing parameters based on a convolutional neural network, applied to a flashing terminal device, the system comprising:
a parameter collection unit, configured to collect hardware configuration parameters of the flashing terminal device and the current flashing timing parameters of the ECU flashing program to be executed on the flashing terminal device; the hardware configuration parameters include the CPU occupancy rate of the flashing terminal device, recorded as the pre-flash CPU occupancy rate z;
a CPU occupancy prediction unit, configured to take the hardware configuration parameters and the flashing timing parameters collected by the parameter collection unit as input and predict the CPU occupancy rate of the flashing terminal device with the trained first convolutional neural network model, obtaining a predicted CPU occupancy rate y;
a calculation unit, configured to calculate the sum y+z of the pre-flash CPU occupancy rate z and the predicted CPU occupancy rate y;
a first judgment unit, configured to judge whether the sum y+z is smaller than m, where m is a preset total CPU occupancy threshold;
a flashing-duration prediction unit, configured to, when the judgment result of the first judgment unit is yes, decrement the current value of the flashing timing parameters in a preset manner, take the decremented flashing timing parameters and the hardware configuration parameters most recently collected by the parameter collection unit as input, and predict and output a predicted flashing duration t with the trained second convolutional neural network;
a second judgment unit, configured to judge whether the flashing duration t is smaller than n, where n is a preset flashing-duration threshold;
a first execution unit, configured to, when the judgment result of the first judgment unit is no or the judgment result of the second judgment unit is no, increment the current value of the flashing timing parameters in a preset manner and then invoke the parameter collection unit again;
and a second execution unit, configured to control the ECU flashing program to flash according to the latest values of the flashing timing parameters when the judgment result of the second judgment unit is yes.
Further, the construction method of the first convolutional neural network model and the second convolutional neural network model comprises the following steps:
q1: acquiring a sample data set;
Q2: establishing a first convolutional neural network and a second convolutional neural network;
the first convolutional neural network and the second convolutional neural network each comprise five convolutional layers (conv), three pooling layers (pooling), two normalization layers (LRN), and two fully connected layers (fc), and the two networks have the same network structure (shown in FIG. 2);
in this network structure, each ReLU is an activation function and MSE is the loss function, where: the loss function of the first convolutional neural network is MSE = (1/n1) Σ (y′ − y)², in which y′ is the actually collected CPU occupancy rate and y is the CPU occupancy rate predicted by the first convolutional neural network; the loss function of the second convolutional neural network is MSE = (1/n2) Σ (t′ − t)², in which t′ is the actual flashing duration and t is the flashing duration predicted by the second convolutional neural network; n1 is the number of samples used to train the first convolutional neural network, and n2 is the number of samples used to train the second convolutional neural network;
Q3: training the first convolutional neural network and the second convolutional neural network with the constructed sample data set to obtain a trained first convolutional neural network model and a trained second convolutional neural network model.
Further, step Q3 is implemented as follows:
dividing the sample data set into a training set and a test set at a ratio of 4:1, and dividing the training set into k parts according to the k-fold cross-validation method, with k−1 parts used for training and the remaining part for validation;
training the established first convolutional neural network and second convolutional neural network with their respective training sets;
validating the established first convolutional neural network and second convolutional neural network with the test set;
in each training round, computing the difference between the predicted result and the actual result through the MSE and comparing it against a predetermined minimum validation error, adjusting the parameters and retraining the model over repeated training cycles until the computed MSE is smaller than or equal to the predetermined minimum validation error, thereby obtaining the corresponding trained convolutional neural network model.
Further, step Q1 is implemented as follows:
collecting samples;
preprocessing each collected sample to obtain a corresponding preprocessed sample;
gathering all the preprocessed samples to form the sample data set;
each collected sample is preprocessed as follows:
calculating the average value (average) of all data in the sample;
calculating the standard deviation of each datum in the sample;
preprocessing each datum x in the sample with the formula H = |x − average| / σ, using the average value and the calculated standard deviation, to obtain a preprocessed sample; here H is the preprocessed datum corresponding to x, and σ is the standard deviation corresponding to x.
Further, the flashing timing parameters include the frame-to-frame flashing interval CFMS, the packet-to-packet flashing interval FFMS, and the timeout setting parameter RTMS of the ECU flashing program;
the hardware configuration parameters further include the CPU processing frequency, memory size, currently remaining memory size, flash-time CPU occupancy rate, and flash duration of the flashing terminal device;
the flash-time CPU occupancy rate is a preset CPU-occupancy difference, used to represent the difference between the highest CPU occupancy during flashing and the latest value of the pre-flash CPU occupancy;
the flash duration is a duration preset from experience, used to represent the time the ECU flashing program takes to perform the flash.
The beneficial effects of the invention are as follows:
The method and system for optimizing ECU flashing parameters based on a convolutional neural network predict the CPU occupancy rate and the flashing duration from the various environment parameters of the computer (i.e. the flashing terminal device) before flashing, and adjust the flashing timing parameters in time when the predicted CPU occupancy is too high or the predicted flashing duration is too long. This prevents, to a certain extent, excessively long flashing as well as flashing failures caused by program stalls and timeouts due to excessive CPU occupancy. The invention can therefore largely avoid the influence of the computer's software and hardware environment on the program flashing speed.
In addition, the invention has a reliable design principle and a simple structure, and has very broad application prospects.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; those skilled in the art can obviously obtain other drawings from these drawings without inventive effort.
FIG. 1 is a schematic flow chart of a method of one embodiment of the invention.
Fig. 2 is a schematic diagram of the network structure of the first convolutional neural network and the second convolutional neural network in the present invention.
FIG. 3 is a schematic block diagram of a system of one embodiment of the present invention.
Detailed Description
To make the technical solution of the present invention better understood by those skilled in the art, it is described clearly and completely below with reference to the accompanying drawings of the embodiments. Clearly, the described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by those skilled in the art from these embodiments without inventive effort fall within the scope of the present invention.
FIG. 1 is a schematic flow chart of a method of one embodiment of the invention. The method is applied to a flashing terminal device and predicts the flashing duration and the CPU occupancy rate with convolutional neural networks, adjusting the flashing timing parameters when the predicted CPU occupancy is too high or the predicted flashing duration is too long. The flashing terminal device may be a computer.
As shown in fig. 1, the method 100 includes:
Step 110: collect hardware configuration parameters of the flashing terminal device, and collect the current flashing timing parameters of the ECU flashing program to be executed on the flashing terminal device.
The hardware configuration parameters include the CPU occupancy rate of the flashing terminal device, recorded as the pre-flash CPU occupancy rate z.
Step 120: taking the most recently collected hardware configuration parameters and flashing timing parameters as input, predict the CPU occupancy rate of the flashing terminal device with the trained first convolutional neural network model to obtain a predicted CPU occupancy rate y.
Step 130: calculate the sum y+z of the pre-flash CPU occupancy rate z and the predicted CPU occupancy rate y.
Step 140: judge whether y+z is smaller than m; if so, continue with step 150; if not, go to step 170.
Here m is a preset total CPU occupancy threshold.
Step 150: decrement the current value of the flashing timing parameters in a preset manner, take the decremented flashing timing parameters and the hardware configuration parameters most recently collected in step 110 as input, and predict and output a predicted flashing duration t with the trained second convolutional neural network.
Step 160 is then performed.
Step 160: judge whether the flashing duration t is smaller than n; if so, go to step 180; otherwise continue with step 170.
Here n is a preset flashing-duration threshold.
Step 170: increment the current value of the flashing timing parameters in a preset manner, then return to step 110.
Step 180: control the ECU flashing program to flash according to the latest values of the flashing timing parameters.
Specifically, the flashing process is entered, and the ECU flashing program is controlled to flash according to the latest values of the flashing timing parameters.
Optionally, as an exemplary embodiment of the present invention, in step 110 the flashing timing parameters include the frame-to-frame flashing interval CFMS, the packet-to-packet flashing interval FFMS, and the timeout setting parameter RTMS of the ECU flashing program; the hardware configuration parameters further include the CPU processing frequency, memory size, currently remaining memory size, flash-time CPU occupancy rate, and flash duration of the flashing terminal device.
The flash-time CPU occupancy rate is a preset CPU-occupancy difference, used to represent the difference between the highest CPU occupancy during flashing and the latest value of the pre-flash CPU occupancy.
The flash duration is a duration preset from experience, used to represent the time the ECU flashing program takes to perform the flash.
Optionally, as an exemplary embodiment of the present invention, the first convolutional neural network model and the second convolutional neural network model are constructed as follows:
Step one: acquire a sample data set.
Specifically, step one is implemented as follows:
Collect samples.
In practice, the sample data may be obtained by repeatedly executing ECU flashing programs with different values of the flashing timing parameters on the flashing terminal device. Specifically, for each execution of the ECU flashing program, the hardware configuration parameters of the flashing terminal device and the flashing timing parameters (i.e. their values) are collected once as one sample. The parameter types of the hardware configuration parameters and of the flashing timing parameters are those of step 110, specifically: for each sample, the flashing timing parameters are the frame-to-frame flashing interval CFMS, the packet-to-packet flashing interval FFMS, and the timeout setting parameter RTMS of the executed ECU flashing program, and the hardware configuration parameters are the CPU processing frequency, memory size, currently remaining memory size, pre-flash CPU occupancy rate, flash-time CPU occupancy rate, and flash duration of the flashing terminal device. They differ from step 110 as follows:
the pre-flash CPU occupancy rate in a sample is the CPU occupancy rate of the flashing terminal device immediately before it executes the ECU flashing program corresponding to that sample, and is collected only once;
the CPU processing frequency, memory bank size and current remaining memory size in a sample are each collected once, before the flashing terminal device executes the corresponding ECU flashing program;
the CPU occupancy rate corresponding to the flashing in a sample is calculated by subtracting the sample's pre-flash CPU occupancy rate from the highest CPU occupancy rate reached by the flashing terminal device while actually executing the corresponding ECU flashing program (for example, given a sample A whose corresponding program is ECU flashing program B, this value is the highest CPU occupancy rate observed while actually executing program B minus sample A's pre-flash CPU occupancy rate);
and the flashing duration corresponding to the flashing in a sample is the time the flashing terminal device actually took to complete the flashing with the corresponding ECU flashing program.
The flashing timing parameters in a sample (the frame-to-frame flashing interval CFMS, the packet-to-packet flashing interval FFMS and the timeout setting parameter RTMS of the corresponding ECU flashing program) are collected once, before the flashing terminal device executes that program;
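As a concrete illustration, one training sample described above can be assembled as a simple record. The field names and example values below are hypothetical, not prescribed by this embodiment; only the peak-minus-before occupancy calculation follows the text:

```python
# Hypothetical sketch of assembling one training sample: flashing timing
# parameters of the executed ECU flashing program plus the hardware
# configuration of the flashing terminal device.

def build_sample(cfms_ms, ffms_ms, rtms_ms,
                 cpu_freq_mhz, ram_mb, free_ram_mb,
                 cpu_before_pct, cpu_peak_pct, flash_seconds):
    return {
        # timing parameters, collected once before execution
        "CFMS": cfms_ms, "FFMS": ffms_ms, "RTMS": rtms_ms,
        # hardware configuration, collected once before execution
        "cpu_freq": cpu_freq_mhz, "ram": ram_mb, "free_ram": free_ram_mb,
        "cpu_before": cpu_before_pct,
        # measured during execution: occupancy attributable to flashing
        # = highest occupancy during flashing - occupancy before flashing
        "cpu_delta": cpu_peak_pct - cpu_before_pct,
        # measured after execution: actual flashing duration
        "flash_time": flash_seconds,
    }

sample = build_sample(10, 20, 500, 1800, 4096, 2048, 35.0, 62.5, 48.3)
```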
And (2) preprocess each collected sample to obtain a corresponding preprocessed sample.
Specifically, each collected sample is preprocessed as follows:
calculate the average value (average) of all data in the sample;
calculate the standard deviation of the data in the sample;
using the average and the calculated standard deviation, preprocess each piece of data x in the sample with the formula H = |x - average| / σ to obtain the preprocessed sample; here H is the preprocessed value corresponding to x, and σ is the standard deviation corresponding to x.
And then executing the step (3).
And (3) collecting all obtained preprocessed samples to form the sample data set.
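The preprocessing formula above can be sketched as follows. Treating σ as the (population) standard deviation computed over all values in the sample is an assumption, since the text does not spell out how the per-datum σ is obtained:

```python
import math

def preprocess(sample):
    """Normalize each value x to H = |x - average| / sigma, where average
    and sigma are computed over all values in the sample (one reading of
    the text's ambiguous per-datum sigma; assumed here)."""
    avg = sum(sample) / len(sample)
    sigma = math.sqrt(sum((x - avg) ** 2 for x in sample) / len(sample))
    return [abs(x - avg) / sigma for x in sample]

print(preprocess([1.0, 2.0, 3.0, 4.0]))
```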
Step two: a first convolutional neural network and a second convolutional neural network are established.
The first convolutional neural network and the second convolutional neural network established in step two each comprise five convolutional layers conv, three pooling layers pooling, two normalization layers LRN and two fully connected layers fc. Specifically, both networks share the network structure (shown schematically in Fig. 2): conv1+ReLU, pooling, LRN, conv2+ReLU, pooling, LRN, conv3+ReLU, conv4+ReLU, conv5+ReLU, pooling, fully connected+ReLU, fully connected.
In this network structure, each ReLU is an activation function and MSE is the loss function, where:
the formula of the loss function MSE in the first convolutional neural network isWherein y' is the actual collected CPU occupancy rate before the writing, and y is the CPU occupancy rate predicted by the first convolutional neural network (corresponding to the predicted CPU occupancy rate); n1 is the number of samples used to train the first convolutional neural network;
the loss function MSE in the second convolutional neural network is formulated asWherein t' represents the actual brushing time length, and t represents the brushing time length predicted by the second convolutional neural network; n2 is the number of samples used to train the second convolutional neural network.
In this embodiment, n1 and n2 are equal in value and are equal to the number of samples in the sample dataset.
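A minimal sketch of this shared loss computation, using hypothetical occupancy values:

```python
def mse(actual, predicted):
    """Mean squared error, the loss used by both networks:
    MSE = (1/n) * sum_i (actual_i - predicted_i)**2."""
    assert len(actual) == len(predicted)
    n = len(actual)
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n

# e.g. actual vs. predicted CPU occupancy (percent) for n1 = 4 samples
loss = mse([30.0, 42.0, 55.0, 61.0], [28.0, 45.0, 54.0, 60.0])
```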
In addition, the first convolutional neural network and the second convolutional neural network established in step two are configured as follows:
the first layer of convolution layer adopts convolution kernel with the size of 11 multiplied by 11 and the step length of 4;
the second layer of convolution layers, the third layer of convolution layers, the fourth layer of convolution layers and the fifth layer of convolution layers all adopt convolution kernels with the size of 3 multiplied by 3 and the step length of 1;
the outputs of the first and second convolutional layers are each pooled and then normalized with an LRN layer;
the size of each pooling layer is 3×3, and the step size is 2.
The network structure schematic diagrams of the first convolutional neural network and the second convolutional neural network are shown in fig. 2.
In fig. 2, reference numerals 300, 400, 500, and 600 denote conv+relu (convolutional layer+activation function), full connected+relu (fully connected layer+activation function), full connected, pooling (pooling layer), in order.
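The stated kernel and stride settings can be checked with simple output-size arithmetic. The sketch below assumes unpadded (valid) convolutions and a 227×227 input, as in the AlexNet-style layout this structure resembles; the networks here actually consume a parameter vector, so the numbers are purely illustrative:

```python
# Spatial-size bookkeeping for the stated configuration: 11x11 stride-4
# conv1; 3x3 stride-1 conv2..conv5; 3x3 stride-2 pooling layers.

def out_size(n, kernel, stride):
    # valid (unpadded) convolution/pooling output size
    return (n - kernel) // stride + 1

layers = [("conv1", 11, 4), ("pool", 3, 2),   # conv1, pool (then LRN)
          ("conv2", 3, 1),  ("pool", 3, 2),   # conv2, pool (then LRN)
          ("conv3", 3, 1), ("conv4", 3, 1), ("conv5", 3, 1),
          ("pool", 3, 2)]                     # final pool before fc1, fc2

n = 227  # assumed input size, for illustration only
for name, k, s in layers:
    n = out_size(n, k, s)
print(n)  # spatial size entering the fully connected layers
```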
Step three: and training the first convolutional neural network and the second convolutional neural network respectively by using the constructed sample data set to obtain a trained first convolutional neural network model and a trained second convolutional neural network model.
Training the established first convolutional neural network by using the constructed sample data set, wherein the obtained trained first convolutional neural network is a trained first convolutional neural network model; and training the established second convolutional neural network by using the constructed sample data set, wherein the obtained trained second convolutional neural network is the trained second convolutional neural network model.
The first convolutional neural network model extracts depth features of the input sample through its five convolutional layers and predicts the CPU occupancy rate through its two fully connected layers fc, trained under the loss function MSE.
The second convolutional neural network model likewise extracts depth features of the input sample through its five convolutional layers and predicts the flashing duration through its two fully connected layers fc, trained under the loss function MSE.
Optionally, as an embodiment of the present invention, the specific implementation method of the third step is:
dividing the sample data set into a training set and a test set at a ratio of 4:1, and further dividing the training set into k parts according to k-fold cross-validation, with k-1 parts used for training and the remaining part for validation;
Training the established first convolutional neural network and the second convolutional neural network by using training sets respectively;
checking the established first convolutional neural network and the second convolutional neural network by using the test set respectively;
and in each round of training, calculating the difference between the predicted and actual results via the MSE and, against a predetermined minimum validation error, adjusting the parameters and retraining the model over repeated training cycles until the computed MSE is less than or equal to that predetermined minimum, thereby obtaining the trained corresponding convolutional neural network model.
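A sketch of the 4:1 split and k-fold rotation described above; the shuffling seed and k = 5 are assumptions:

```python
import random

def split_train_test(samples, ratio=4, seed=0):
    """4:1 train/test split of the sample data set."""
    data = samples[:]
    random.Random(seed).shuffle(data)
    cut = len(data) * ratio // (ratio + 1)
    return data[:cut], data[cut:]

def k_folds(train, k=5):
    """Yield (k-1 folds for training, 1 fold for validation), k times."""
    folds = [train[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        tr = [s for j, f in enumerate(folds) if j != i for s in f]
        yield tr, val

train, test = split_train_test(list(range(100)))
assert len(train) == 80 and len(test) == 20
for tr, val in k_folds(train, k=5):
    assert len(tr) == 64 and len(val) == 16
```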
FIG. 3 shows one embodiment of a system for ECU flashing parameter optimization based on convolutional neural networks according to the present invention. As shown in FIG. 3, the system 200 is applied to a flashing terminal device and specifically includes:
a parameter acquisition unit 201, configured to acquire hardware configuration parameters of the terminal device for writing, and acquire current writing time sequence parameters of an ECU writing program to be executed on the terminal device for writing; the hardware configuration parameters comprise CPU occupancy rate of the terminal equipment to be refreshed, and the CPU occupancy rate is recorded as CPU occupancy rate z before being refreshed;
the CPU occupancy rate prediction unit 202 is configured to use the hardware configuration parameter acquired by the parameter acquisition unit 201 and the refresh timing parameter acquired by the parameter acquisition unit 201 as input, and predict the CPU occupancy rate of the refresh terminal device by using the trained first convolutional neural network model to obtain a predicted CPU occupancy rate y;
A calculating unit 203, configured to calculate a sum y+z of the pre-brush CPU occupancy z and the predicted CPU occupancy y;
a first judging unit 204, configured to judge whether the sum y+z is smaller than m; m is a preset CPU total occupancy rate threshold;
a flashing duration prediction unit 205, configured to, when the determination result of the first determination unit 204 is yes, decrement the current value of the flashing timing parameters in a preset manner, take the decremented flashing timing parameters and the hardware configuration parameters newly acquired by the parameter acquisition unit 201 as input, and predict and output a predicted flashing duration t using the trained second convolutional neural network;
a second judging unit 206, configured to judge whether the brushing duration t is less than n, where n is a preset brushing duration threshold;
the first execution unit 207, configured to, when the determination result of the first determination unit 204 is no, or the determination result of the second determination unit 206 is no, increment the current value of the flashing timing parameters in a preset manner and then invoke the parameter acquisition unit 201 again;
and a second execution unit 208, configured to control the ECU flashing program to perform flashing according to the latest value of the flashing timing parameter when the determination result of the second determination unit 206 is yes.
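The interaction of the units above (mirroring steps D1 to D8 of the method) can be sketched as a loop. The stub predictors below stand in for the two trained CNN models, and all thresholds, step sizes and the iteration cap are hypothetical:

```python
# Minimal sketch of the parameter-optimization loop: lower the timing
# parameters while the predicted total CPU occupancy stays under m, and
# accept a setting once the predicted flashing duration drops under n;
# otherwise back the parameters off and try again.

def optimize_flash_timing(params, get_hw, predict_cpu, predict_time,
                          m=90.0, n=120.0, step=1, max_iters=50):
    for _ in range(max_iters):
        hw = get_hw()                      # parameter acquisition unit
        z = hw["cpu_before"]               # pre-flash CPU occupancy
        y = predict_cpu(hw, params)        # first CNN model (stub here)
        if y + z < m:                      # first judging unit
            candidate = {k: v - step for k, v in params.items()}
            t = predict_time(hw, candidate)  # second CNN model (stub)
            if t < n:                      # second judging unit
                return candidate           # flash with these values
            params = {k: v + step for k, v in candidate.items()}  # back off
        else:
            params = {k: v + step for k, v in params.items()}     # back off
    return params  # give up after max_iters; use latest values

result = optimize_flash_timing(
    {"CFMS": 10, "FFMS": 20, "RTMS": 500},
    get_hw=lambda: {"cpu_before": 30.0},
    predict_cpu=lambda hw, p: 40.0,
    predict_time=lambda hw, p: sum(p.values()) / 5.0,
)
```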
Optionally, as an embodiment of the present invention, the method for constructing the first convolutional neural network model and the second convolutional neural network model includes:
step Q1: acquiring a sample data set;
step Q2: establishing a first convolutional neural network and a second convolutional neural network;
the first convolutional neural network and the second convolutional neural network each comprise five convolutional layers conv, three pooling layers pooling, two normalization layers LRN and two fully connected layers fc, and share the network structure: conv1+ReLU, pooling, LRN, conv2+ReLU, pooling, LRN, conv3+ReLU, conv4+ReLU, conv5+ReLU, pooling, fully connected+ReLU, fully connected;
in this network structure, each ReLU is an activation function and MSE is the loss function, where: the loss function MSE in the first convolutional neural network is MSE = (1/n1) Σ (y'_i - y_i)^2, summed over i = 1, ..., n1, where y' is the actually collected pre-flash CPU occupancy rate and y is the CPU occupancy rate predicted by the first convolutional neural network; the loss function MSE in the second convolutional neural network is MSE = (1/n2) Σ (t'_i - t_i)^2, summed over i = 1, ..., n2, where t' is the actual flashing duration and t is the flashing duration predicted by the second convolutional neural network; n1 is the number of samples used to train the first convolutional neural network, and n2 is the number of samples used to train the second convolutional neural network;
step Q3: and training the first convolutional neural network and the second convolutional neural network respectively by using the constructed sample data set to obtain a trained first convolutional neural network model and a trained second convolutional neural network model.
The first convolutional neural network and the second convolutional neural network established in the step Q2 are both:
the first layer of convolution layer adopts convolution kernel with the size of 11 multiplied by 11 and the step length of 4;
the second layer of convolution layers, the third layer of convolution layers, the fourth layer of convolution layers and the fifth layer of convolution layers all adopt convolution kernels with the size of 3 multiplied by 3 and the step length of 1;
the outputs of the first and second convolutional layers are each pooled and then normalized with an LRN layer respectively;
the size of each pooling layer is 3×3, and the step size is 2.
Optionally, as an embodiment of the present invention, the implementation method of step Q3 is:
dividing the sample data set into a training set and a test set according to the ratio of 4:1, wherein the training set is divided into k parts according to a k-fold check method, wherein k-1 parts are used as training, and the other part is used as check;
training the established first convolutional neural network and the second convolutional neural network by using training sets respectively;
checking the established first convolutional neural network and the second convolutional neural network by using the test set respectively;
and in each round of training, calculating the difference between the predicted and actual results via the MSE and, against a predetermined minimum validation error, adjusting the parameters and retraining the model over repeated training cycles until the computed MSE is less than or equal to that predetermined minimum, correspondingly obtaining the trained corresponding convolutional neural network model.
Optionally, as an embodiment of the present invention, the implementation method of step Q1 is:
collecting a sample;
preprocessing each acquired sample to correspondingly obtain preprocessed samples of each sample;
collecting all the obtained preprocessed samples to form the sample data set;
the method for preprocessing each acquired sample comprises the following steps:
calculating average value average of all data in the sample;
calculating the standard deviation of each datum in the sample;
each piece of data x in the sample is preprocessed by using the formula H= |x-average|/sigma by using the average value average and the calculated standard deviation, and a preprocessed sample is obtained; wherein, H is the preprocessed data corresponding to the data x, and sigma is the standard deviation corresponding to the data x.
Alternatively, as an embodiment of the present invention, in the parameter acquisition unit 201: the brushing time sequence parameters comprise a frame-to-frame brushing time interval CFMS, a packet-to-packet brushing time interval FFMS and a timeout time setting parameter RTMS corresponding to an ECU brushing program;
the hardware configuration parameters also comprise CPU processing frequency, memory bank size, current residual memory size, CPU occupancy rate corresponding to the refreshing and refreshing time length corresponding to the refreshing of the refreshing terminal equipment; wherein,,
The corresponding CPU occupancy rate is the preset CPU occupancy rate difference; the CPU occupancy rate difference value is used for representing the difference value between the highest CPU occupancy rate during the writing and the latest value of the CPU occupancy rate before the writing;
the brushing time length corresponding to the brushing is a brushing time length preset according to experience and is used for representing the brushing time length used by the ECU brushing program to execute the brushing.
The same or similar parts between the various embodiments in this specification are referred to each other. In particular, for system embodiments, since they are substantially similar to method embodiments, the description is relatively simple, as far as reference is made to the description in the method embodiments.
Although the present invention has been described in detail by way of preferred embodiments with reference to the accompanying drawings, the present invention is not limited thereto. Various equivalent modifications and substitutions may be made to the embodiments of the present invention by those skilled in the art without departing from the spirit and scope of the present invention, and all such modifications and substitutions are intended to fall within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. The method for realizing ECU (electronic control Unit) refreshing parameter optimization based on convolutional neural network is applied to refreshing terminal equipment and is characterized by comprising the following steps:
d1, acquiring hardware configuration parameters of the refreshing terminal equipment, and acquiring current refreshing time sequence parameters of an ECU refreshing program to be executed on the refreshing terminal equipment; the hardware configuration parameters comprise CPU occupancy rate of the terminal equipment to be refreshed, and the CPU occupancy rate is recorded as CPU occupancy rate z before being refreshed;
d2, taking the latest acquired hardware configuration parameters and the latest acquired refreshing time sequence parameters as inputs, and predicting the CPU occupancy rate of the refreshing terminal equipment by adopting a trained first convolutional neural network model to obtain a predicted CPU occupancy rate y;
d3, calculating the sum y+z of the CPU occupancy rate z before the writing and the predicted CPU occupancy rate y;
D4, judging whether the sum y+z is smaller than m; if yes, continuing to execute step D5, if not, executing step D7; wherein m is a preset CPU total occupancy rate threshold;
d5, the current value of the brushing time sequence parameter is self-subtracted according to a preset mode, the brushing time sequence parameter after self-subtraction adjustment and the hardware configuration parameter which is acquired latest in the step D1 are used as input, a second trained convolutional neural network is utilized to predict and output a predicted brushing time length t, and then the step D6 is executed;
D6, judging whether the brushing time t is smaller than n, if yes, turning to the step D8 to continue execution, otherwise, continuing to execute the step D7; wherein n is a preset threshold value of the brushing time length;
d7, self-adding the current value of the brushing time sequence parameter according to a preset mode, and then continuously executing the step D1;
d8, controlling the ECU refreshing program to refresh according to the latest value of the refreshing time sequence parameter;
the construction method of the first convolutional neural network model and the second convolutional neural network model comprises the following steps:
q1: acquiring a sample data set;
q2: establishing a first convolutional neural network and a second convolutional neural network;
the first convolutional neural network and the second convolutional neural network each comprise five convolutional layers conv, three pooling layers pooling, two normalization layers LRN and two full connection layers fc, the two networks having the same network structure;
in the above network structure, each ReLU is an activation function and MSE is a loss function, wherein: the loss function MSE in the first convolutional neural network has the formula MSE = (1/n1) Σ (y'_i - y_i)^2, summed over i = 1, ..., n1, where y' is the actually collected CPU occupancy rate before the refreshing and y is the CPU occupancy rate predicted by the first convolutional neural network; the loss function MSE in the second convolutional neural network has the formula MSE = (1/n2) Σ (t'_i - t_i)^2, summed over i = 1, ..., n2, where t' represents the actual refreshing duration and t represents the refreshing duration predicted by the second convolutional neural network; n1 is the number of samples used to train the first convolutional neural network, and n2 is the number of samples used to train the second convolutional neural network;
q3: and training the first convolutional neural network and the second convolutional neural network respectively by using the constructed sample data set to obtain a trained first convolutional neural network model and a trained second convolutional neural network model.
2. The method for realizing the optimization of the ECU refreshing parameters based on the convolutional neural network according to claim 1, wherein the realization method of the step Q3 is as follows:
dividing the sample data set into a training set and a test set according to the ratio of 4:1, wherein the training set is divided into k parts according to a k-fold check method, wherein k-1 parts are used as training, and the other part is used as check;
training the established first convolutional neural network and the second convolutional neural network by using training sets respectively;
checking the established first convolutional neural network and the second convolutional neural network by using the test set respectively;
in each round of training, calculating the difference between the predicted result and the actual result through the loss function MSE and, against a predetermined minimum check error, adjusting the parameters and retraining the model over repeated training cycles until the calculated result of the loss function is smaller than or equal to the predetermined minimum check error, thereby obtaining the trained corresponding convolutional neural network model.
3. The method for realizing the optimization of the ECU refreshing parameters based on the convolutional neural network according to claim 1, wherein the realization method of the step Q1 is as follows:
collecting a sample;
preprocessing each acquired sample to correspondingly obtain preprocessed samples of each sample;
collecting all the obtained preprocessed samples to form the sample data set;
the method for preprocessing each acquired sample comprises the following steps:
calculating average value average of all data in the sample;
calculating the standard deviation of each datum in the sample;
each piece of data x in the sample is preprocessed by using the formula H= |x-average|/sigma by using the average value average and the calculated standard deviation, and a preprocessed sample is obtained; wherein, H is the preprocessed data corresponding to the data x, and sigma is the standard deviation corresponding to the data x.
4. The method for optimizing the brushing parameters of the ECU based on the convolutional neural network according to claim 1, wherein the brushing timing parameters include a frame-to-frame brushing interval CFMS, a packet-to-packet brushing interval FFMS, and a timeout period setting parameter RTMS corresponding to the brushing program of the ECU;
the hardware configuration parameters also comprise CPU processing frequency, memory bank size, current residual memory size, CPU occupancy rate corresponding to the refreshing and refreshing time length corresponding to the refreshing of the refreshing terminal equipment;
the corresponding CPU occupancy rate is the preset CPU occupancy rate difference; the CPU occupancy rate difference value is used for representing the difference value between the highest CPU occupancy rate during the writing and the latest value of the CPU occupancy rate before the writing;
the brushing time length corresponding to the brushing is a brushing time length preset according to experience and is used for representing the brushing time length used by the ECU brushing program to execute the brushing.
5. A system for realizing ECU refreshing parameter optimization based on convolutional neural network, applied to a refreshing terminal device, characterized by comprising:
the parameter acquisition unit is used for acquiring hardware configuration parameters of the refreshing terminal equipment and acquiring current refreshing time sequence parameters of an ECU refreshing program to be executed on the refreshing terminal equipment; the hardware configuration parameters comprise CPU occupancy rate of the terminal equipment to be refreshed, and the CPU occupancy rate is recorded as CPU occupancy rate z before being refreshed;
The CPU occupancy rate prediction unit is used for taking the hardware configuration parameters acquired by the parameter acquisition unit and the brushing time sequence parameters acquired by the parameter acquisition unit as input, and predicting the CPU occupancy rate of the brushing terminal equipment by adopting the trained first convolutional neural network model to obtain a predicted CPU occupancy rate y;
a calculation unit for calculating the sum y+z of the CPU occupancy rate z before the refreshing and the predicted CPU occupancy rate y;
a first judging unit for judging whether the sum y+z is smaller than m; m is a preset CPU total occupancy rate threshold;
the brushing time length prediction unit is used for automatically subtracting the current value of the brushing time sequence parameter according to a preset mode when the judgment result of the first judgment unit is yes, taking the brushing time sequence parameter after the self-subtraction adjustment and the hardware configuration parameter which is acquired by the parameter acquisition unit newly as input, and predicting and outputting the predicted brushing time length t by using the trained second convolutional neural network;
the second judging unit is used for judging whether the brushing time t is smaller than n, wherein n is a preset brushing time threshold;
the first execution unit is configured to execute, when the determination result of the first determination unit is no and when the determination result of the second determination unit is no: the current value of the brushing time sequence parameter is added automatically according to a preset mode, and then the parameter acquisition unit is called again;
The second execution unit is used for controlling the ECU refreshing program to refresh according to the latest value of the refreshing time sequence parameter when the judgment result of the second judgment unit is yes;
the construction method of the first convolutional neural network model and the second convolutional neural network model comprises the following steps:
q1: acquiring a sample data set;
q2: establishing a first convolutional neural network and a second convolutional neural network;
the first convolutional neural network and the second convolutional neural network each comprise five convolutional layers conv, three pooling layers pooling, two normalization layers LRN and two full connection layers fc, the two networks having the same network structure;
in the above network structure, each ReLU is an activation function and MSE is a loss function, wherein: the loss function MSE in the first convolutional neural network has the formula MSE = (1/n1) Σ (y'_i - y_i)^2, summed over i = 1, ..., n1, where y' is the actually collected CPU occupancy rate before the refreshing and y is the CPU occupancy rate predicted by the first convolutional neural network; the loss function MSE in the second convolutional neural network has the formula MSE = (1/n2) Σ (t'_i - t_i)^2, summed over i = 1, ..., n2, where t' represents the actual refreshing duration and t represents the refreshing duration predicted by the second convolutional neural network; n1 is the number of samples used to train the first convolutional neural network, and n2 is the number of samples used to train the second convolutional neural network;
q3: and training the first convolutional neural network and the second convolutional neural network respectively by using the constructed sample data set to obtain a trained first convolutional neural network model and a trained second convolutional neural network model.
6. The system for realizing the optimization of the ECU refreshing parameters based on the convolutional neural network according to claim 5, wherein the realization method of the step Q3 is as follows:
dividing the sample data set into a training set and a test set according to the ratio of 4:1, wherein the training set is divided into k parts according to a k-fold check method, wherein k-1 parts are used as training, and the other part is used as check;
training the established first convolutional neural network and the second convolutional neural network by using training sets respectively;
checking the established first convolutional neural network and the second convolutional neural network by using the test set respectively;
in each round of training, calculating the difference between the predicted result and the actual result through the loss function MSE and, against a predetermined minimum check error, adjusting the parameters and retraining the model over repeated training cycles until the calculated result of the loss function is smaller than or equal to the predetermined minimum check error, thereby obtaining the trained corresponding convolutional neural network model.
7. The system for realizing the optimization of the ECU refreshing parameters based on the convolutional neural network according to claim 5, wherein the realization method of the step Q1 is as follows:
collecting a sample;
preprocessing each acquired sample to correspondingly obtain preprocessed samples of each sample;
collecting all the obtained preprocessed samples to form the sample data set;
the method for preprocessing each acquired sample comprises the following steps:
calculating average value average of all data in the sample;
calculating the standard deviation of each datum in the sample;
each piece of data x in the sample is preprocessed by using the formula H= |x-average|/sigma by using the average value average and the calculated standard deviation, and a preprocessed sample is obtained; wherein, H is the preprocessed data corresponding to the data x, and sigma is the standard deviation corresponding to the data x.
8. The system for optimizing the brushing parameters of the ECU based on the convolutional neural network according to claim 5, wherein the brushing time sequence parameters comprise a frame-to-frame brushing time interval CFMS, a packet-to-packet brushing time interval FFMS and a timeout time setting parameter RTMS corresponding to the brushing program of the ECU;
The hardware configuration parameters also comprise CPU processing frequency, memory bank size, current residual memory size, CPU occupancy rate corresponding to the refreshing and refreshing time length corresponding to the refreshing of the refreshing terminal equipment;
the corresponding CPU occupancy rate is the preset CPU occupancy rate difference; the CPU occupancy rate difference value is used for representing the difference value between the highest CPU occupancy rate during the writing and the latest value of the CPU occupancy rate before the writing;
the brushing time length corresponding to the brushing is a brushing time length preset according to experience and is used for representing the brushing time length used by the ECU brushing program to execute the brushing.
CN202110327752.9A 2021-03-26 2021-03-26 Method and system for realizing ECU (electronic control Unit) refreshing parameter optimization based on convolutional neural network Active CN112905213B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110327752.9A CN112905213B (en) 2021-03-26 2021-03-26 Method and system for realizing ECU (electronic control Unit) refreshing parameter optimization based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN112905213A CN112905213A (en) 2021-06-04
CN112905213B true CN112905213B (en) 2023-08-08

Family

ID=76109246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110327752.9A Active CN112905213B (en) 2021-03-26 2021-03-26 Method and system for realizing ECU (electronic control Unit) refreshing parameter optimization based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN112905213B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553069B (en) * 2021-06-30 2023-04-07 东风柳州汽车有限公司 Engine ECU (electronic control Unit) flashing method, device and system
CN115062413B (en) * 2022-06-20 2024-07-16 China National Heavy Duty Truck Group Jinan Power Co., Ltd. A process optimization method suitable for vehicle assembly quality control

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106250812A (en) * 2016-07-15 2016-12-21 Tang Ping A vehicle model recognition method based on a Fast R-CNN deep neural network
CN107851213A (en) * 2015-07-22 2018-03-27 Qualcomm Inc. Transfer learning in neural networks
CN108647834A (en) * 2018-05-24 2018-10-12 Zhejiang University of Technology A traffic flow forecasting method based on a convolutional neural network structure
CN108965001A (en) * 2018-07-12 2018-12-07 Beihang University An assessment method and device for a vehicle message data model
CN109711349A (en) * 2018-12-28 2019-05-03 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for generating control instructions
CN109784489A (en) * 2019-01-16 2019-05-21 School of Software and Microelectronics, Peking University FPGA-based convolutional neural network IP core
CN110210644A (en) * 2019-04-17 2019-09-06 Zhejiang University A traffic flow forecasting method based on deep neural network integration
CN110555057A (en) * 2019-08-19 2019-12-10 Wuhan Shiji Chulin Technology Co., Ltd. Energy-saving big data analysis method and device, terminal equipment and storage medium
CN111050116A (en) * 2018-10-12 2020-04-21 Honda Motor Co., Ltd. System and method for online motion detection using a temporal recursive network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3535625B1 (en) * 2016-12-07 2021-02-24 Arilou Information Security Technologies Ltd. System and method for using signal waveform analysis for detecting a change in a wired network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yun Qianrui. Research on configuration optimization and parameter calibration methods for hybrid electric buses. China Master's Theses Full-text Database, Engineering Science and Technology II. 2020, (8), C035-589. *

Also Published As

Publication number Publication date
CN112905213A (en) 2021-06-04

Similar Documents

Publication Publication Date Title
CN112905213B (en) Method and system for realizing ECU (electronic control Unit) refreshing parameter optimization based on convolutional neural network
CN115915708B (en) Refrigeration equipment control parameter prediction method and device, electronic equipment and storage medium
CN101706882B (en) Embedded platform based neural network model online training method
CN116562156B (en) Training method, device, equipment and storage medium for control decision model
CN110427006A (en) A kind of multi-agent cooperative control system and method for process industry
CN116739147A (en) BIM-based intelligent energy consumption management and dynamic carbon emission calculation combined method and system
CN107591001A (en) Expressway Traffic Flow data filling method and system based on on-line proving
CN114878896B (en) Voltage determination method, device, electronic device and storage medium
CN110569562A (en) Short-term power load forecasting method and system based on trajectory tracking and error correction
CN115859808B (en) Pump unit work prediction method and device, electronic equipment and storage medium
CN117556725A (en) A flow field prediction method and system
CN116717391A (en) Parameter correction method, device, equipment and medium for main charge model of vehicle engine
CN116629136A (en) Updating method, device, equipment and storage medium of digital twin model
CN116257943A (en) A Simulation Method for Pressure Parameters of Nuclear Power Turbine
CN114996660B (en) Carbon capacity prediction method and device, electronic equipment and storage medium
CN115017466B (en) Carbon capacity determination method and device, electronic equipment and storage medium
CN117952806A (en) A smart low-carbon management method and system for achieving carbon peak and carbon neutrality
CN116638520A (en) Method and system for cross-process fault diagnosis of industrial robots based on transfer learning
CN115310359A (en) Method, device, equipment and medium for determination of transient nitrogen oxide emissions
CN110909455B (en) Method for delaying performance degradation of solid oxide fuel cell
CN114687859A (en) Method, device and equipment for compensating work unevenness of engine and storage medium
CN114492007A (en) Factor effect online identification method and device based on hierarchical error control
CN119393240B (en) A method for calculating fuel injection quantity
CN118977697B (en) Range extender power following control method, device, equipment, product and medium
CN111274409A (en) Method and system for assembly process control of engine valve train based on knowledge graph

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant