CN110352297B - machine learning device - Google Patents
- Publication number
- CN110352297B (application CN201980001105.XA)
- Authority
- CN
- China
- Prior art keywords
- values
- internal combustion
- combustion engine
- operating parameters
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Classifications
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F02—COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
- F02D—CONTROLLING COMBUSTION ENGINES
- F02D41/00—Electrical control of supply of combustible mixture or its constituents
- F02D41/02—Circuit arrangements for generating control signals
- F02D41/14—Introducing closed-loop corrections
- F02D41/1401—Introducing closed-loop corrections characterised by the control or regulation method
- F02D41/1405—Neural network control
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F02—COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
- F02D—CONTROLLING COMBUSTION ENGINES
- F02D41/00—Electrical control of supply of combustible mixture or its constituents
- F02D41/02—Circuit arrangements for generating control signals
- F02D41/14—Introducing closed-loop corrections
- F02D41/1438—Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor
- F02D41/1444—Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases
- F02D41/146—Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases the characteristics being an NOx content or concentration
- F02D41/1461—Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases the characteristics being an NOx content or concentration of the exhaust gases emitted by the engine
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0205—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric not using a model or a simulator of the controlled system
- G05B13/024—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric not using a model or a simulator of the controlled system in which a parameter or coefficient is automatically adjusted to optimise the performance
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
- G05B13/027—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B23/00—Testing or monitoring of control systems or parts thereof
- G05B23/02—Electric testing or monitoring
- G05B23/0205—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
- G05B23/0218—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
- G05B23/0243—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model
- G05B23/0254—Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model based on a quantitative model, e.g. mathematical relationships between inputs and outputs; functions: observer, Kalman filter, residual calculation, Neural Networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F02—COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
- F02D—CONTROLLING COMBUSTION ENGINES
- F02D2200/00—Input parameters for engine control
- F02D2200/02—Input parameters for engine control the parameters being related to the engine
- F02D2200/04—Engine intake system parameters
- F02D2200/0414—Air temperature
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F02—COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
- F02D—CONTROLLING COMBUSTION ENGINES
- F02D2200/00—Input parameters for engine control
- F02D2200/02—Input parameters for engine control the parameters being related to the engine
- F02D2200/10—Parameters related to the engine output, e.g. engine torque or engine speed
- F02D2200/1002—Output torque
- F02D2200/1004—Estimation of the output torque
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F02—COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
- F02D—CONTROLLING COMBUSTION ENGINES
- F02D2200/00—Input parameters for engine control
- F02D2200/02—Input parameters for engine control the parameters being related to the engine
- F02D2200/10—Parameters related to the engine output, e.g. engine torque or engine speed
- F02D2200/101—Engine speed
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F02—COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
- F02D—CONTROLLING COMBUSTION ENGINES
- F02D41/00—Electrical control of supply of combustible mixture or its constituents
- F02D41/02—Circuit arrangements for generating control signals
- F02D41/14—Introducing closed-loop corrections
- F02D41/1438—Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor
- F02D41/1444—Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases
- F02D41/1459—Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases the characteristics being a hydrocarbon content or concentration
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B11/00—Automatic controllers
- G05B11/01—Automatic controllers electric
- G05B11/36—Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Mathematical Physics (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Medical Informatics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Automation & Control Theory (AREA)
- Chemical & Material Sciences (AREA)
- Combustion & Propulsion (AREA)
- Mechanical Engineering (AREA)
- Feedback Control In General (AREA)
- Combined Controls Of Internal Combustion Engines (AREA)
Abstract
Appropriate output values can be obtained even when the value of an operating parameter falls outside a preset range. In a machine learning device that uses a neural network to output an output value corresponding to the value of an operating parameter of a machine, when the value of the operating parameter of the machine falls outside the preset range, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased, and the weights of the neural network are learned, using training data obtained by actual measurement for the newly acquired value of the operating parameter of the machine, so that the difference between the output value, which changes according to the value of the operating parameter of the machine, and the training data corresponding to that value becomes small.
Description
Technical Field
The present invention relates to a machine learning device.
Background Art
Among control devices for internal combustion engines that use a neural network, there is a known control device that learns the weights of the neural network in advance, based on values of operating parameters of the engine such as the engine speed and the intake air amount, so that the estimated amount of gas drawn into the combustion chamber matches the actual amount of gas drawn into the combustion chamber. During engine operation, the neural network with the learned weights is then used to estimate the amount of gas drawn into the combustion chamber from the values of the operating parameters of the engine (see, for example, Patent Document 1).
Prior Art Literature
Patent Literature
Patent Document 1: Japanese Patent Laid-Open No. 2012-112277
Summary of the Invention
Problem to Be Solved by the Invention
The range of values over which a particular type of engine-related operating parameter, such as the engine speed, will be used can be assumed in advance according to the type of engine. Therefore, the weights of the neural network are usually learned in advance, over this assumed usage range of the values of the operating parameters, so that the difference between the output value of the neural network and the actual value, such as the actual amount of gas drawn into the combustion chamber, becomes small. In practice, however, the value of an operating parameter of the engine sometimes falls outside the assumed usage range. In that case, since no learning based on actual values has been performed outside the assumed usage range, there is the problem that the output value computed using the neural network may deviate greatly from the actual value. Such a problem is not limited to the field of internal combustion engines, but arises in machines of various fields that are targets of machine learning.
To solve the above problem, according to a first invention, there is provided a machine learning device for using a neural network to output an output value corresponding to the value of an operating parameter of a machine, wherein a range of values of a particular type of operating parameter related to the machine is set in advance, and the number of nodes in a hidden layer of the neural network corresponding to that range is set in advance. When a newly acquired value of the particular type of operating parameter related to the machine falls outside the preset range, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased, the weights of the neural network are learned using training data obtained by actual measurement for the newly acquired value of the particular type of operating parameter related to the machine together with training data obtained by actual measurement for values of the operating parameter of the machine within the preset range, and the neural network with the learned weights is used to output an output value corresponding to the value of the particular type of operating parameter related to the machine.
To solve the above problem, according to a second invention, there is provided a machine learning device for using a neural network to output an output value corresponding to the values of operating parameters of a machine, wherein ranges of values of a plurality of types of operating parameters related to the machine are set in advance, and the number of nodes in a hidden layer of the neural network corresponding to those ranges is set in advance. When newly acquired values of the plurality of types of operating parameters related to the machine fall outside the preset ranges, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased, the weights of the neural network are learned using training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters related to the machine together with training data obtained by actual measurement for values of the operating parameters of the machine within the preset ranges, and the neural network with the learned weights is used to output an output value corresponding to the values of the plurality of types of operating parameters related to the machine.
To solve the above problem, according to a third invention, there is provided a machine learning device for using a neural network to output an output value corresponding to the values of operating parameters of a machine, wherein ranges of values of a plurality of types of operating parameters related to the machine are set in advance, and a neural network corresponding to those ranges is formed in advance. When at least one of the newly acquired values of the plurality of types of operating parameters related to the machine falls outside its preset range, a new neural network is formed, the weights of the new neural network are learned using training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters related to the machine, and the neural network with the learned weights is used to output an output value corresponding to the values of the plurality of types of operating parameters related to the machine.
Effects of the Invention
In each of the above inventions, when a newly acquired value of an operating parameter of the machine falls outside the preset range, increasing the number of nodes in a hidden layer of the neural network, or creating a new neural network, makes it possible to prevent the output value computed using the neural network from deviating greatly from the actual value when the value of the operating parameter of the machine takes a value outside the preset range.
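The patent gives no code, but the procedure shared by the inventions above, i.e. detect an out-of-range parameter value, enlarge the hidden layer that precedes the output layer, then relearn the weights on the new measurement together with the existing training data, can be sketched as follows. Every concrete detail here (class and function names, layer sizes, learning rate, and the quadratic function standing in for measured training data) is an illustrative assumption, and plain per-sample gradient descent stands in for whatever learning rule an implementation would actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

class OneHiddenLayerNet:
    """Tiny 1-n-1 network: sigmoid hidden layer, identity output node."""
    def __init__(self, n_hidden):
        self.w1 = rng.normal(size=(n_hidden, 1))  # input -> hidden weights
        self.b1 = rng.normal(size=n_hidden)       # hidden biases
        self.w2 = rng.normal(size=n_hidden)       # hidden -> output weights

    def forward(self, x):
        z = sigmoid(self.w1[:, 0] * x + self.b1)  # hidden-layer outputs
        return float(z @ self.w2)                 # identity output node

    def add_nodes(self, k):
        """Enlarge the hidden layer preceding the output layer by k nodes."""
        self.w1 = np.vstack([self.w1, rng.normal(size=(k, 1))])
        self.b1 = np.concatenate([self.b1, rng.normal(size=k)])
        # New output weights start at zero, so the enlarged network initially
        # reproduces the already-learned mapping exactly.
        self.w2 = np.concatenate([self.w2, np.zeros(k)])

def train(net, xs, ts, lr=0.05, epochs=2000):
    """Per-sample gradient descent on squared error (stand-in learning rule)."""
    for _ in range(epochs):
        for x, t in zip(xs, ts):
            z = sigmoid(net.w1[:, 0] * x + net.b1)
            err = z @ net.w2 - t
            net.w2 -= lr * err * z
            dz = err * net.w2 * z * (1 - z)
            net.w1[:, 0] -= lr * dz * x
            net.b1 -= lr * dz

# Preset range of the operating parameter and its measured training data
# (x**2 is an arbitrary stand-in for an actually measured quantity).
PRESET_RANGE = (-1.0, 1.0)
xs_in = np.linspace(-1.0, 1.0, 11)
net = OneHiddenLayerNet(n_hidden=3)
train(net, xs_in, xs_in ** 2)

x_new = 1.8  # newly acquired value, outside the preset range
if not (PRESET_RANGE[0] <= x_new <= PRESET_RANGE[1]):
    net.add_nodes(2)                   # grow the hidden layer before the output layer
    xs_all = np.append(xs_in, x_new)   # old training data plus the new measurement
    train(net, xs_all, xs_all ** 2)

print(abs(net.forward(x_new) - x_new ** 2))
```

Initializing the new output weights to zero is one way to keep the enlarged network's behaviour continuous with the old one before relearning starts; the patent itself does not prescribe an initialization.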
Brief Description of the Drawings
FIG. 1 is an overall view of an internal combustion engine.
FIG. 2 is a diagram showing an example of a neural network.
FIGS. 3A and 3B are diagrams showing changes in the value of the sigmoid function σ.
FIGS. 4A and 4B are diagrams showing a neural network and the output values from the nodes of a hidden layer, respectively.
FIGS. 5A and 5B are diagrams showing the output values from the nodes of a hidden layer and the output values from the node of the output layer, respectively.
FIGS. 6A and 6B are diagrams showing a neural network and the output values from the node of the output layer, respectively.
FIGS. 7A and 7B are diagrams for explaining the problem to be solved by the invention of the present application.
FIGS. 8A and 8B are diagrams showing a neural network and the relationship between the input values and output values of the neural network, respectively.
FIG. 9 is a diagram showing a neural network.
FIG. 10 is a flowchart for executing learning processing.
FIG. 11 is a diagram showing a modification of the neural network.
FIG. 12 is a flowchart showing another embodiment for executing learning processing.
FIG. 13 is a diagram showing a neural network.
FIGS. 14A and 14B are diagrams showing preset ranges of the engine speed and the like.
FIG. 15 is a diagram showing a modification of the neural network.
FIG. 16 is a flowchart showing yet another embodiment for executing learning processing.
FIG. 17 is a diagram showing learned divided regions divided according to the values of the operating parameters of the internal combustion engine.
FIGS. 18A, 18B, and 18C are diagrams showing, respectively, the distribution of training data with respect to engine speed and ignition timing, the distribution of training data with respect to ignition timing and throttle opening, and the relationship between training data and output values after learning.
FIGS. 19A and 19B are diagrams showing the relationship between training data and output values after learning.
FIG. 20 is an overall view of a machine learning device for automatically adjusting an air conditioner.
FIG. 21 is a diagram showing a neural network.
FIGS. 22A and 22B are diagrams showing preset ranges of air temperature and the like.
FIG. 23 is a flowchart showing yet another embodiment for executing learning processing.
FIGS. 24A and 24B are diagrams showing preset ranges of air temperature and the like.
FIG. 25 is a flowchart showing yet another embodiment for executing learning processing.
FIG. 26 is an overall view of a machine learning device for estimating the degree of deterioration of a secondary battery.
FIG. 27 is a diagram showing a neural network.
FIGS. 28A and 28B are diagrams showing preset ranges of air temperature and the like.
FIG. 29 is a flowchart for executing calculation processing.
FIG. 30 is a flowchart for executing training data acquisition processing.
FIG. 31 is a flowchart showing yet another embodiment for executing learning processing.
FIGS. 32A and 32B are diagrams showing preset ranges of air temperature and the like.
FIG. 33 is a flowchart showing yet another embodiment for executing learning processing.
Detailed Description of Embodiments
<Overall Structure of the Internal Combustion Engine>
First, a case in which the machine learning device of the present invention is applied to an internal combustion engine will be described. Referring to FIG. 1, which shows an overall view of an internal combustion engine, 1 denotes an engine main body, 2 denotes a combustion chamber of each cylinder, 3 denotes a spark plug arranged in the combustion chamber 2 of each cylinder, 4 denotes a fuel injection valve for supplying fuel (for example, gasoline) to each cylinder, 5 denotes a surge tank, 6 denotes an intake branch pipe, and 7 denotes an exhaust manifold. The surge tank 5 is connected via an intake duct 8 to the outlet of a compressor 9a of an exhaust turbocharger 9, and the inlet of the compressor 9a is connected via an intake air amount detector 10 to an air cleaner 11. A throttle valve 12 driven by an actuator 13 is arranged in the intake duct 8, and a throttle opening sensor 14 for detecting the throttle opening is attached to the throttle valve 12. In addition, an intercooler 15 for cooling the intake air flowing through the intake duct 8 is arranged around the intake duct 8.
The exhaust manifold 7 is connected to the inlet of an exhaust turbine 9b of the exhaust turbocharger 9, and the outlet of the exhaust turbine 9b is connected via an exhaust pipe 16 to a catalytic converter 17 for exhaust purification. The exhaust manifold 7 and the surge tank 5 are connected to each other via an exhaust gas recirculation (hereinafter, EGR) passage 18, and an EGR control valve 19 is arranged in the EGR passage 18. Each fuel injection valve 4 is connected to a fuel distribution pipe 20, and the fuel distribution pipe 20 is connected via a fuel pump 21 to a fuel tank 22. An NOx sensor 23 for detecting the NOx concentration in the exhaust gas is arranged in the exhaust pipe 16. In addition, an atmospheric temperature sensor 24 for detecting the atmospheric temperature is arranged in the air cleaner 11.
An electronic control unit 30 is constituted by a digital computer and comprises a ROM (read-only memory) 32, a RAM (random access memory) 33, a CPU (microprocessor) 34, an input port 35, and an output port 36, which are connected to one another by a bidirectional bus 31. The output signals of the intake air amount detector 10, the throttle opening sensor 14, the NOx sensor 23, and the atmospheric temperature sensor 24 are input to the input port 35 via corresponding AD converters 37. A load sensor 41 that generates an output voltage proportional to the depression amount of an accelerator pedal 40 is connected to the accelerator pedal 40, and the output voltage of the load sensor 41 is input to the input port 35 via a corresponding AD converter 37. Furthermore, a crank angle sensor 42 that generates an output pulse each time the crankshaft rotates by, for example, 30° is connected to the input port 35. In the CPU 34, the engine speed is calculated based on the output signal of the crank angle sensor 42. The output port 36 is connected via corresponding drive circuits 38 to the spark plugs 3, the fuel injection valves 4, the throttle-driving actuator 13, the EGR control valve 19, and the fuel pump 21.
<Outline of the Neural Network>
In the embodiments of the present invention, a neural network is used to estimate various values representing the performance of the internal combustion engine. FIG. 2 shows an example of this neural network. The circular symbols in FIG. 2 represent artificial neurons, which in a neural network are usually called nodes or units (referred to as nodes in the present application). In FIG. 2, L=1 denotes the input layer, L=2 and L=3 denote hidden layers, and L=4 denotes the output layer. In addition, in FIG. 2, x1 and x2 denote the output values from the nodes of the input layer (L=1), y denotes the output value from the node of the output layer (L=4), z1, z2, and z3 denote the output values from the nodes of the hidden layer (L=2), and z1 and z2 denote the output values from the nodes of the hidden layer (L=3). Note that the number of hidden layers may be one or any other number, and the number of nodes in the input layer and in each hidden layer may also be set to any number. Although FIG. 2 shows the case where the output layer has one node, the output layer may have two or more nodes.
At each node of the input layer, the input is output as it is. The output values x1 and x2 of the nodes of the input layer are input to each node of the hidden layer (L=2), and at each node of the hidden layer (L=2) a total input value u is calculated using the corresponding weights w and bias b. For example, the total input value uk calculated at the node indicated by zk (k = 1, 2, 3) of the hidden layer (L=2) in FIG. 2 is given by:

uk = x1·wk1 + x2·wk2 + bk
Next, this total input value uk is transformed by an activation function f and is output from the node indicated by zk of the hidden layer (L=2) as the output value zk (= f(uk)). The same applies to the other nodes of the hidden layer (L=2). The output values z1, z2, and z3 of the nodes of the hidden layer (L=2) are in turn input to each node of the hidden layer (L=3), and at each node of the hidden layer (L=3) the total input value u (= Σz·w + b) is calculated using the corresponding weights w and bias b. This total input value u is likewise transformed by an activation function and output from the nodes of the hidden layer (L=3) as the output values z1 and z2. In the embodiments of the present invention, a sigmoid function σ is used as this activation function.
The output values z1 and z2 of the nodes of the hidden layer (L=3) are input to the node of the output layer (L=4), and at the node of the output layer the total input value u (= Σz·w + b) is calculated using the corresponding weights w and bias b, or the total input value u (= Σz·w) is calculated using only the corresponding weights w. In the embodiments of the present invention, an identity function is used at the node of the output layer; therefore, the total input value u calculated at the node of the output layer is output from that node as the output value y as it is.
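As a concrete illustration of the forward pass just described, the sketch below wires up a network with the layer sizes of FIG. 2 (two input nodes, hidden layers of three and two nodes, one output node). All weight and bias values are arbitrary placeholders chosen for the example, not learned values from the patent.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Layer sizes as in FIG. 2: input (L=1) 2 nodes, hidden (L=2) 3 nodes,
# hidden (L=3) 2 nodes, output (L=4) 1 node.  Values are illustrative only.
W2 = np.array([[0.5, -0.3], [0.8, 0.1], [-0.4, 0.7]]); b2 = np.array([0.1, -0.2, 0.0])
W3 = np.array([[0.6, -0.5, 0.2], [0.3, 0.9, -0.7]]);  b3 = np.array([0.05, -0.1])
W4 = np.array([[1.2, -0.8]]);                          b4 = np.array([0.0])

def forward(x):
    z2 = sigmoid(W2 @ x + b2)   # u = sum(x*w) + b at each node, then f = sigmoid
    z3 = sigmoid(W3 @ z2 + b3)  # second hidden layer, same rule
    y = W4 @ z3 + b4            # identity activation at the output node
    return y[0]

print(forward(np.array([1.0, 2.0])))
```

Because the output node is an identity unit fed by two sigmoid outputs in (0, 1), the result here is always strictly between −0.8 and 1.2 for these particular weights.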
<Representation of Functions by a Neural Network>
A neural network can express an arbitrary function; this is briefly explained next. First, the sigmoid function σ used as the activation function is described. The sigmoid function is given by σ(x) = 1/(1 + exp(-x)) and, as shown in FIG. 3A, takes a value between 0 and 1 depending on the value of x. If x is replaced by wx + b, the sigmoid function becomes σ(wx + b) = 1/(1 + exp(-wx - b)). As the value of w is increased, the slope of the curved portion of the sigmoid function σ(wx + b) becomes progressively steeper, as shown by the curves σ1, σ2 and σ3 in FIG. 3B, and if w is made infinitely large, σ(wx + b) changes stepwise at x = -b/w (the x at which wx + b = 0, i.e., at which σ(wx + b) = 0.5), as shown by the curve σ4 in FIG. 3B. By exploiting this property of the sigmoid function σ, an arbitrary function can be expressed using a neural network.
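The two properties used above — that σ(wx + b) always passes through 0.5 at x = -b/w, and that it approaches a step at that point as w grows — can be checked numerically. A short sketch (our own illustration, not from the patent):

```python
import math

def sigmoid(x):
    """sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

b = 2.0
for w in (1.0, 10.0, 100.0):
    # the midpoint sigma = 0.5 always sits at x = -b/w, whatever w is
    assert abs(sigmoid(w * (-b / w) + b) - 0.5) < 1e-12

# for very large w the transition is nearly a step at x = -b/w
w = 1000.0
step_x = -b / w
low = sigmoid(w * (step_x - 0.01) + b)   # just left of the step
high = sigmoid(w * (step_x + 0.01) + b)  # just right of the step
```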
For example, a neural network composed of an input layer (L=1) with one node, a hidden layer (L=2) with two nodes, and an output layer (L=3) with one node, as shown in FIG. 4A, can express a function that approximates a quadratic function. In this case an arbitrary function could be expressed even if the output layer (L=3) had a plurality of nodes, but for ease of understanding, the case where the output layer (L=3) has a single node is taken as an example. In the neural network shown in FIG. 4A, the input value x is input to the node of the input layer (L=1), and the input value u = x·w1(L2) + b1, calculated using the weight w1(L2) and the bias b1, is input to the node denoted z1 of the hidden layer (L=2). This input value u is transformed by the sigmoid function σ(x·w1(L2) + b1) and output as the output value z1. Similarly, the input value u = x·w2(L2) + b2, calculated using the weight w2(L2) and the bias b2, is input to the node denoted z2 of the hidden layer (L=2), and this input value u is transformed by the sigmoid function σ(x·w2(L2) + b2) and output as the output value z2.
The output values z1 and z2 of the nodes of the hidden layer (L=2) are input to the node of the output layer (L=3), and at the node of the output layer the total input value u (Σ z·w = z1·w1(y) + z2·w2(y)) is calculated using the corresponding weights w1(y) and w2(y). As described above, in the embodiments of the present invention an identity function is used at the node of the output layer, so the total input value u calculated at the node of the output layer is output as the output value y as it is.
FIG. 4B(I) shows the output value z1 from the node of the hidden layer (L=2) when the weight w1(L2) and the bias b1 are set so that the value of the sigmoid function σ(x·w1(L2) + b1) is substantially zero at x = 0. On the other hand, if, for example, the weight w2(L2) in the sigmoid function σ(x·w2(L2) + b2) is given a negative value, the curve of σ(x·w2(L2) + b2) takes a shape that decreases as x increases, as shown in FIG. 4B(II). FIG. 4B(II) shows the change in the output value z2 from the node of the hidden layer (L=2) when the weight w2(L2) and the bias b2 are set so that the value of the sigmoid function σ(x·w2(L2) + b2) is substantially zero at x = 0.
FIG. 4B(III) shows by a solid line the sum (z1 + z2) of the output values z1 and z2 from the nodes of the hidden layer (L=2). As shown in FIG. 4A, the output values z1 and z2 are multiplied by the corresponding weights w1(y) and w2(y), and the broken line A in FIG. 4B(III) shows the change in the output value y when w1(y), w2(y) > 1 and w1(y) ≈ w2(y). Furthermore, the one-dot chain line B in FIG. 4B(III) shows the output value y when w1(y), w2(y) > 1 and w1(y) > w2(y), and the one-dot chain line C shows the output value y when w1(y), w2(y) > 1 and w1(y) < w2(y). Within the range denoted W in FIG. 4B(III), the shape of the broken line A represents a curve approximating a quadratic function of the form y = ax2 (a is a coefficient); it can therefore be seen that a neural network such as that shown in FIG. 4A can express a function approximating a quadratic function.
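The construction above — one sigmoid rising for positive x, one rising for negative x, summed at an identity output node — can be sketched numerically. The particular weight and bias values below are our own choices for illustration, not values from the patent; they are picked only so that both sigmoids are near zero at x = 0, as in FIG. 4B.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# hidden node 1 rises for positive x; hidden node 2 rises for negative x
w1, b1 = 2.0, -4.0    # z1 = sigma(2x - 4), approximately 0 at x = 0
w2, b2 = -2.0, -4.0   # z2 = sigma(-2x - 4), approximately 0 at x = 0
wy1, wy2 = 4.0, 4.0   # equal output weights (the w1(y) ~ w2(y) case, line A)

def y(x):
    """Output of the 1-2-1 network of FIG. 4A with an identity output node."""
    return wy1 * sigmoid(w1 * x + b1) + wy2 * sigmoid(w2 * x + b2)
```

The resulting y(x) is symmetric and upward-opening near the origin, resembling y = ax2 within a limited range of x.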
FIG. 5A shows the case where the values of the sigmoid function σ change stepwise, as in FIG. 3B, as a result of increasing the values of the weights w1(L2) and w2(L2) in FIG. 4A. FIG. 5A(I) shows the output value z1 from the node of the hidden layer (L=2) when the weight w1(L2) and the bias b1 are set so that the value of the sigmoid function σ(x·w1(L2) + b1) increases stepwise at x = -b1/w1(L2). FIG. 5A(II) shows the output value z2 from the node of the hidden layer (L=2) when the weight w2(L2) and the bias b2 are set so that the value of the sigmoid function σ(x·w2(L2) + b2) decreases stepwise at x = -b2/w2(L2), which is slightly larger than x = -b1/w1(L2). FIG. 5A(III) shows by a solid line the sum (z1 + z2) of the output values z1 and z2 from the nodes of the hidden layer (L=2). As shown in FIG. 4A, the output values z1 and z2 are multiplied by the corresponding weights w1(y) and w2(y), and the broken line in FIG. 5A(III) shows the output value y when w1(y), w2(y) > 1.
Thus, in the neural network shown in FIG. 4A, a pair of nodes in the hidden layer (L=2) yields a bar-shaped output value y as shown in FIG. 5A(III). Therefore, if the number of paired nodes in the hidden layer (L=2) is increased and the values of the weights w and biases b at the nodes of the hidden layer (L=2) are set appropriately, a function approximating the function y = f(x) shown by the broken curve in FIG. 5B can be expressed. Although the bars are drawn adjoining one another in FIG. 5B, in practice they may partially overlap. Also, since the value of w is not actually infinite, each bar is not exactly bar-shaped but takes a curved shape like the upper half of the curved portion denoted σ3 in FIG. 3B. Although a detailed description is omitted, as shown in FIG. 6A, if a corresponding pair of nodes is provided in the hidden layer (L=2) for each of two different input values x1 and x2, a columnar output value y corresponding to the input values x1 and x2 is obtained, as shown in FIG. 6B. In this case, if many paired nodes are provided in the hidden layer (L=2) for each of the input values x1 and x2, a plurality of columnar output values y corresponding to different combinations of the input values x1 and x2 are obtained, so a function representing the relationship between the input values x1 and x2 and the output value y can be expressed.
It should be noted that even when there are three or more different input values x, a function representing the relationship between the input value x and the output value y can be expressed similarly.
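The bar of FIG. 5A(III) can be sketched directly: one near-step sigmoid rising at the left edge plus one falling at the right edge gives a value of about 1 inside the interval and about 0 outside it, and summing many such weighted bars approximates an arbitrary f(x) as in FIG. 5B. This is our own illustration of the mechanism; the steepness constant is an assumed stand-in for the large w discussed above.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bar(x, left, right, w=200.0):
    """Output of one pair of near-step hidden nodes: approximately 1 for
    left <= x <= right and approximately 0 elsewhere. `w` is an assumed
    steepness; the exact step would need w to be infinite."""
    up = sigmoid(w * (x - left))           # z1: steps 0 -> 1 at x = left
    down = 1.0 - sigmoid(w * (x - right))  # z2: steps 1 -> 0 at x = right
    return up + down - 1.0                 # ~1 inside the bar, ~0 outside
```

A piecewise approximation of f(x) is then the sum of bar(x, a_i, a_{i+1}) terms, each multiplied by an output weight equal to the desired height over that interval.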
<Learning in a Neural Network>
In the embodiments of the present invention, the values of the weights w and of the biases b in the neural network are learned using the error backpropagation method. Since the error backpropagation method is well known, only its outline is described briefly below. Since a bias b is a kind of weight w, in the following description a bias b is treated as one of the weights w. In a neural network such as that shown in FIG. 2, when the weights applied in the input values u(L) of the nodes of each layer L = 2, L = 3 or L = 4 are denoted w(L), the differential of the error function E with respect to a weight w(L), that is, the gradient ∂E/∂w(L), can be rewritten as follows:

∂E/∂w(L) = (∂E/∂u(L)) · (∂u(L)/∂w(L))   ... (1)
Here, since ∂u(L)/∂w(L) = z(L-1), if we define δ(L) = ∂E/∂u(L), the above equation (1) can be expressed by the following formula:

∂E/∂w(L) = δ(L) · z(L-1)   ... (2)
Here, when u(L) fluctuates, the error function E fluctuates through the resulting change in the total input values uk(L+1) of the next layer, so δ(L) can be expressed by the following formula:

δ(L) = Σk (∂E/∂uk(L+1)) · (∂uk(L+1)/∂u(L))   (k = 1, 2, ...)   ... (3)
Here, writing z(L) = f(u(L)), the input value uk(L+1) appearing on the right side of the above equation (3) can be expressed by the following formula:

uk(L+1) = Σ wk(L+1) · z(L) = Σ wk(L+1) · f(u(L))   ... (4)
Here, the first factor on the right side of the above equation (3) is δ(L+1), and the second factor on the right side of equation (3), ∂uk(L+1)/∂u(L), can be expressed by the following formula:

∂uk(L+1)/∂u(L) = wk(L+1) · f'(u(L))   ... (5)
Therefore, δ(L) is expressed by the following formula:

δ(L) = Σk wk(L+1) · δ(L+1) · f'(u(L))
That is, δ(L) = f'(u(L)) · Σk (wk(L+1) · δ(L+1))   ... (6)
That is, once δ(L+1) is obtained, δ(L) can be obtained.
When training data yt has been obtained for a certain input value and the output value from the output layer for that input value is y, and the squared error is used as the error function, the squared error E is given by E = 1/2 (y - yt)2. In this case, the output value at the node of the output layer (L=4) of FIG. 2 is y = f(u(L)), so the value of δ(L) at the node of the output layer (L=4) becomes as follows:

δ(L) = ∂E/∂u(L) = (∂E/∂y) · (∂y/∂u(L)) = (y - yt) · f'(u(L))   ... (7)
In the embodiments of the present invention, as described above, f(u(L)) is an identity function, so f'(u(L)) = 1. Therefore δ(L) = y - yt, and δ(L) is obtained.
Once δ(L) is obtained, δ(L-1) of the preceding layer is obtained using the above equation (6). The δ values of the preceding layers are obtained successively in this way, and using these values of δ, the differential of the error function E with respect to each weight w, that is, the gradient ∂E/∂w, is obtained from the above equation (2). Once the gradient is obtained, it is used to update the value of the weight w so that the value of the error function E decreases. That is, the value of the weight w is learned. When the output layer (L=4) has a plurality of nodes, with the output values from the nodes denoted y1, y2, ... and the corresponding training data denoted yt1, yt2, ..., the following sum-of-squares error E is used as the error function E:

E = 1/2 Σk (yk - ytk)2   ... (8)
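The whole procedure — forward pass, δ at the identity output node from equation (7), δ at the hidden nodes from equation (6), and weight updates from equation (2) with a learning-rate step — can be sketched for the 1-K-1 network of FIG. 4A. This is our own minimal implementation for illustration; the learning rate, seed and epoch count are assumed values, not taken from the patent.

```python
import math
import random

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

class OneHiddenLayerNet:
    """1-K-1 network as in FIG. 4A: sigmoid hidden layer, identity output."""

    def __init__(self, k, seed=0):
        rng = random.Random(seed)
        self.w1 = [rng.uniform(-1.0, 1.0) for _ in range(k)]  # input -> hidden
        self.b1 = [rng.uniform(-1.0, 1.0) for _ in range(k)]  # hidden biases
        self.wy = [rng.uniform(-1.0, 1.0) for _ in range(k)]  # hidden -> output
        self.by = 0.0                                          # output bias

    def forward(self, x):
        self.z = [sigmoid(w * x + b) for w, b in zip(self.w1, self.b1)]
        return sum(wy * z for wy, z in zip(self.wy, self.z)) + self.by

    def backward(self, x, y, yt, lr=0.1):
        d_out = y - yt  # eq. (7) with f'(u) = 1 at the identity output node
        for k, z in enumerate(self.z):
            # eq. (6): delta at hidden node k; sigmoid derivative is z(1 - z)
            d_hid = self.wy[k] * d_out * z * (1.0 - z)
            self.wy[k] -= lr * d_out * z   # eq. (2): gradient = delta * z
            self.w1[k] -= lr * d_hid * x
            self.b1[k] -= lr * d_hid
        self.by -= lr * d_out

net = OneHiddenLayerNet(k=3)
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]  # target y = x^2
err_before = sum((net.forward(x) - yt) ** 2 for x, yt in data) / len(data)
for _ in range(2000):
    for x, yt in data:
        net.backward(x, net.forward(x), yt)
err_after = sum((net.forward(x) - yt) ** 2 for x, yt in data) / len(data)
```

After training, the network reproduces the quadratic target over the trained range R = [-1, 1] with a small mean squared error, mirroring FIG. 7A.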
<Embodiments of the Present Invention>
Next, a first embodiment of the machine learning device of the present invention will be described with reference to FIGS. 7A to 10. In the first embodiment of the present invention, a neural network comprising one input layer (L=1), a hidden layer (L=2) consisting of one layer, and one output layer (L=3), as shown in FIG. 4A, is used. This first embodiment illustrates a case where the weights of a neural network such as that shown in FIG. 4A have been learned so that the output value y is represented by a quadratic function of the input value x. In FIGS. 7A to 8B, the broken line shows the waveform of the true quadratic function, the filled circles show the training data, the open circles show the output values y after the weights of the neural network have been learned so that the difference between the output value y corresponding to the input value x and the training data becomes small, and the solid curve shows the relationship between the input value x and the output value y after learning is completed. In FIGS. 7A to 8B, R, the interval between A and B, denotes the preset range of the input value x.
FIGS. 7A and 7B are diagrams for explaining the problem to be solved by the invention of the present application, so this problem is described first with reference to FIGS. 7A and 7B. FIG. 7A shows the case where, using a neural network in which the hidden layer (L=2) has two nodes as shown in FIG. 4A, the weights of the neural network have been learned for input values x within the preset range R so that the output quantity y becomes the quadratic function y = ax2 (a is a constant) of the input value x. As shown in FIG. 7A, even when the hidden layer (L=2) of the neural network has only two nodes, a function close to a quadratic function is expressed, as shown by the solid line, as long as the input value x is within the preset range R.
That is, when learning is performed over the preset range R of the input value x, the output value y is expressed within the preset range R as a function close to a quadratic function by a suitable combination of the curved portions of the sigmoid functions σ. Outside the preset range R of the input value x, however, no learning is performed, so the straight portions at both ends of the curved portion where the sigmoid function σ changes greatly appear directly as the output value y, as shown by the solid line. Therefore, the output value y after learning, as shown by the solid line in FIG. 7A, appears as a function close to a quadratic function within the preset range R of the input value x, and appears as a nearly straight line that hardly changes with the input value x outside the preset range R. Consequently, as shown in FIG. 7A, the output value y deviates greatly from the quadratic curve shown by the broken line outside the preset range R of the input value x.
On the other hand, FIG. 7B shows the case where, when an input value x falls outside the preset range R of the input value x, as indicated for example by x0 in FIG. 7B, the output value y0 for the input value x = x0 is also included in the training data and the weights of the neural network are learned. When learning is performed in this way, also including the output value y0 outside the preset range R of the input value x, the straight portion of the sigmoid function σ shown as z1 in FIG. 4B, where z1 = 1, rises so as to include the output value y0, and the sigmoid function σ shown as z2 in FIG. 4B shifts to the right as a whole while its value becomes lower as a whole; therefore, as shown by the solid line in FIG. 7B, the output value y after learning deviates greatly from the quadratic curve within the preset range R. Thus, when the input value x falls outside its preset range R, an appropriate output value y cannot be obtained.
However, it was found that, in this case, if the number of nodes in the hidden layer (L=2) of the neural network is increased, an appropriate output value y can be obtained even when the input value x falls outside the preset range R. This is described next with reference to FIGS. 8A and 8B, which show the first embodiment of the present invention. FIG. 8B shows the learning result when the weights of the neural network were learned with the output value y0 for the input value x = x0 also included in the training data, in a state where the number of nodes in the hidden layer (L=2) of the neural network was increased from two to three as shown in FIG. 8A. When the number of nodes in the hidden layer (L=2) of the neural network is increased in this way, the output value y overlaps the quadratic curve shown by the broken line, as shown by the solid line in FIG. 8B. Therefore, as shown in FIG. 8B, it can be seen that, even when the input value x falls outside the previously assumed range of use R, an appropriate output value y can be obtained by increasing the number of nodes in the hidden layer (L=2) of the neural network. Accordingly, in the first embodiment of the present invention, the number of nodes in the hidden layer (L=2) of the neural network is increased when the input value x falls outside the preset range R.
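The growth step of the first embodiment — add one hidden node when a new input falls outside the preset range R, then relearn the weights — can be sketched as follows. This is our own illustration; the class and function names are invented, the training step itself is omitted, and initializing the new node's output weight to zero (so the network's outputs are unchanged until relearning) is an assumed design choice, not stated in the patent.

```python
import random

class SimpleNet:
    """Minimal 1-K-1 parameter container (weights only; training omitted)."""
    def __init__(self, k, seed=0):
        rng = random.Random(seed)
        self.w1 = [rng.uniform(-1.0, 1.0) for _ in range(k)]  # input -> hidden
        self.b1 = [0.0] * k                                    # hidden biases
        self.wy = [rng.uniform(-1.0, 1.0) for _ in range(k)]  # hidden -> output

def maybe_grow(net, x, x_min, x_max, rng=random.Random(1)):
    """Add one node to the hidden layer if the new input x is outside the
    preset range [x_min, x_max]; otherwise leave the network unchanged."""
    if x_min <= x <= x_max:
        return False
    net.w1.append(rng.uniform(-1.0, 1.0))
    net.b1.append(0.0)
    net.wy.append(0.0)  # zero output weight: outputs unchanged until relearning
    return True
```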
Next, a specific example of the input value x and the output value y shown in FIGS. 7A to 8B is described. In the field of internal combustion engines, when the value of a specific type of operating parameter related to the engine is taken as the input value x, the actual output quantity y sometimes takes the form of a quadratic function of the input value x. One example of such a case is one in which the input value x, i.e. the value of the specific type of operating parameter related to the engine, is the engine speed N (rpm) and the output quantity y is the exhaust loss amount. In this case, the range of use of the engine speed N is determined correspondingly when the engine is determined, so the range of the engine speed N is set in advance. On the other hand, the exhaust loss amount represents the thermal energy discharged from the combustion chamber of the engine; it is proportional to the amount of exhaust gas discharged from the combustion chamber and to the temperature difference between the exhaust gas discharged from the combustion chamber and the outside air. This exhaust loss amount is calculated on the basis of detected values, such as gas temperatures, obtained when the engine is actually operated, so the calculated exhaust loss amount represents a value obtained by actual measurement.
In this specific example, when the input value x, i.e. the engine speed N, is within the preset range R, the weights of the neural network are learned using training data obtained by actual measurement so that the difference between the output value y and the training data corresponding to the input value x becomes small. That is, when the value of the specific type of operating parameter related to the engine is within the preset range R, the weights of the neural network are learned using training data obtained by actual measurement so that the difference between the output value y and the training data corresponding to the value of the specific type of operating parameter related to the engine becomes small. On the other hand, when the input value x, i.e. the engine speed N, is outside the preset range, the number of nodes in the hidden layer of the neural network is increased, and the weights of the neural network are learned, using training data obtained by actual measurement for the newly acquired input value x, i.e. the engine speed N, so that the difference between the output value y and the training data corresponding to the input value x becomes small.
That is, when the value of the specific type of operating parameter related to the engine is outside the preset range, the number of nodes in the hidden layer of the neural network is increased, and the weights of the neural network are learned, using training data obtained by actual measurement for the newly acquired value of the specific type of operating parameter related to the engine, so that the difference between the output value y and the training data corresponding to the value of the specific type of operating parameter related to the engine becomes small. Therefore, in this case, the exhaust loss amount can be estimated relatively accurately even if the engine speed N becomes higher than the preset range R.
The first embodiment of the present invention can also be applied to a neural network having a plurality of hidden layers (L=2 and L=3), as shown in FIG. 9. In a neural network such as that shown in FIG. 9, the form of the function output from the output layer (L=4) is determined by the output values z1 and z2 of the nodes of the hidden layer (L=3) immediately preceding the output layer (L=4). That is, what kind of function the output value y can express is governed by the number of nodes in the hidden layer (L=3) immediately preceding the output layer (L=4). Therefore, in a neural network such as that shown in FIG. 9, when the number of hidden-layer nodes is increased, it is the number of nodes in the hidden layer (L=3) immediately preceding the output layer (L=4) that is increased, as shown in FIG. 9.
In the first embodiment described above, the exhaust loss amounts actually measured for various input values x within the preset range R are obtained in advance as training data; that is, the training data are obtained in advance by actual measurement for values of the specific type of operating parameter related to the engine within the preset range R. The structure of the neural network is determined from these values of the specific type of operating parameter related to the engine and the training data, and the weights of the neural network are learned in advance so that the difference between the output value y and the training data corresponding to the values of the specific type of operating parameter related to the engine becomes small. The training data obtained in advance by actual measurement for values of the specific type of operating parameter related to the engine within the preset range R are stored in the storage unit of the electronic control unit 30. In this first embodiment, a neural network with the same structure as the neural network used in the advance learning is used and, starting from the weights of the neural network at the completion of that learning, learning is continued on-board while the vehicle is in operation. FIG. 10 shows this on-board learning processing routine of the first embodiment. The learning processing routine shown in FIG. 10 is executed by interruption at fixed intervals (for example, every second).
Referring to FIG. 10, first, in step 101, the learned weights stored in the storage unit of the electronic control unit 30, the training data used in the advance learning, i.e. the training data obtained in advance by actual measurement for values of the specific type of operating parameter related to the engine within the preset range R, and the values A and B indicating the range R of the input data, i.e. the preset range of the values of the specific type of operating parameter related to the engine, are read in. These learned weights are used as the initial values of the weights. Next, in step 102, the number K of nodes in the hidden layer immediately preceding the output layer of the neural network used in the advance learning is read in. The routine then proceeds to step 103, where a new input value x, i.e. a new value of the specific type of operating parameter related to the engine, is acquired, and this new input value x is stored in the storage unit of the electronic control unit 30. Furthermore, in step 103, the actually measured value of the exhaust loss amount for the new input value x is stored in the storage unit of the electronic control unit 30 as training data. That is, in step 103, the training data obtained by actual measurement for the newly acquired value of the specific type of operating parameter related to the engine are stored in the storage unit of the electronic control unit 30.
Next, in step 104, it is determined whether the new input value x, i.e. the newly acquired value of the specific type of operating parameter related to the engine, lies between A and B, which indicate the preset range R, that is, whether the new input value x is not less than A and not more than B. When the new input value x lies between A and B indicating the preset range R, the routine proceeds to step 105, where the input value x, i.e. the newly acquired value of the specific type of operating parameter related to the engine, is input to the nodes of the input layer of the neural network, and the weights of the neural network are learned using the error backpropagation method, on the basis of the output value y output from the node of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired value of the specific type of operating parameter related to the engine, so that the difference between the output value y and the training data becomes small.
On the other hand, when it is determined in step 104 that the new input value x, i.e. the newly acquired value of the specific type of operating parameter related to the engine, does not lie between A and B indicating the preset range R, the routine proceeds to step 106, where the number K of nodes in the hidden layer immediately preceding the output layer of the neural network is updated so as to increase it. In the first embodiment, the number K of nodes in the hidden layer immediately preceding the output layer is increased by one. Next, in step 107, the neural network is updated so that the number K of nodes in the hidden layer immediately preceding the output layer is increased, and the routine then proceeds to step 105. In step 105, the training data newly obtained for the new input value x are also included in the training data, and the weights of the updated neural network are learned so that the difference between the output value y and the training data becomes small. That is, in step 105, the weights of the updated neural network are learned, using the training data obtained by actual measurement for the newly acquired value of the specific type of operating parameter related to the engine and the training data obtained in advance by actual measurement for values of the specific type of operating parameter related to the engine within the preset range R, so that the difference between the output value y, which changes according to the values of the specific type of operating parameter related to the engine inside and outside the preset range, and the training data corresponding to those values of the specific type of operating parameter related to the engine becomes small.
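The control flow of steps 103 to 107 can be sketched as a single interrupt-driven routine. This is our own schematic, with hypothetical function signatures: `train_fn(net, data)` stands for the backpropagation learning of step 105 and `grow_fn(net)` for the node-count update of steps 106 and 107.

```python
def learning_step(x_new, y_measured, net, training_data, x_range,
                  train_fn, grow_fn):
    """One pass of the on-board learning routine of FIG. 10.
    `net`, `train_fn` and `grow_fn` are placeholders for the network and
    the learning/growth operations described in the text."""
    a, b = x_range                             # preset range R = [A, B]
    training_data.append((x_new, y_measured))  # step 103: store new data
    if not (a <= x_new <= b):                  # step 104: outside range R?
        grow_fn(net)                           # steps 106-107: K <- K + 1
    train_fn(net, training_data)               # step 105: learn the weights
```

In use, the routine is invoked at fixed intervals with the latest measured operating-parameter value and the corresponding actually measured output.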
In this case, when the newly acquired value of the specific type of operating parameter related to the engine is outside the preset range, the number of nodes in the hidden layer immediately preceding the output layer of the neural network may be increased only when the number of training data obtained by actual measurement for newly acquired values of the specific type of operating parameter related to the engine reaches a certain number of two or more. Therefore, in this first embodiment, when the newly acquired value of the specific type of operating parameter related to the engine is outside the preset range, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for newly acquired values of the specific type of operating parameter related to the engine.
In addition, when there are a plurality of training data obtained by actual measurement for newly acquired values of the specific type of operating parameter related to the engine that lie outside the preset range, the number of nodes in the hidden layer immediately preceding the output layer of the neural network may be increased in accordance with the increase in the data density of the training data within the range of values of the operating parameter shown between B and C in FIG. 8B. In FIG. 8B, B and C denote, respectively, the minimum value and the maximum value of this range of values of the operating parameter; to be precise, therefore, the number of nodes in the hidden layer immediately preceding the output layer of the neural network may be increased in accordance with the increase in the data density obtained by dividing the number of training data by the difference (C - B) between the maximum value C and the minimum value B of the range of values of the operating parameter.
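The density-based rule above can be sketched as follows. The proportionality constant `data_per_node` is entirely our own assumption for illustration — the patent specifies only that the node count increases with the density, not the exact scaling.

```python
def target_node_count(base_nodes, n_out_of_range_data, range_min, range_max,
                      data_per_node=5.0):
    """Scale the node count of the hidden layer immediately preceding the
    output layer with the data density of the out-of-range interval
    [range_min, range_max] (B and C in FIG. 8B). `data_per_node` is an
    assumed tuning constant, not a value given in the patent."""
    density = n_out_of_range_data / (range_max - range_min)  # count / (C - B)
    return base_nodes + int(density / data_per_node)
```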
As shown in FIG. 1, the internal combustion engine used in the embodiments of the present invention is provided with an electronic control unit 30, which comprises: a parameter value acquisition unit that acquires the values of the operating parameters of the engine; a computation unit that performs computations using a neural network comprising an input layer, hidden layers and an output layer; and a storage unit. Here, the input port 35 shown in FIG. 1 constitutes the above-mentioned parameter value acquisition unit, the CPU 34 constitutes the above-mentioned computation unit, and the ROM 32 and RAM 33 constitute the above-mentioned storage unit. In the CPU 34, i.e. the above-mentioned computation unit, the values of the operating parameters of the engine are input to the input layer, and output values that change according to the values of the operating parameters of the engine are output from the output layer. The range R preset for the values of the specific type of operating parameter related to the engine is stored in advance in the ROM 32, i.e. the above-mentioned storage unit. Furthermore, the learned weights and the training data obtained in advance by actual measurement for values of the specific type of operating parameter related to the engine within the preset range R are stored in the RAM 33, i.e. the above-mentioned storage unit.
FIG. 11 shows a modification of the neural network used in the first embodiment of the present invention. In this modification, the output layer (L=4) has two nodes.
In this modification, as in the example shown in FIG. 9, the input value x is the engine speed N (rpm). On the other hand, in this modification, one output y1 is, as in the example shown in FIG. 9, the exhaust loss amount, while the other output y2 is some quantity that is a quadratic function of the input value x, for example the fuel consumption rate. In this modification too, the identity function is used as the activation function at each node of the output layer (L=4). When the output layer (L=4) has a plurality of nodes in this way, as described above, the sum-of-squares error E shown in equation (8) above (with output values y1, y2, … from the nodes and corresponding training data yt1, yt2, …) is used as the error function E. In this case, as is clear from equation (7) above, for one node the sum-of-squares error E is partially differentiated with respect to y1, so the value of δ(L) at that node becomes δ(L) = y1 − yt1, and for the other node the sum-of-squares error E is partially differentiated with respect to y2, so the value of δ(L) at that node becomes δ(L) = y2 − yt2. Once δ(L) has been obtained for each node of the output layer (L=4), the δ(L−1) of the preceding layer is obtained using equation (6) above.
In this way, the δ values of the preceding layers are obtained in succession, and using these values of δ, the differential of the error function E with respect to each weight w, that is, the gradient, is obtained according to equation (2) above. Once the gradient has been obtained, the value of the weight w is updated using this gradient so that the value of the error function E decreases.
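The update just described can be made concrete with a small numeric sketch. It assumes the usual conventions implied by the text: identity activation at the two output nodes, a sum-of-squares error E = (1/2) Σ (yk − ytk)², so that the output-layer deltas are simply δ = y − yt, and the gradient of E with respect to a weight from hidden output zj to output node k is δk · zj. The shapes, values, and learning rate are illustrative assumptions.

```python
import numpy as np

z = np.array([0.3, 0.8])            # outputs z1, z2 of the last hidden layer
W = np.array([[0.5, -0.2],          # W[k, j]: weight from hidden node j
              [0.1,  0.4]])         # to output node k
y = W @ z                           # identity activation at the output layer: y1, y2
y_t = np.array([0.2, 0.5])          # training data yt1, yt2

delta = y - y_t                     # delta(L) = y - yt at each output node
grad_W = np.outer(delta, z)         # dE/dW[k, j] = delta_k * z_j

eta = 0.1                           # learning rate (illustrative)
W -= eta * grad_W                   # update the weights so that E decreases
```

Repeating this step moves the outputs toward the training data, which is exactly the direction of decreasing E.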
Even when the output layer (L=4) of the neural network has a plurality of nodes as shown in FIG. 11, the form of the function output from each node of the output layer (L=4) is determined by the output values z1, z2 of the nodes of the hidden layer (L=3) immediately preceding the output layer (L=4). That is, what kinds of functions the output values y1, y2 can express is governed by the number of nodes of the hidden layer (L=3) immediately preceding the output layer (L=4). Therefore, in the neural network shown in FIG. 11, when increasing the number of hidden-layer nodes, the number of nodes of the hidden layer (L=3) immediately preceding the output layer (L=4) is increased, as shown in FIG. 11.
FIGS. 12 to 14B show a second embodiment of the machine learning device of the present invention. In this second embodiment, the operating parameters related to the internal combustion engine consist of a plurality of types of operating parameters, and the weights of the neural network are learned based on the values of the plurality of types of operating parameters related to the engine. As a specific example, a case is shown in which the operating parameters of the engine consist of the engine speed, the accelerator opening (the amount of depression of the accelerator pedal), and the outside air temperature, and a neural network model is created that estimates the output torque of the engine based on the values of these operating parameters. In this specific example, as shown in FIG. 13, the input layer (L=1) of the neural network consists of three nodes, to which are input an input value x1 representing the engine speed, an input value x2 representing the accelerator opening, and an input value x3 representing the outside air temperature. The number of hidden layers (L=2, L=3) may be one or any number, and the number of nodes of the hidden layers (L=2, L=3) may also be set arbitrarily. Note that, in the example shown in FIG. 13, the number of nodes of the output layer (L=4) is one.
On the other hand, in FIG. 14A, the interval between A1 and B1, denoted R1, represents the preset range of the engine speed; the interval between A2 and B2, denoted R2, represents the preset range of the accelerator opening; and the interval between A3 and B3, denoted R3, represents the preset range of the outside air temperature. FIG. 14B is similar to FIG. 14A: the interval between A1 and B1 represents the preset range of the engine speed, the interval between A2 and B2 represents the preset range of the accelerator opening, and the interval between A3 and B3 represents the preset range of the outside air temperature. Note that, in this second embodiment, the accelerator opening is detected by the load sensor 41, and the outside air temperature is detected by the atmospheric temperature sensor 24. In addition, in this second embodiment, the output torque of the engine is actually measured by, for example, a torque sensor attached to the engine crankshaft, and the torque obtained by this actual measurement is used as training data.
Also in this second embodiment, the engine output torque actually measured for various input values xn (n=1, 2, 3) within the preset ranges Rn is obtained in advance as training data. That is, training data are obtained in advance by actual measurement for the values of the plurality of types of operating parameters related to the engine within the preset ranges Rn; the structure of the neural network is determined from these operating parameter values and training data, and the weights of the neural network are learned in advance so that the difference between the output value y and the training data corresponding to the values of the plurality of types of operating parameters becomes small. The training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters within the preset ranges Rn are stored in the storage unit of the electronic control unit 30. Also in this second embodiment, a neural network with the same structure as that used in the advance learning is used, and, using the weights of the neural network at the completion of learning, learning is continued on-board during vehicle operation. FIG. 12 shows the on-board learning processing routine of this second embodiment; this routine is executed by interruption at fixed time intervals (for example, every second).
Referring to FIG. 12, first, in step 201, the following are read in: the learned weights stored in the storage unit of the electronic control unit 30; the training data used in the advance learning, i.e., the training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters within the preset ranges Rn; and the values An, Bn (n=1, 2, 3) indicating the ranges of the input data, i.e., the preset ranges of the values of the plurality of types of operating parameters (FIG. 14A). The learned weights are used as initial values of the weights. Next, in step 202, the number K of nodes of the hidden layer immediately preceding the output layer of the neural network used in the advance learning is read in. Next, the routine proceeds to step 203, where new input values x, i.e., new values of the plurality of types of operating parameters related to the engine, are acquired and stored in the storage unit of the electronic control unit 30. Furthermore, in step 203, the actually measured value of the engine output torque for the new input values x is stored as training data in the storage unit of the electronic control unit 30. That is, in step 203, the training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters related to the engine are stored in the storage unit of the electronic control unit 30.
Next, in step 204, it is determined whether the new input values xn, i.e., the newly acquired values of the plurality of types of operating parameters related to the engine, are within the preset ranges Rn (between An and Bn), that is, whether each new input value xn is greater than or equal to An and less than or equal to Bn. When the new input values xn are within the preset ranges Rn, the routine proceeds to step 205, where the input values xn, i.e., the newly acquired values of the plurality of types of operating parameters, are input to the corresponding nodes of the input layer of the neural network, and, based on the output value y output from the node of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters, the weights of the neural network are learned using the error backpropagation method so that the difference between the output value y and the training data becomes small.
On the other hand, when it is determined in step 204 that the value of at least one of the new input values xn, i.e., the newly acquired values of the plurality of types of operating parameters related to the engine, is not within the preset range Rn (between An and Bn), for example when, as shown in FIG. 14B, the input value x1 representing the engine speed is within the range (B1 to C1) (B1 < C1), or when the input value x3 representing the outside air temperature is within the range (C3 to A3) (C3 < A3), the routine proceeds to step 206. In step 206, first, the density D of training data with respect to the new input value xn within the range (Bn to Cn) or range (Cn to An) to which the new input value xn belongs is calculated (D = number of training data/(Cn − Bn) or number of training data/(An − Cn)).
In FIG. 14B, B1 and C1 represent the minimum value and the maximum value, respectively, of the range in question of the engine speed, i.e., the minimum and maximum values of the range of the operating parameter values, and the training data density D represents the value obtained by dividing the number of training data by the difference (C1 − B1) between the maximum value C1 and the minimum value B1 of that range. Similarly, in FIG. 14B, C3 and A3 represent the minimum value and the maximum value, respectively, of the range in question of the outside air temperature, and the training data density D represents the value obtained by dividing the number of training data by the difference (A3 − C3) between the maximum value A3 and the minimum value C3 of that range. In step 206, after the training data density D is calculated, it is determined whether the training data density D has become higher than a predetermined data density D0. When the training data density D is lower than the predetermined data density D0, the processing cycle is completed.
On the other hand, when it is determined in step 206 that the training data density D has become higher than the predetermined data density D0, the routine proceeds to step 207. In this case, when D (= number of training data/(An − Cn)) > D0, the number of additional nodes α is calculated by the following equation.
Number of additional nodes α = round{(K/(Bn − An)) · (An − Cn)}
On the other hand, when D (= number of training data/(Cn − Bn)) > D0, the number of additional nodes α is calculated by the following equation.
Number of additional nodes α = round{(K/(Bn − An)) · (Cn − Bn)}
Note that, in the above equations, K represents the number of nodes, and round means rounding off to the nearest integer.
After the number of additional nodes α is calculated in step 207, the routine proceeds to step 208, where the number K of nodes of the hidden layer immediately preceding the output layer of the neural network is updated so as to be increased by the number of additional nodes α (K ← K + α). In this way, in this second embodiment, when the data density obtained by dividing the number of training data by the difference between the maximum and minimum values of the range of operating parameter values increases, the number of nodes of the hidden layer immediately preceding the output layer of the neural network is increased. That is, in this second embodiment, the number of nodes of the hidden layer immediately preceding the output layer of the neural network is increased in accordance with the increase in that data density.
On the other hand, as described above, the routine proceeds from step 206 to step 207 when the training data density D has reached the predetermined data density D0; therefore, the values of (An − Cn) and (Cn − Bn) used in the calculation of the number of additional nodes α in step 207 are proportional to the number of training data. Thus, as is clear from the above equations, the number of additional nodes α is proportional to the number of training data within the range (Bn to Cn) or range (Cn to An) to which the new input value xn belongs. That is, in this second embodiment, the number of nodes of the hidden layer immediately preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters related to the engine.
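Steps 206 to 208 can be sketched as follows for one operating parameter whose new values fell into the interval (B, C) adjacent to the preset range (A, B). The numeric values are illustrative assumptions; the formula α = round{(K/(B − A)) · (C − B)} and the update K ← K + α are as given above (Python's built-in round is used here as the rounding function for illustration).

```python
def additional_nodes(k: int, a: float, b: float, c: float) -> int:
    """alpha = round{(K/(B - A)) * (C - B)} for the out-of-range interval (B, C)."""
    return round((k / (b - a)) * (c - b))

K = 7                       # current node count of the last hidden layer
A, B = 5.5, 11.5            # preset range of the operating parameter value
C = 13.5                    # new values observed up to C, outside (A, B)

alpha = additional_nodes(K, A, B, C)
K += alpha                  # step 208: K <- K + alpha
print(alpha, K)             # → 2 9
```

Because step 207 is only reached when the density in (B, C) has hit D0, the interval width (C − B) is proportional to the number of out-of-range training data, so α grows with that number, as the text explains.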
After the number K of nodes of the hidden layer immediately preceding the output layer is increased by the number of additional nodes α in step 208 (K ← K + α), the routine proceeds to step 209, where the neural network is updated so that the number K of nodes of the hidden layer immediately preceding the output layer is increased. Next, the routine proceeds to step 205. In step 205, the training data newly obtained for the new input values x are also included in the training data, and the weights of the updated neural network are learned so that the difference between the output value y and the training data becomes small. That is, in step 205, using both the training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters related to the engine and the training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters within the preset ranges Rn, the weights of the updated neural network are learned so that the difference between the output value y, which varies according to the values of the plurality of types of operating parameters both inside and outside the preset ranges, and the training data corresponding to those operating parameter values becomes small.
In the second embodiment of the present invention, the values An, Bn indicating the ranges Rn preset for the values of the plurality of types of operating parameters related to the engine are stored in advance in the ROM 32, i.e., the above storage unit. In addition, the learned weights and the training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters within the preset ranges Rn are stored in the RAM 33, i.e., the above storage unit.
FIG. 15 shows a modification of the neural network used in the second embodiment of the present invention. In this modification, the output layer (L=4) has two nodes.
In this modification, as in the example shown in FIG. 13, the input value x1 is the engine speed, the input value x2 is the accelerator opening, and the input value x3 is the outside air temperature. On the other hand, in this modification, one output y1 is, as in the example shown in FIG. 13, the output torque of the engine, and the other output y2 is the thermal efficiency of the engine. This thermal efficiency is calculated based on detected values of the engine speed, engine load, intake air pressure, intake air temperature, exhaust gas pressure, exhaust gas temperature, engine cooling water temperature, and the like when the engine is actually operated; the thermal efficiency therefore represents a value obtained by actual measurement. In this modification too, the identity function is used as the activation function at each node of the output layer (L=4). Furthermore, in this modification, the sum-of-squares error E shown in equation (8) above, with the actually measured values of the engine output torque and thermal efficiency as training data, is used as the error function E, and the values of the weights w are updated using the error backpropagation method so that the value of the error function E decreases.
Even when the output layer (L=4) of the neural network has a plurality of nodes as shown in FIG. 15, the form of the function output from each node of the output layer (L=4) is determined by the output values z1, z2, z3, z4 of the nodes of the hidden layer (L=3) immediately preceding the output layer (L=4). That is, what kinds of functions the output values y1, y2 can express is governed by the number of nodes of the hidden layer (L=3) immediately preceding the output layer (L=4). Therefore, in the neural network shown in FIG. 15, when increasing the number of hidden-layer nodes, the number of nodes of the hidden layer (L=3) immediately preceding the output layer (L=4) is increased.
FIGS. 16 and 17 show a third embodiment of the machine learning device of the present invention. In this third embodiment too, the operating parameters related to the internal combustion engine include a plurality of types of operating parameters, and the weights of the neural network are learned based on the values of the plurality of types of operating parameters related to the engine. Also in this third embodiment, a range of values is preset for each of the plurality of types of operating parameters related to the engine. FIG. 17 shows, as an example, a case in which the operating parameters related to the engine consist of two types of operating parameters; in FIG. 17, the preset range of the values of one type of operating parameter is denoted Rx, and the preset range of the values of the other type is denoted Ry. In this third embodiment, as shown in FIG. 17, the preset ranges Rx, Ry of the values of the operating parameters of each type are each divided into a plurality of sections, and a plurality of divided regions [Xn, Ym] (n=1, 2, …, n; m=1, 2, …, m), each defined by a combination of the divided sections of the values of the operating parameters of each type, are preset.
Note that, in FIG. 17, X1, X2, …, Xn and Y1, Y2, …, Ym represent the divided sections of the values of the operating parameters of each type. In this third embodiment, as a specific example, a case is shown in which the operating parameters of the engine consist of the engine speed and the outside air temperature, and a neural network model is created that estimates the HC emission amount from the engine based on the values of these operating parameters. In this case, X1, X2, …, Xn represent, for example, engine speed sections divided every 1000 rpm (1000 rpm ≤ X1 < 2000 rpm, 2000 rpm ≤ X2 < 3000 rpm, …), and Y1, Y2, …, Ym represent, for example, outside air temperature sections divided every 10°C (−30°C ≤ Y1 < −20°C, −20°C ≤ Y2 < −10°C, …).
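The mapping from an input pair to its divided region [Xn, Ym] under the example partition above can be sketched as follows. The bin edges match the example (engine speed divided every 1000 rpm from 1000 rpm, outside air temperature every 10°C from −30°C), but the helper function and its names are illustrative assumptions, not part of the patent.

```python
def region_index(value: float, start: float, step: float) -> int:
    """1-based index of the divided section containing `value`."""
    return int((value - start) // step) + 1

def divided_region(speed_rpm: float, temp_c: float) -> tuple:
    """Return (n, m) such that the input pair lies in region [Xn, Ym]."""
    n = region_index(speed_rpm, start=1000.0, step=1000.0)  # X1: 1000-2000 rpm, ...
    m = region_index(temp_c, start=-30.0, step=10.0)        # Y1: -30 to -20 C, ...
    return (n, m)

print(divided_region(2500.0, -15.0))  # → (2, 2): region [X2, Y2]
```

Each such region index then selects the independent neural network (with its own hidden-layer node count Knm) described in the following paragraphs.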
In this third embodiment, an independent neural network is created for each divided region [Xn, Ym]. In these neural networks, the input layer (L=1) consists of two nodes, to which are input an input value x1 representing the engine speed and an input value x2 representing the outside air temperature. The number of hidden layers (L=2, L=3) may be one or any number, and the number of nodes of the hidden layers (L=2, L=3) may also be set arbitrarily. Note that, in every one of these neural networks, the number of nodes of the output layer (L=4) is one. Note also that, in this third embodiment too, as a modification, the number of nodes of the output layer (L=4) may be set to two. In that case, for example, the output from one node of the output layer (L=4) is the HC emission amount from the engine, and the output from the other node of the output layer (L=4) is the NOx emission amount from the engine.
In this third embodiment, the number of nodes of the hidden layer (L=3) differs for each neural network; hereinafter, the number of nodes of the hidden layer immediately preceding the output layer of the neural network of divided region [Xn, Ym] is denoted Knm. This number Knm of hidden-layer nodes is set in advance according to the complexity of the change of the training data with respect to the change of the input values within each divided region [Xn, Ym]. Note that, in this third embodiment, an HC sensor is arranged in the exhaust passage in place of the NOx sensor 23 shown in FIG. 1. In this third embodiment, the HC emission amount from the engine is actually measured by this HC sensor, and the HC emission amount obtained by this actual measurement is used as training data. Note that, in the modification of the third embodiment described above, an HC sensor is arranged in the exhaust passage in addition to the NOx sensor 23 shown in FIG. 1.
In this third embodiment, the HC emission amounts actually measured for various input values x1, x2 within each of the divided regions [Xn, Ym] (n=1, 2, …, n; m=1, 2, …, m) formed within the preset ranges Rx, Ry of the values of the plurality of types of operating parameters related to the engine are obtained in advance as training data. That is, training data are obtained in advance by actual measurement for the values of the plurality of types of operating parameters within the preset ranges Rx, Ry; from these operating parameter values and training data, the structure of the neural network for each divided region [Xn, Ym], including the number Knm of hidden-layer nodes, is determined, and the weights of the neural network of each divided region [Xn, Ym] are learned in advance so that the difference between the output value y and the training data corresponding to the values of the plurality of types of operating parameters becomes small. Therefore, in this third embodiment, the divided regions [Xn, Ym] (n=1, 2, …, n; m=1, 2, …, m) for which this advance learning has been performed are hereinafter also referred to as learned divided regions [Xn, Ym]. Note that the training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters within the preset ranges Rx, Ry are stored in the storage unit of the electronic control unit 30. In this third embodiment too, for each divided region [Xn, Ym], a neural network with the same structure as that used in the advance learning is used, and, using the weights of the neural network at the completion of learning, learning is continued on-board during vehicle operation. FIG. 16 shows the on-board learning processing routine of this third embodiment; this routine is executed by interruption at fixed time intervals (for example, every second).
Referring to FIG. 16, first, in step 301, the following are read in: the learned weights stored in the storage unit of the electronic control unit 30; the training data used in the advance learning, i.e., the training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters within the preset ranges Rx, Ry; and the learned divided regions [Xn, Ym] (n=1, 2, …, n; m=1, 2, …, m). The learned weights are used as initial values of the weights. Next, in step 302, the number Knm of nodes of the hidden layer immediately preceding the output layer used in the advance learning for each learned divided region [Xn, Ym] is read in. Next, the routine proceeds to step 303, where new input values x1, x2, i.e., new values of the plurality of types of operating parameters related to the engine, are acquired and stored in the storage unit of the electronic control unit 30. Furthermore, in step 303, the actually measured value of the HC emission amount for the new input values x1, x2 is stored as training data in the storage unit of the electronic control unit 30. That is, in step 303, the training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters related to the engine are stored in the storage unit of the electronic control unit 30.
Next, in step 304, it is determined whether the new input values x1, x2 are within a learned divided region [Xn, Ym], i.e., whether the newly acquired values of the plurality of types of operating parameters related to the engine are within the preset ranges Rx, Ry. When the new input values x1, x2 are within a learned divided region [Xn, Ym], i.e., when the newly acquired values of the plurality of types of operating parameters are within the preset ranges Rx, Ry, the routine proceeds to step 305, where the input values x1, x2, i.e., the newly acquired values of the plurality of types of operating parameters, are input to the nodes of the input layer of the neural network of the learned divided region [Xn, Ym] to which those values belong, and, based on the output value y output from the node of the output layer of that neural network and the training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters, the weights of the neural network of that learned divided region [Xn, Ym] are further learned using the error backpropagation method so that the difference between the output value y and the training data becomes small.
On the other hand, when it is determined in step 304 that the new input values x1, x2 are not within any learned divided region [Xn, Ym], then, for example in FIG. 17, an unlearned region [Xa, Yb], to which the input values x1, x2 belong and which is defined by a combination of preset sections of the input values x1, x2, is set outside the learned divided regions [Xn, Ym]. In other words, when the value of at least one of the newly acquired types of operating parameters related to the engine is outside the preset ranges Rx, Ry, an unlearned region [Xa, Yb], to which the values of the operating parameters of each type belong and which is defined by a combination of the preset sections of the values of the operating parameters of each type, is set outside the preset ranges Rx, Ry.
The example shown in FIG. 17 shows a case in which the newly acquired value of one type of operating parameter related to the engine is outside the preset range Rx while the newly acquired value of the other type of operating parameter belongs to the section Y2 within the preset range Ry. In this case, the unlearned region [Xa, Yb] is set outside the preset range Rx with respect to the value of the one type of operating parameter and within the section Y2 to which the value of the other type of operating parameter belongs, adjacent to the learned divided region [Xn, Ym] within that section Y2.
After the unlearned region [Xa, Yb] has been set, the routine proceeds to step 306. In step 306, first, the training data density D within the unlearned region [Xa, Yb] to which the new input values x1, x2 belong is calculated. This training data density D (= number of training data/[Xa, Yb]) represents the value obtained by dividing the number of training data by the area of the unlearned region [Xa, Yb], i.e., the product of the preset section widths of the values of the operating parameters of each type. Next, it is determined whether the training data density D has become higher than a predetermined data density D0 and whether the variance S² of the training data within the region [Xa, Yb] to which the new input values x1, x2 belong is larger than a predetermined variance S₀². When the training data density D is lower than the predetermined data density D0, or when the variance S² of the training data is smaller than the predetermined variance S₀², the processing cycle is completed.
On the other hand, when it is determined in step 306 that the training data density D is higher than the predetermined data density D0 and the variance S² of the training data is larger than the predetermined variance S₀², the routine proceeds to step 307. Note that, in step 306, the determination of whether the variance S² is larger than the predetermined variance S₀² may be omitted, and only whether the training data density D has become higher than the predetermined data density D0 may be determined. In that case, the processing cycle is completed when the training data density D is lower than the predetermined data density D0, and the routine proceeds to step 307 when it is determined that the training data density D is higher than the predetermined data density D0. In step 307, based on the node-number calculation formula below, the number Kab of nodes for the unlearned region [Xa, Yb] is calculated from the average value of the node numbers Knm in the learned divided regions [Xn, Ym] around the unlearned region [Xa, Yb].
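The step-306 test can be sketched as follows. The density D is the number of training data divided by the area of the unlearned region (the product of the two section widths), and a new network is created only when D exceeds D0 and, in the full version of the test, the variance S² of the training data exceeds S₀². The function name, the sample values, and the thresholds are illustrative assumptions.

```python
import statistics

def needs_new_network(samples, width_x, width_y, d0, s2_0) -> bool:
    """True when both the density and the variance conditions of step 306 hold."""
    density = len(samples) / (width_x * width_y)       # D = count / area of [Xa, Yb]
    variance = statistics.pvariance(samples)           # S^2 of the training data
    return density > d0 and variance > s2_0

hc_samples = [0.10, 0.35, 0.60, 0.20, 0.55]            # measured HC amounts in [Xa, Yb]
print(needs_new_network(hc_samples, width_x=1000.0, width_y=10.0,
                        d0=4e-4, s2_0=0.01))           # → True
```

Dropping the variance term reduces this to the density-only test mentioned as an alternative in the text.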
Number of nodes Kab = (1/N) Σ Σ Kij (i = (a−1) to (a+1), j = (b−1) to (b+1))
Note that, in the above formula, N represents the number of learned divided regions [Xn, Ym] adjacent to and surrounding the unlearned region [Xa, Yb]. In this case, when among the divided regions [Xn, Ym] adjacent to the unlearned region [Xa, Yb] there is a divided region [Xn, Ym] that has not yet been used, i.e., a divided region [Xn, Ym] for which no node number Knm exists, that divided region [Xn, Ym] is excluded from the calculation of the number N. For example, in the example shown in FIG. 17, the average of the node number Kn1 of the adjacent learned divided region [Xn, Y1], the node number Kn2 of the learned divided region [Xn, Y2], and the node number Kn3 of the learned divided region [Xn, Y3] surrounding the unlearned region [Xa, Yb] is taken as the node number Kab for the unlearned region [Xa, Yb].
When the relationship between the change of the training data and the change of the input values within a divided region [Xn, Ym] is simple, sufficient learning is possible even with a small number Knm of hidden-layer nodes; but when that relationship is complex, sufficient learning cannot be performed unless the number Knm of hidden-layer nodes is made large. Therefore, as described above, the number Knm of nodes of the hidden layer of the neural network in each learned divided region [Xn, Ym] is set according to the complexity of the change of the training data with respect to the change of the input values within that learned divided region [Xn, Ym]. When two divided regions [Xn, Ym] are close to each other, the relationship between the change of the training data and the change of the input values is similar between them; therefore, when two divided regions [Xn, Ym] are close, the same number can be used as the number Knm of hidden-layer nodes. For this reason, in this third embodiment, the average of the node numbers Knm in the learned divided regions [Xn, Ym] adjacent to the unlearned region [Xa, Yb] is taken as the node number Kab for the unlearned region [Xa, Yb].
Here, as a modification of the third embodiment, a method of obtaining the node number Kab for the unlearned region [Xa, Yb] that takes into account the number of training data in the unlearned region [Xa, Yb] will be briefly described. That is, when the number of training data in the unlearned region [Xa, Yb] is larger than the number of training data in the adjacent learned divided regions [Xn, Ym] surrounding it, the node number Kab for the unlearned region [Xa, Yb] is preferably larger than the node numbers Knm in those adjacent learned divided regions [Xn, Ym]. Therefore, in this modification, the average value MD of the numbers of training data in the adjacent learned regions [Xn, Ym] surrounding the unlearned region [Xa, Yb] is obtained, the node-number increase rate RK (= MN/MD) is obtained by dividing the number MN of input data in the unlearned region [Xa, Yb] by the average value MD, and the node number Kab for the unlearned region [Xa, Yb] obtained by the above node-number calculation formula is multiplied by this increase rate RK to give the final node number Kab for the unlearned region [Xa, Yb].
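Step 307 and the modification just described can be sketched together. Kab is the mean node count of the learned regions adjacent to [Xa, Yb], with never-used neighbours skipped, optionally scaled by RK = MN/MD. The function signature and the way unused neighbours are marked (as None) are illustrative assumptions about the data layout, not taken from the patent.

```python
def node_count_for_unlearned(neighbour_nodes, neighbour_counts=None, mn=None):
    """Kab from the Knm of adjacent learned regions; None marks an unused region.

    If neighbour_counts (training-data counts of the adjacent learned regions)
    and mn (training-data count in [Xa, Yb]) are given, scale by RK = MN/MD.
    """
    known = [k for k in neighbour_nodes if k is not None]
    kab = sum(known) / len(known)          # average over the N learned neighbours
    if neighbour_counts is not None and mn is not None:
        md = sum(neighbour_counts) / len(neighbour_counts)
        kab *= mn / md                     # node-number increase rate RK = MN / MD
    return round(kab)

# Three learned neighbours with 4, 6, and 8 nodes; one neighbour never used:
print(node_count_for_unlearned([4, 6, 8, None]))                  # → 6
# Modification: [Xa, Yb] holds 15 samples vs. an average of 10 in the neighbours:
print(node_count_for_unlearned([4, 6, 8], [10, 10, 10], mn=15))   # → 9
```

The first call reproduces the plain averaging of step 307; the second shows how a data-rich unlearned region receives proportionally more hidden-layer nodes.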
After the node number Kab for the unlearned region [Xa, Yb] is calculated in step 307, the routine proceeds to step 308, where a new neural network for the unlearned region [Xa, Yb] is created. In this new neural network, the number of nodes is two for the input layer, Kab for the hidden layer immediately preceding the output layer, and one or more for the output layer. Next, the routine proceeds to step 305. In step 305, for the unlearned region [Xa, Yb], the weights of the neural network created for the unlearned region [Xa, Yb] are learned so that the difference between the output value y and the training data becomes small.
Next, referring to FIGS. 18A to 19B, a specific example in which the machine learning device of the present invention is applied to a special low-load internal combustion engine will be described. In this specific example, as shown in FIG. 12, a neural network whose hidden layer (L=3) has four nodes is used to create a model that outputs an output value y representing the NOx emission amount from the opening of the throttle valve 12, the engine speed, and the ignition timing. Note that, in the engine used in this specific example, the use range of the opening of the throttle valve 12 is set between 5.5° and 11.5° (with the opening of the throttle valve 12 at the maximum closed position taken as 0°), the use range of the engine speed is set between 1600 rpm and 3000 rpm, and the use range of the ignition timing is set between 0° (compression top dead center) and ATDC 40°.
FIG. 18A shows the distribution of the training data with respect to the ignition timing and the engine speed, and FIG. 18B shows the distribution of the training data with respect to the throttle opening and the ignition timing. Note that, in FIGS. 18A and 18B, the black circles indicate the locations where training data were acquired in advance, and the triangle marks indicate locations where training data were not acquired in advance. From FIGS. 18A and 18B, it can be seen for which throttle openings, which engine speeds, and which ignition timings training data were acquired in advance. For example, it can be seen in FIG. 18A that training data were acquired in advance when the engine speed N is 2000 rpm and the ignition timing is ATDC 20°, and, as shown in FIG. 18B, that training data were acquired in advance for various throttle openings when the ignition timing is ATDC 20°.
On the other hand, in this specific example, the throttle opening, the engine speed, and the ignition timing are input to the nodes of the input layer (L=1) of the neural network, and the weights of the neural network are learned so that the difference between the output value y and the training data representing the NOx emission amount detected by the NOx sensor 23 becomes small. The relationship between the output value y after learning and the training data is shown in FIGS. 18C, 19A, and 19B; note that, in FIGS. 18C, 19A, and 19B, the output value y after learning and the values of the training data are shown normalized so that their maximum value is 1.
As described above, in the engine used in this specific example, the use range of the opening of the throttle valve 12 is set between 5.5° and 11.5°, the use range of the engine speed N is set between 1600 rpm and 3000 rpm, and the use range of the ignition timing is set between 0° (compression top dead center) and ATDC 40°. In FIG. 18C, the NOx emission amounts when the throttle opening, the engine speed N, and the ignition timing were used within these use ranges were acquired in advance as training data, and the relationship between the output value y after learning and the training data, when the weights of the neural network were learned so that the difference between the output value y and the training data acquired in advance becomes small, is shown by circle marks.
As shown in FIG. 18C, the circle marks representing the relationship between the learned output value y and the training data are concentrated on a straight line, so the learned output value y agrees with the training data. However, taking the opening degree of the throttle valve 12 as an example, the opening degree of the throttle valve 12 may deviate from the standard opening degree due to individual differences between engines or changes over time; even if the use range of the opening degree of the throttle valve 12 is set between 5.5° and 11.5°, the actual opening degree of the throttle valve 12 may exceed the preset use range. The triangle marks shown in FIGS. 18A and 18B indicate the locations of training data newly acquired when the opening degree of the throttle valve 12 exceeded the preset use range and reached 13.5°.
The triangle marks in FIG. 18C show the case where, when the opening degree of the throttle valve 12 thus exceeded the preset use range and reached 13.5°, the weights of the neural network were learned using only the training data acquired in advance, without using the newly acquired training data. In this case, the estimated value of the NOx emission amount when the opening degree of the throttle valve 12 exceeds the preset use range and reaches 13.5° deviates greatly from the measured value. On the other hand, the circle marks in FIG. 19A show the case where the weights of the neural network were learned using both the new training data acquired when the opening degree of the throttle valve 12 thus exceeded the preset use range and reached 13.5° and the training data acquired in advance. In this case, the estimated value of the NOx emission amount deviates from the measured value as a whole.
In contrast, the circle marks in FIG. 19B show the case where, as in FIG. 19A, both the new training data acquired when the opening degree of the throttle valve 12 exceeded the preset use range and reached 13.5° and the training data acquired in advance were used, but, unlike FIG. 19A, the weights of the neural network were learned after the number of nodes in the hidden layer (L=3) of the neural network was increased from 4 to 7. In this case, the estimated value of the NOx emission amount agrees with the measured value with high accuracy. In this way, when a newly acquired value of an operating parameter of the internal combustion engine is outside the preset range, the estimation accuracy can be improved by increasing the number of nodes in the hidden layer immediately preceding the output layer of the neural network.
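The node-increase step above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the network shape (3 inputs, one hidden layer) follows the example, but the function names, the tanh activation, and the small-random initialization of the new rows and columns are assumptions. The key point is that the already-learned weights are kept as initial values and only the added nodes start fresh.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden, n_out):
    """Create a one-hidden-layer network as a dict of weight matrices."""
    return {
        "W1": rng.normal(0, 0.5, (n_hidden, n_in)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.5, (n_out, n_hidden)),
        "b2": np.zeros(n_out),
    }

def forward(net, x):
    h = np.tanh(net["W1"] @ x + net["b1"])   # hidden layer activation
    return net["W2"] @ h + net["b2"]         # linear output (e.g. NOx amount)

def grow_hidden(net, extra):
    """Add `extra` nodes to the hidden layer, keeping the learned weights
    as initial values and initializing only the new rows/columns."""
    n_hidden, n_in = net["W1"].shape
    n_out = net["W2"].shape[0]
    return {
        "W1": np.vstack([net["W1"], rng.normal(0, 0.1, (extra, n_in))]),
        "b1": np.concatenate([net["b1"], np.zeros(extra)]),
        "W2": np.hstack([net["W2"], rng.normal(0, 0.1, (n_out, extra))]),
        "b2": net["b2"].copy(),
    }

net = init_net(n_in=3, n_hidden=4, n_out=1)  # throttle opening, speed, ignition timing
net = grow_hidden(net, extra=3)              # 4 -> 7 hidden nodes, as in FIG. 19B
```

After growing, the weights would be relearned on the combined training data (old plus newly acquired), as the text describes.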
FIGS. 20 to 25 show a fourth embodiment in which the machine learning apparatus of the present invention is applied to automatic adjustment of an air conditioner. In this embodiment, the optimum air volume, air direction, and operation time of the air conditioner are automatically set according to the air temperature, the humidity, the location, and the size of the room in which the air conditioner is installed. In this case, the conditions and locations in which the air conditioner will be used, that is, the use ranges of the values of operating parameters such as the air temperature, the humidity, the location, and the size of the room in which the air conditioner is installed, can be assumed in advance according to the type of the air conditioner. Therefore, normally, for the preset ranges of the values of the operating parameters of the air conditioner, the weights of the neural network are learned in advance so that the differences between the output values of the neural network and the optimum air volume, air direction, and operation time of the air conditioner become small.
However, even in this case, the values of the operating parameters of the air conditioner may fall outside the preset ranges. Since no learning based on actual values has been performed outside the preset ranges, the output values calculated using the neural network would deviate greatly from the actual values. Therefore, in this embodiment as well, when a newly acquired value of an operating parameter related to the air conditioner is outside the preset range, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased, or the number of neural networks is increased, and the weights of the neural network are learned using both the training data obtained for the newly acquired values of the operating parameters related to the air conditioner and the training data obtained for the values of the operating parameters related to the air conditioner within the preset ranges.
Next, the fourth embodiment will be described specifically. Referring to FIG. 20, 50 denotes an air conditioner main body, 51 denotes a blower motor disposed in the air conditioner main body 50, 52 denotes an air-direction adjusting motor disposed in the air conditioner main body 50, 53 denotes a thermometer for detecting the air temperature, 54 denotes a hygrometer for detecting the humidity of the atmosphere, 55 denotes a GPS for detecting the installation location of the air conditioner, and 56 denotes an electronic control unit having the same configuration as the electronic control unit 30 shown in FIG. 1. As shown in FIG. 20, the air temperature detected by the thermometer 53, the atmospheric humidity detected by the hygrometer 54, and the location information detected by the GPS 55 are input to the electronic control unit 56, and the electronic control unit 56 outputs a drive signal for the blower motor 51 to obtain the optimum air volume of the air conditioner and a drive signal for the air-direction adjusting motor 52 to obtain the optimum air direction of the air conditioner. Note that the size of the room in which the air conditioner is installed is input to the electronic control unit 56 manually, for example.
FIG. 21 shows the neural network used in this fourth embodiment. In this fourth embodiment, as shown in FIG. 21, the input layer (L=1) of the neural network consists of four nodes, and an input value x1 representing the air temperature, an input value x2 representing the humidity, an input value x3 representing the location, and an input value x4 representing the size of the room in which the air conditioner is installed are input to the respective nodes. The number of hidden layers (L=2, L=3) may be one or any number, and the number of nodes in the hidden layers (L=2, L=3) may also be any number. In this fourth embodiment, the output layer (L=4) consists of three nodes, which output an output value y1 representing the air volume of the air conditioner, an output value y2 representing the air direction of the air conditioner, and an output value y3 representing the operation time of the air conditioner.
On the other hand, in FIG. 22A, R1, the interval between A1 and B1, represents the preset range of the air temperature (for example, -5°C to 40°C); R2, the interval between A2 and B2, represents the preset range of the humidity (for example, 30% to 90%); R3, the interval between A3 and B3, represents the preset range of the location (for example, between 20 and 46 degrees north latitude); and R4, the interval between A4 and B4, represents the preset range of the size of the room in which the air conditioner is installed. Note that FIG. 22B is the same as FIG. 22A: the interval between A1 and B1 represents the preset range of the air temperature, the interval between A2 and B2 the preset range of the humidity, the interval between A3 and B3 the preset range of the location, and the interval between A4 and B4 the preset range of the size of the room in which the air conditioner is installed.
In this fourth embodiment as well, the optimum air volume, air direction, and operation time of the air conditioner actually measured for various input values xn (n=1, 2, 3, 4) within the preset ranges Rn are obtained in advance as training data. That is, training data is obtained in advance by actual measurement for the values of the plural types of operating parameters related to the air conditioner within the preset ranges Rn, the structure of the neural network is determined from these values of the operating parameters and the training data, and the weights of the neural network are learned in advance so that the differences between the output values y1, y2, y3 and the training data corresponding to the values of the plural types of operating parameters related to the air conditioner become small. The training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the air conditioner within the preset ranges Rn is stored in the storage unit of the electronic control unit 56.
In this fourth embodiment as well, a neural network having the same structure as the neural network used in the advance learning is used, and learning is further performed on-board during vehicle operation using the weights of the neural network at the time the learning was completed. FIG. 23 shows the learning processing routine of the fourth embodiment performed on-board; this learning processing routine is executed by interruption at fixed intervals (for example, every second). Note that the processing performed in each step of the learning processing routine shown in FIG. 23 is the same as the processing performed in each step of the learning processing routine shown in FIG. 12, except that the types and numbers of input values and the types and numbers of output values differ.
That is, referring to FIG. 23, first, in step 401, the learned weights stored in the storage unit of the electronic control unit 56, the training data used in the advance learning (that is, the training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the air conditioner within the preset ranges Rn), and the values An, Bn (n=1, 2, 3, 4) representing the ranges of the input data, that is, the preset ranges of the values of the plural types of operating parameters related to the air conditioner (FIG. 22A), are read in. The learned weights are used as the initial values of the weights. Next, in step 402, the number K of nodes in the hidden layer immediately preceding the output layer of the neural network used in the advance learning is read in. Next, the routine proceeds to step 403, where new input values x, that is, new values of the plural types of operating parameters related to the air conditioner, are acquired and stored in the storage unit of the electronic control unit 56. Further, in step 403, the measured values of the air volume, air direction, and operation time of the air conditioner for the new input values x are stored in the storage unit of the electronic control unit 56 as training data. That is, in step 403, the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters related to the air conditioner is stored in the storage unit of the electronic control unit 56.
Next, in step 404, it is determined whether the new input values xn, that is, the newly acquired values of the plural types of operating parameters related to the air conditioner, are within the preset ranges Rn (between An and Bn), that is, whether each new input value xn is not less than An and not more than Bn. When the new input values xn are within the preset ranges Rn, the routine proceeds to step 405, where the input values xn, that is, the newly acquired values of the plural types of operating parameters related to the air conditioner, are input to the corresponding nodes of the input layer of the neural network, and the weights of the neural network are learned by the error backpropagation method, based on the output values y1, y2, y3 output from the nodes of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters related to the air conditioner, so that the differences between the output values y1, y2, y3 and the training data become small.
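The error-backpropagation update of step 405 can be sketched as one gradient step on the squared difference between the output values and the training data. This is a generic sketch under stated assumptions: the patent does not specify the activation function, learning rate, or layer sizes, so the tanh hidden layer, the learning rate, and the 4-input/3-output shape used here (matching the fourth embodiment's inputs and outputs) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
net = {
    "W1": rng.normal(0, 0.5, (5, 4)),  # 4 inputs: temp, humidity, location, room size
    "b1": np.zeros(5),
    "W2": rng.normal(0, 0.5, (3, 5)),  # 3 outputs: air volume, direction, run time
    "b2": np.zeros(3),
}

def backprop_step(net, x, t, lr=0.05):
    """One backpropagation update minimizing 0.5*||y - t||^2,
    where t is the training data (measured optimum settings)."""
    h = np.tanh(net["W1"] @ x + net["b1"])
    y = net["W2"] @ h + net["b2"]
    err = y - t                          # output minus training data
    dW2 = np.outer(err, h)               # gradients of the squared error
    db2 = err
    dh = net["W2"].T @ err * (1 - h**2)  # tanh derivative
    dW1 = np.outer(dh, x)
    db1 = dh
    for k, g in zip(("W1", "b1", "W2", "b2"), (dW1, db1, dW2, db2)):
        net[k] -= lr * g
    return float(0.5 * err @ err)

x = np.array([0.2, 0.5, -0.1, 0.3])      # normalized operating-parameter values
t = np.array([0.4, -0.2, 0.1])           # training data for this input
losses = [backprop_step(net, x, t) for _ in range(50)]
```

Repeating the step drives the output values toward the training data, which is exactly the "learn so that the difference becomes small" condition in the text.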
On the other hand, when it is determined in step 404 that a new input value xn, that is, the value of at least one type among the newly acquired values of the plural types of operating parameters related to the air conditioner, is not within the preset range Rn (between An and Bn), for example, when, as shown in FIG. 22B, the input value x1 representing the air temperature is within the range B1 to C1 (B1 < C1), or the input value x3 representing the location is within the range C3 to A3 (C3 < A3), the routine proceeds to step 406. In step 406, first, the density D of the training data with respect to the new input value xn within the range (Bn to Cn) or (Cn to An) to which the new input value xn belongs is calculated (D = number of training data / (Cn - Bn), or D = number of training data / (An - Cn)). The definition of this training data density D is as described above. In step 406, after the training data density D is calculated, it is determined whether the training data density D has become higher than a predetermined data density D0. When the training data density D is lower than the predetermined data density D0, the processing cycle is completed.
On the other hand, when it is determined in step 406 that the training data density D has become higher than the predetermined data density D0, the routine proceeds to step 407. In this case, when D (= number of training data / (An - Cn)) > D0, the number α of additional nodes is calculated by the following equation.
Number of additional nodes α = round{(K/(Bn − An)) · (An − Cn)}
On the other hand, when D (= number of training data / (Cn - Bn)) > D0, the number α of additional nodes is calculated by the following equation.
Number of additional nodes α = round{(K/(Bn − An)) · (Cn − Bn)}
In the above equations, K represents the number of nodes, and round denotes rounding to the nearest integer.
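The density check of step 406 and the node-count formula of step 407 can be combined into one small function. The function name and numeric example are illustrative, not from the source; the formula itself follows the equations above (α scales the existing node count K by the width of the out-of-range interval relative to the preset range width Bn − An).

```python
def additional_nodes(K, A_n, B_n, C_n, n_new, D0):
    """Return the number of nodes to add when new inputs fall in an
    out-of-range interval, or 0 if the training-data density is too low.

    K          : current node count of the hidden layer before the output layer
    (A_n, B_n) : preset range of the operating parameter
    C_n        : far boundary of the out-of-range interval holding the new data
                 (C_n < A_n below the range, C_n > B_n above it)
    n_new      : number of training data acquired in that interval
    D0         : predetermined data-density threshold
    """
    # width of the interval the new training data occupies
    width = (A_n - C_n) if C_n < A_n else (C_n - B_n)
    D = n_new / width                  # training-data density
    if D <= D0:
        return 0                       # too sparse: do not grow the network yet
    # alpha = round{(K/(B_n - A_n)) * width}
    return round(K / (B_n - A_n) * width)

# Example with illustrative numbers: K=7 nodes over a 6-unit preset range,
# 10 new samples spanning 2 units above the range.
alpha = additional_nodes(K=7, A_n=5.5, B_n=11.5, C_n=13.5, n_new=10, D0=1.0)
```

Here D = 10/2 = 5 > D0, so α = round(7/6 · 2) = 2 nodes would be added.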
After the number α of additional nodes is calculated in step 407, the routine proceeds to step 408, where the number K of nodes in the hidden layer immediately preceding the output layer of the neural network is updated by being increased by the number α of additional nodes (K ← K + α). In this way, in this fourth embodiment, when the data density obtained by dividing the number of training data by the difference between the maximum and minimum values of the preset range of the values of an operating parameter increases, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased. That is, in this fourth embodiment, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased in accordance with the increase in this data density.
After the number K of nodes in the hidden layer immediately preceding the output layer is increased by the number α of additional nodes in step 408 (K ← K + α), the routine proceeds to step 409, where the neural network is updated so that the number K of nodes in the hidden layer immediately preceding the output layer is increased. The routine then proceeds to step 405. In step 405, the training data newly obtained for the new input values x is also included in the training data, and the weights of the updated neural network are learned so that the differences between the output values y1, y2, y3 and the training data become small. That is, in step 405, using both the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters related to the air conditioner and the training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the air conditioner within the preset ranges Rn, the weights of the updated neural network are learned so that the differences between the output values y1, y2, y3, which vary according to the values of the plural types of operating parameters related to the air conditioner within and outside the preset ranges, and the training data corresponding to those values of the operating parameters become small.
FIGS. 24 and 25 show a modification of the fourth embodiment. In this modification, the preset range of the values of each type of operating parameter related to the air conditioner is divided into a plurality of sections. That is, Rw, Rx, Ry, and Rz in FIG. 24 represent the preset ranges of the air temperature, the humidity, the location, and the size of the room in which the air conditioner is installed, respectively, and, as shown in FIG. 24, each of these preset ranges is divided into a plurality of sections. Note that, in FIG. 24, W1, W2 … Wn; X1, X2 … Xn; Y1, Y2 … Yn; and Z1, Z2 … Zn represent the divided ranges of the values of the respective types of operating parameters.
Furthermore, in this modification, a plurality of divided regions [Wi, Xj, Yk, Zl] (i=1, 2 … n, j=1, 2 … n, k=1, 2 … n, l=1, 2 … n) demarcated by combinations of the divided ranges of the values of the respective types of operating parameters are set in advance, and an independent neural network is created for each divided region [Wi, Xj, Yk, Zl]. These neural networks have the structure shown in FIG. 21. In this case, the number of nodes in the hidden layer (L=3) differs for each neural network; hereinafter, the number of nodes in the hidden layer immediately preceding the output layer of the neural network in the divided region [Wi, Xj, Yk, Zl] is denoted Ki,j,k,l. The number Ki,j,k,l of nodes in this hidden layer is set in advance according to the complexity of the change of the training data with respect to the change of the input values within each divided region [Wi, Xj, Yk, Zl].
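Mapping an input vector to its divided region [Wi, Xj, Yk, Zl] amounts to looking up which divided range each component falls into. A minimal sketch follows; the partition boundaries are illustrative values, not taken from the source, and the dictionary of per-region networks stands in for the independent neural networks the text describes.

```python
import bisect

# Hypothetical partition edges for the four operating parameters
# (air temperature, humidity, latitude, room size); values are illustrative.
edges = {
    "temp":     [-5, 10, 25, 40],   # boundaries of W1..Wn
    "humidity": [30, 50, 70, 90],   # X1..Xn
    "latitude": [20, 30, 38, 46],   # Y1..Yn
    "room":     [5, 15, 30, 60],    # Z1..Zn
}

def region_index(x):
    """Map an input vector to its divided region (i, j, k, l),
    or None if any component is outside its preset range."""
    idx = []
    for key, v in zip(edges, x):
        e = edges[key]
        if not (e[0] <= v <= e[-1]):
            return None                                   # outside preset range
        # bisect finds which divided range the value falls into
        idx.append(min(bisect.bisect_right(e, v) - 1, len(e) - 2))
    return tuple(idx)

networks = {}                                             # one network per region
r = region_index([22.0, 55.0, 35.0, 20.0])                # all components in range
```

Each learned divided region would keep its own weights (and its own node count Ki,j,k,l) under the key returned by `region_index`.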
In this modification, the air volume, air direction, and operation time of the air conditioner actually measured for the various input values x1, x2, x3, x4, that is, the air temperature, the humidity, the location, and the size of the room in which the air conditioner is installed, within each divided region [Wi, Xj, Yk, Zl] formed within the preset ranges Rw, Rx, Ry, Rz are obtained in advance as training data. That is, training data is obtained in advance by actual measurement for the values of the plural types of operating parameters related to the air conditioner within the preset ranges Rw, Rx, Ry, Rz; the structure of the neural network for each divided region [Wi, Xj, Yk, Zl], including the number Ki,j,k,l of nodes in the hidden layer, is determined from these values of the operating parameters and the training data; and the weights of the neural network of each divided region [Wi, Xj, Yk, Zl] are learned in advance so that the differences between the output values y1, y2, y3 and the corresponding training data become small.
Therefore, in this modification, a divided region [Wi, Xj, Yk, Zl] for which this advance learning has been performed is hereinafter also referred to as a learned divided region [Wi, Xj, Yk, Zl]. Note that the training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the air conditioner within the preset ranges Rw, Rx, Ry, Rz is stored in the storage unit of the electronic control unit 56. In this modification as well, for each divided region [Wi, Xj, Yk, Zl], a neural network having the same structure as the neural network used in the advance learning is used, and learning is further performed on-board during vehicle operation using the weights of the neural network at the time the learning was completed. FIG. 25 shows the learning processing routine of this modification performed on-board; this learning processing routine is executed by interruption at fixed intervals (for example, every second).
Referring to FIG. 25, first, in step 500, the learned weights stored in the storage unit of the electronic control unit 56, the training data used in the advance learning (that is, the training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the air conditioner within the preset ranges Rw, Rx, Ry, Rz), and the learned divided regions [Wi, Xj, Yk, Zl] are read in. The learned weights are used as the initial values of the weights. Next, in step 501, the number Ki,j,k,l of nodes in the hidden layer immediately preceding the output layer used in the advance learning for each learned divided region [Wi, Xj, Yk, Zl] is read in. Next, the routine proceeds to step 502, where new input values x1, x2, x3, x4, that is, the air temperature, the humidity, the location, and the size of the room in which the air conditioner is installed, are acquired, and these new input values x1, x2, x3, x4, that is, new values of the plural types of operating parameters related to the air conditioner, are stored in the storage unit of the electronic control unit 56. Further, in step 502, the air volume, air direction, and operation time of the air conditioner for the new input values x1, x2, x3, x4 are stored in the storage unit of the electronic control unit 56 as training data. That is, in step 502, the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters related to the air conditioner is stored in the storage unit of the electronic control unit 56.
Next, in step 503, it is determined whether the new input values x1, x2, x3, x4 are within a learned divided region [Wi, Xj, Yk, Zl], that is, whether the newly acquired values of the plural types of operating parameters related to the air conditioner are within the preset ranges Rw, Rx, Ry, Rz. When the new input values x1, x2, x3, x4 are within a learned divided region, that is, when the newly acquired values of the plural types of operating parameters related to the air conditioner are within the preset ranges Rw, Rx, Ry, Rz, the routine proceeds to step 504, where the new input values x1, x2, x3, x4, that is, the newly acquired values of the plural types of operating parameters related to the air conditioner, are input to the respective nodes of the input layer of the neural network of the learned divided region [Wi, Xj, Yk, Zl] to which those values belong, and, based on the output values y1, y2, y3 output from the nodes of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters related to the air conditioner, the weights of the neural network of the learned divided region [Wi, Xj, Yk, Zl] to which the newly acquired values belong are further learned by the error backpropagation method so that the differences between the output values y1, y2, y3 and the training data become small.
On the other hand, when it is determined in step 503 that the new input values x1, x2, x3, x4 are not within any learned divided region [Wi, Xj, Yk, Zl], the routine proceeds to step 505, where, first, an unlearned region demarcated by the new input values x1, x2, x3, x4 is set outside the preset ranges Rw, Rx, Ry, Rz. For example, when it is determined that the new input values x2, x3, x4 are within the corresponding preset ranges Rx, Ry, Rz but the new input value x1 is not within the corresponding preset range Rw, if the range to which the new input value x1 belongs is denoted Wa, an unlearned region [Wa, Xj, Yk, Zl] demarcated by the new input values x1, x2, x3, x4 is set. Similarly, when it is determined that the new input values x3, x4 are within the corresponding preset ranges Ry, Rz but the new input values x1, x2 are not within the corresponding preset ranges Rw, Rx, if the range to which the new input value x1 belongs is denoted Wa and the range to which the new input value x2 belongs is denoted Xb, an unlearned region [Wa, Xb, Yk, Zl] demarcated by the new input values x1, x2, x3, x4 is set.
Next, in step 505, a new neural network is created for the unlearned region. After the new neural network is created in step 505, the routine proceeds to step 504. In step 504, for the unlearned region, the weights of the new neural network created for the unlearned region are learned so that the differences between the output values y1, y2, y3 and the training data become small.
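Steps 505 and 504 can be sketched as follows: build a key for the unlearned region (in-range components keep their learned division index; out-of-range components get a fresh label such as Wa or Xb), then create a new network under that key. All names, the "below"/"above" labels, and the trivial stand-in indexer are assumptions for illustration; the source only specifies the region demarcation and the creation of a new network.

```python
def unlearned_region(x, ranges, learned_index):
    """Build the key of the unlearned region demarcated by a new input.
    `ranges` maps parameter name -> (min, max) of its preset range;
    `learned_index` maps parameter name -> function giving the division
    index of an in-range value (assumed given by the advance learning)."""
    key = []
    for (name, (lo, hi)), v in zip(ranges.items(), x):
        if v < lo:
            key.append((name, "below"))      # e.g. new range Wa below Rw
        elif v > hi:
            key.append((name, "above"))
        else:
            key.append((name, learned_index[name](v)))
    return tuple(key)

ranges = {"temp": (-5, 40), "humidity": (30, 90),
          "latitude": (20, 46), "room": (5, 60)}
index = {k: (lambda v: 0) for k in ranges}   # trivial stand-in indexer

# Air temperature is below its preset range, other components are in range.
key = unlearned_region([-12.0, 55.0, 35.0, 20.0], ranges, index)

nets = {}
if key not in nets:                          # step 505: create a new network
    nets[key] = {"hidden_nodes": 4, "weights": None}
```

The new network under `nets[key]` would then be trained on the training data obtained for the out-of-range input, as step 504 describes.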
FIGS. 26 to 33 show a fifth embodiment in which the machine learning apparatus of the present invention is applied to estimation of the degree of deterioration of a secondary battery. In this embodiment, the degree of deterioration of the secondary battery is detected from the air temperature, the temperature of the secondary battery, the discharge time of the secondary battery, and the discharge energy per unit time of the secondary battery. In this case, the ranges of the conditions and manners in which the secondary battery will be used, that is, the use ranges of the values of operating parameters of the secondary battery such as the air temperature, the temperature of the secondary battery, the discharge time of the secondary battery, and the discharge energy per unit time of the secondary battery, can be assumed in advance according to the type of the secondary battery. Therefore, normally, for the preset ranges of the values of the operating parameters of the secondary battery, the weights of the neural network are learned in advance so that the difference between the output value of the neural network and the actually measured degree of deterioration of the secondary battery becomes small.
However, even in this case, the values of the operating parameters of the secondary battery may fall outside the preset ranges. Since no learning based on actual values has been performed outside the preset ranges, the output value calculated using the neural network would deviate greatly from the actual value. Therefore, in this embodiment as well, when a newly acquired value of an operating parameter related to the secondary battery is outside the preset range, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased, or the number of neural networks is increased, and the weights of the neural network are learned using both the training data obtained for the newly acquired values of the operating parameters related to the secondary battery and the training data obtained for the values of the operating parameters related to the secondary battery within the preset ranges.
Next, the fifth embodiment will be described specifically. Referring to FIG. 26, 60 denotes a secondary battery, 61 denotes an electric motor, 62 denotes a drive control device for the electric motor 61, 63 denotes a voltmeter for detecting the voltage between the output terminals of the secondary battery 60, 64 denotes an ammeter for detecting the current supplied from the secondary battery 60 to the electric motor 61 via the drive control device 62, 65 denotes a thermometer for detecting the air temperature, 66 denotes a temperature sensor for detecting the temperature of the secondary battery 60, and 67 denotes an electronic control unit having the same configuration as the electronic control unit 30 shown in FIG. 1. As shown in FIG. 26, the supply current to the electric motor 61 detected by the ammeter 64, the voltage between the output terminals of the secondary battery 60 detected by the voltmeter 63, the air temperature detected by the thermometer 65, and the temperature of the secondary battery 60 detected by the temperature sensor 66 are input to the electronic control unit 67, and the estimated value of the degree of deterioration of the secondary battery 60 is calculated in the electronic control unit 67. Note that, in the electronic control unit 67, the discharge time of the secondary battery 60 is obtained based on the detection value of the ammeter 64, and the discharge energy per unit time (current × voltage) of the secondary battery 60 is obtained based on the detection values of the ammeter 64 and the voltmeter 63.
FIG. 27 shows the neural network used in this fifth embodiment. In this fifth embodiment, as shown in FIG. 27, the input layer (L=1) of the neural network consists of four nodes, and an input value x1 representing the air temperature, an input value x2 representing the temperature of the secondary battery 60, an input value x3 representing the discharge time of the secondary battery 60, and an input value x4 representing the discharge energy per unit time of the secondary battery 60 are input to the respective nodes. The number of hidden layers (L=2, L=3) may be one or any number, and the number of nodes in the hidden layers (L=2, L=3) may also be any number. In this fifth embodiment, the output layer (L=4) consists of a single node, which outputs an output value y representing the degree of deterioration of the secondary battery 60.
On the other hand, in FIG. 28A, R1, the interval between A1 and B1, represents the preset range of the air temperature (for example, -5°C to 40°C); R2, the interval between A2 and B2, represents the preset range of the temperature of the secondary battery 60 (for example, -40°C to 40°C); R3, the interval between A3 and B3, represents the preset range of the discharge time of the secondary battery 60; and R4, the interval between A4 and B4, represents the preset range of the discharge energy per unit time of the secondary battery 60. Note that FIG. 28B is the same as FIG. 28A: the interval between A1 and B1 represents the preset range of the air temperature, the interval between A2 and B2 the preset range of the temperature of the secondary battery 60, the interval between A3 and B3 the preset range of the discharge time of the secondary battery 60, and the interval between A4 and B4 the preset range of the discharge energy per unit time of the secondary battery 60.
Here, the relationship between the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60 on the one hand, and the degree of deterioration of the secondary battery 60 on the other, will be briefly explained. The more the secondary battery 60 deteriorates, the higher its internal resistance becomes, so the degree of deterioration of the secondary battery 60 can be estimated from the change in the internal resistance. In practice, however, detecting the internal resistance is difficult. On the other hand, for a constant discharge current, the higher the internal resistance, the greater the heat generated by the secondary battery 60; therefore, the higher the internal resistance, that is, the more the secondary battery 60 has deteriorated, the higher the temperature of the secondary battery 60 becomes. The degree of deterioration of the secondary battery 60 can therefore be estimated based on the amount of temperature rise of the secondary battery 60. In this case, the amount of temperature rise of the secondary battery 60 is influenced by the air temperature, and is also governed by the discharge time of the secondary battery 60 and the discharge energy per unit time of the secondary battery 60. Accordingly, the degree of deterioration of the secondary battery 60 is determined by the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, and can thus be estimated from these quantities.
On the other hand, when the secondary battery 60 deteriorates, the amount of charge that can be stored in the secondary battery 60 decreases. In this case, when the circuit of the secondary battery 60 is closed immediately after charging of the secondary battery 60 is completed, a voltage proportional to the amount of charge stored in the secondary battery 60 appears between the output terminals of the secondary battery 60. That is, the voltage between the output terminals of the secondary battery 60 detected by the voltmeter 63 immediately after charging of the secondary battery 60 is completed is proportional to the amount of charge stored in the secondary battery 60. Therefore, the degree of deterioration of the secondary battery 60 can be detected from the voltage detected by the voltmeter 63 immediately after charging of the secondary battery 60 is completed. Accordingly, in this fifth embodiment, the degree of deterioration of the secondary battery 60 detected from the voltage detected by the voltmeter 63 immediately after charging of the secondary battery 60 is completed is used as the training data for the output value y.
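Under the proportionality the text states, the degradation-based training datum can be computed from the post-charge terminal voltage by comparing it with a reference. This is a hedged sketch: the reference voltage `v_new` (the post-charge voltage of a fresh battery) is a hypothetical calibration constant not given in the source, and the specific formula is one plausible reading of "detected from the voltage".

```python
def degradation_from_voltage(v_measured, v_new):
    """Estimate the degree of deterioration from the terminal voltage
    measured immediately after charging completes. Assumes the measured
    voltage is proportional to the stored charge, as the text states.
    v_new: post-charge voltage of a fresh battery (hypothetical constant)."""
    charge_ratio = v_measured / v_new    # fraction of the original charge
    return 1.0 - charge_ratio            # 0.0 = fresh, larger = more degraded

d = degradation_from_voltage(v_measured=3.8, v_new=4.0)
```

The resulting value would be stored as the training datum for the output value y, as in step 613 of FIG. 30.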
Next, referring to FIGS. 29 and 30, the routine for calculating the discharge time and the like of the secondary battery 60 and the training data acquisition processing routine, both executed in the electronic control unit 67, will be described. Referring to FIG. 29, which shows the calculation routine, in step 600, the discharge time of the secondary battery 60 is calculated from the output value of the ammeter 64. Next, in step 601, the discharge energy per unit time of the secondary battery 60 is calculated from the output value of the ammeter 64 and the output value of the voltmeter 63.
On the other hand, referring to FIG. 30, which shows the training data acquisition processing routine, first, in step 610, it is determined whether charging of the secondary battery 60 is in progress. When charging of the secondary battery 60 is not in progress, the processing cycle is completed. In contrast, when charging of the secondary battery 60 is in progress, the routine proceeds to step 611, where it is determined whether charging of the secondary battery 60 has been completed. When it is determined that charging of the secondary battery 60 has been completed, the routine proceeds to step 612, where it is determined whether the training-data request flag, which is set when training data is requested, has been set. This training-data request flag will be described later. When the training-data request flag has not been set, the processing cycle is completed. In contrast, when the training-data request flag has been set, the routine proceeds to step 613, where the degree of deterioration of the secondary battery 60 is detected from the voltage detected by the voltmeter 63. Next, the routine proceeds to step 614, where the additional-learning flag is set.
In this fifth embodiment as well, the degree of deterioration of the secondary battery 60 for the various input values xn (n=1, 2, 3, 4) within the preset ranges Rn, that is, the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, is obtained in advance as training data. That is, training data is obtained in advance by actual measurement for the values of the plural types of operating parameters related to the secondary battery 60 within the preset ranges Rn, the structure of the neural network is determined from these values of the operating parameters and the training data, and the weights of the neural network are learned in advance so that the difference between the output value y and the training data corresponding to the values of the plural types of operating parameters related to the secondary battery 60 becomes small. The training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the secondary battery 60 within the preset ranges Rn is stored in the storage unit of the electronic control unit 67.
In this fifth embodiment as well, a neural network having the same structure as the neural network used in the advance learning is used, and learning is further performed on-board during vehicle operation using the weights of the neural network at the time the learning was completed. FIG. 31 shows the learning processing routine of the fifth embodiment performed on-board; this learning processing routine is executed by interruption at fixed intervals (for example, every second).
That is, referring to FIG. 31, first, in step 700, the learned weights stored in the storage unit of the electronic control unit 67, the training data used in the advance learning (that is, the training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the secondary battery 60 within the preset ranges Rn), and the values An, Bn (n=1, 2, 3, 4) representing the ranges of the input data, that is, the preset ranges of the values of the plural types of operating parameters related to the secondary battery 60 (FIG. 28A), are read in. The learned weights are used as the initial values of the weights. Next, in step 701, the number K of nodes in the hidden layer immediately preceding the output layer of the neural network used in the advance learning is read in. Next, in step 702, it is determined whether the additional-learning flag has been set. When the additional-learning flag has not been set, the routine proceeds to step 703.
In step 703, new input values x, that is, new values of the plural types of operating parameters related to the secondary battery 60, are acquired and stored in the storage unit of the electronic control unit 67.
Next, in step 704, it is determined whether the new input values xn, that is, the newly acquired values of the plural types of operating parameters related to the secondary battery 60, are within the preset ranges Rn (between An and Bn), that is, whether each new input value xn is not less than An and not more than Bn. When the new input values xn are within the preset ranges Rn, the routine proceeds to step 705, where the input values xn, that is, the newly acquired values of the plural types of operating parameters related to the secondary battery 60, are input to the corresponding nodes of the input layer of the neural network, and the weights of the neural network are learned by the error backpropagation method, based on the output value y output from the node of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters related to the secondary battery 60, so that the difference between the output value y and the training data becomes small.
On the other hand, when it is determined in step 704 that a new input value xn, that is, the value of at least one type among the newly acquired values of the plural types of operating parameters related to the secondary battery 60, is not within the preset range Rn (between An and Bn), for example, when, as shown in FIG. 28B, the input value x1 representing the air temperature is within the range B1 to C1 (B1 < C1), or the input value x3 representing the discharge time of the secondary battery 60 is within the range C3 to A3 (C3 < A3), the routine proceeds to step 706. In step 706, the training-data request flag is set, and the new input values xn acquired at this time are stored as new input values xn to be used for the additional learning. The processing cycle is then completed.
After the training-data request flag is set, as can be seen from the training data acquisition processing routine of FIG. 30, when charging of the secondary battery 60 is completed, the degree of deterioration of the secondary battery 60 is detected and stored as training data to be used for the additional learning, and the additional-learning flag is then set. After the additional-learning flag has been set, in the next processing cycle the routine proceeds from step 702 to step 707. In step 707, the new input values xn stored for use in the additional learning and the training data stored for use in the additional learning are read out from the storage unit, and the density D of the training data with respect to the new input value xn within the range (Bn to Cn) or (Cn to An) to which the new input value xn belongs is calculated (D = number of training data / (Cn - Bn), or D = number of training data / (An - Cn)). The definition of this training data density D is as described above. In step 707, after the training data density D is calculated, it is determined whether the training data density D has become higher than the predetermined data density D0. When the training data density D is lower than the predetermined data density D0, the processing cycle is completed.
On the other hand, when it is determined in step 707 that the training data density D has become higher than the predetermined data density D0, the routine proceeds to step 708. In this case, when D (= number of training data / (An - Cn)) > D0, the number α of additional nodes is calculated by the following equation.
Number of additional nodes α = round{(K/(Bn − An)) · (An − Cn)}
On the other hand, when D (= number of training data / (Cn - Bn)) > D0, the number α of additional nodes is calculated by the following equation.
Number of additional nodes α = round{(K/(Bn − An)) · (Cn − Bn)}
In the above equations, K represents the number of nodes, and round denotes rounding to the nearest integer.
After the number α of additional nodes is calculated in step 708, the routine proceeds to step 709, where the number K of nodes in the hidden layer immediately preceding the output layer of the neural network is updated by being increased by the number α of additional nodes (K ← K + α). In this way, in this fifth embodiment, when the data density obtained by dividing the number of training data by the difference between the maximum and minimum values of the preset range of the values of an operating parameter increases, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased. That is, in this fifth embodiment, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased in accordance with the increase in this data density.
After the number K of nodes in the hidden layer immediately preceding the output layer is increased by the number α of additional nodes in step 709 (K ← K + α), the routine proceeds to step 710, where the neural network is updated so that the number K of nodes in the hidden layer immediately preceding the output layer is increased. The routine then proceeds to step 705. In step 705, the training data newly obtained for the new input values x is also included in the training data, and the weights of the updated neural network are learned so that the difference between the output value y and the training data becomes small. That is, in step 705, using both the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters related to the secondary battery 60 and the training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the secondary battery 60 within the preset ranges Rn, the weights of the updated neural network are learned so that the difference between the output value y, which varies according to the values of the plural types of operating parameters related to the secondary battery 60 within and outside the preset ranges, and the training data corresponding to those values becomes small.
FIGS. 32 and 33 show a modification of the fifth embodiment. In this modification, the preset range of the values of each type of operating parameter related to the secondary battery 60 is divided into a plurality of sections. That is, Rw, Rx, Ry, and Rz in FIG. 32 represent the preset ranges of the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, respectively, and, as shown in FIG. 32, each of these preset ranges is divided into a plurality of sections. Note that, in FIG. 32, W1, W2 … Wn; X1, X2 … Xn; Y1, Y2 … Yn; and Z1, Z2 … Zn represent the divided ranges of the values of the respective types of operating parameters.
Furthermore, in this modification, a plurality of divided regions [Wi, Xj, Yk, Zl] (i=1, 2 … n, j=1, 2 … n, k=1, 2 … n, l=1, 2 … n) demarcated by combinations of the divided ranges of the values of the respective types of operating parameters are set in advance, and an independent neural network is created for each divided region [Wi, Xj, Yk, Zl]. These neural networks have the structure shown in FIG. 27. In this case, the number of nodes in the hidden layer (L=3) differs for each neural network; hereinafter, the number of nodes in the hidden layer immediately preceding the output layer of the neural network in the divided region [Wi, Xj, Yk, Zl] is denoted Ki,j,k,l. The number Ki,j,k,l of nodes in this hidden layer is set in advance according to the complexity of the change of the training data with respect to the change of the input values within each divided region [Wi, Xj, Yk, Zl].
In this modification, the degree of deterioration of the secondary battery 60 actually measured for the various input values x1, x2, x3, x4, that is, the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, within each divided region [Wi, Xj, Yk, Zl] formed within the preset ranges Rw, Rx, Ry, Rz of the values of the plural types of operating parameters related to the secondary battery 60 is obtained in advance as training data. That is, training data is obtained in advance by actual measurement for the values of the plural types of operating parameters related to the secondary battery 60 within the preset ranges Rw, Rx, Ry, Rz; the structure of the neural network for each divided region [Wi, Xj, Yk, Zl], including the number Ki,j,k,l of nodes in the hidden layer, is determined from these values of the operating parameters and the training data; and the weights of the neural network of each divided region [Wi, Xj, Yk, Zl] are learned in advance so that the difference between the output value y and the corresponding training data becomes small.
Therefore, in this modification, a divided region [Wi, Xj, Yk, Zl] for which this advance learning has been performed is hereinafter also referred to as a learned divided region [Wi, Xj, Yk, Zl]. Note that the training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the secondary battery 60 within the preset ranges Rw, Rx, Ry, Rz is stored in the storage unit of the electronic control unit 67. In this modification as well, for each divided region [Wi, Xj, Yk, Zl], a neural network having the same structure as the neural network used in the advance learning is used, and learning is further performed on-board during vehicle operation using the weights of the neural network at the time the learning was completed. FIG. 33 shows the learning processing routine of this modification performed on-board; this learning processing routine is executed by interruption at fixed intervals (for example, every second).
Referring to FIG. 33, first, in step 800, the learned weights stored in the storage unit of the electronic control unit 67, the training data used in the advance learning (that is, the training data obtained in advance by actual measurement for the values of the plural types of operating parameters related to the secondary battery 60 within the preset ranges Rw, Rx, Ry, Rz), and the learned divided regions [Wi, Xj, Yk, Zl] are read in. The learned weights are used as the initial values of the weights. Next, in step 801, the number Ki,j,k,l of nodes in the hidden layer immediately preceding the output layer used in the advance learning for each learned divided region [Wi, Xj, Yk, Zl] is read in. Next, in step 802, it is determined whether the additional-learning flag has been set. When the additional-learning flag has not been set, the routine proceeds to step 803.
In step 803, new input values x1, x2, x3, x4, that is, the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, are acquired, and these new input values x1, x2, x3, x4, that is, new values of the plural types of operating parameters related to the secondary battery 60, are stored in the storage unit of the electronic control unit 67.
Next, in step 804, it is determined whether the new input values x1, x2, x3, x4 are within a learned divided region [Wi, Xj, Yk, Zl], that is, whether the newly acquired values of the plural types of operating parameters related to the secondary battery 60 are within the preset ranges Rw, Rx, Ry, Rz. When the new input values x1, x2, x3, x4 are within a learned divided region, that is, when the newly acquired values of the plural types of operating parameters related to the secondary battery 60 are within the preset ranges Rw, Rx, Ry, Rz, the routine proceeds to step 805, where the input values x1, x2, x3, x4, that is, the newly acquired values of the plural types of operating parameters related to the secondary battery 60, are input to the respective nodes of the input layer of the neural network of the learned divided region [Wi, Xj, Yk, Zl] to which those values belong, and, based on the output value y output from the node of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters related to the secondary battery 60, the weights of the neural network of the learned divided region [Wi, Xj, Yk, Zl] to which the newly acquired values belong are further learned by the error backpropagation method so that the difference between the output value y and the training data becomes small.
On the other hand, when it is determined in step 804 that the new input values x1, x2, x3, x4 are not within any learned divided region [Wi, Xj, Yk, Zl], the routine proceeds to step 806. In step 806, the training-data request flag is set, and the new input values xn acquired at this time are stored as new input values xn to be used for the additional learning. The processing cycle is then completed.
After the training-data request flag is set, as can be seen from the training data acquisition processing routine of FIG. 30, when charging of the secondary battery 60 is completed, the degree of deterioration of the secondary battery 60 is detected and stored as training data to be used for the additional learning, and the additional-learning flag is then set. After the additional-learning flag has been set in this way, in the next processing cycle the routine proceeds from step 802 to step 807. In step 807, the new input values xn stored for use in the additional learning and the training data stored for use in the additional learning are read out from the storage unit, and an unlearned region demarcated by the new input values x1, x2, x3, x4 stored for use in the additional learning is set outside the preset ranges Rw, Rx, Ry, Rz. For example, when it is determined that the new input values x2, x3, x4 are within the corresponding preset ranges Rx, Ry, Rz but the new input value x1 is not within the corresponding preset range Rw, if the range to which the new input value x1 belongs is denoted Wa, an unlearned region [Wa, Xj, Yk, Zl] demarcated by the new input values x1, x2, x3, x4 is set. Similarly, when it is determined that the new input values x3, x4 are within the corresponding preset ranges Ry, Rz but the new input values x1, x2 are not within the corresponding preset ranges Rw, Rx, if the range to which the new input value x1 belongs is denoted Wa and the range to which the new input value x2 belongs is denoted Xb, an unlearned region [Wa, Xb, Yk, Zl] demarcated by the new input values x1, x2, x3, x4 is set.
Next, in step 807, a new neural network is created for the unlearned region. When the new neural network has been created in step 807, the routine proceeds to step 805. In step 805, for the unlearned region, the weights of the new neural network created for that region are learned so that the difference between the output value y and the training data stored for additional learning becomes small.
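The control flow of steps 802 through 807 can be sketched as one processing cycle. This is a schematic assumption only: the many learned division regions [Wi, Xj, Yk, Zl] are collapsed into region keys of per-parameter in-range flags, and `state`, `train_step`, and `make_network` are hypothetical names standing in for the electronic control unit's storage, the backpropagation update, and neural-network creation.

```python
def learning_cycle(state, x, target, ranges, train_step, make_network):
    """One interrupt-driven cycle of the FIG. 33 routine (hedged sketch).

    state["networks"] maps a region key (tuple of per-parameter in-range
    booleans) to a network; train_step(net, x, t) updates that network's
    weights so the output approaches the training value t (step 805).
    """
    if state.pop("additional_learning", False):
        # Step 807: demarcate the unlearned region from the stored inputs,
        # create a fresh network for it, then train it (step 805).
        xs = state.pop("pending_x")
        region = tuple(lo <= xi <= hi for xi, (lo, hi) in zip(xs, ranges))
        net = state["networks"].setdefault(region, make_network())
        train_step(net, xs, state.pop("pending_target"))
        return "additional-learning"
    if all(lo <= xi <= hi for xi, (lo, hi) in zip(x, ranges)):  # step 804
        # Step 805: further learn the already-learned (all-in-range) region.
        train_step(state["networks"][(True,) * len(x)], x, target)
        return "further-learning"
    # Step 806: no training data exists yet for this out-of-range input;
    # request it (measured later, per FIG. 30) and store x for later.
    state["request_training_data"] = True
    state["pending_x"] = x
    return "training-data-requested"
```

A driver would call `learning_cycle` on a timer (for example, every second), with the FIG. 30 routine filling in `pending_target` and setting `additional_learning` once the deterioration measurement becomes available.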
From the above description, in the embodiment of the present invention, in a machine learning device for outputting, using a neural network, an output value with respect to the values of operating parameters of a machine, a range of values of a specific type of operating parameter relating to the machine is set in advance, and the number of nodes of the hidden layer of the neural network corresponding to that range of values is set in advance. When a newly acquired value of the specific type of operating parameter relating to the machine is outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased, and the weights of the neural network are learned using training data obtained by actual measurement for the newly acquired value of the specific type of operating parameter relating to the machine together with training data obtained by actual measurement for the values of the operating parameters of the machine within the preset range. The neural network whose weights have been learned is used to output an output value with respect to the values of the specific type of operating parameter relating to the machine.
In this case, in the embodiment of the present invention, the machine learning device includes an electronic control unit. The electronic control unit includes: a parameter value acquisition unit that acquires the values of the specific type of operating parameter relating to the machine; a computation unit that performs computations using a neural network including an input layer, a hidden layer, and an output layer; and a storage unit. The values of the specific type of operating parameter relating to the machine are input to the input layer, and an output value that changes according to those values is output from the output layer. A range of values of the specific type of operating parameter relating to the machine is set in advance, the number of nodes of the hidden layer of the neural network corresponding to that range is set in advance, and training data obtained in advance by actual measurement for the values of the specific type of operating parameter within the preset range are stored in the storage unit. When a newly acquired value of the specific type of operating parameter relating to the machine is within the preset range, the computation unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired value, so that the difference between the output value that changes according to the newly acquired value and the training data obtained by actual measurement for that value becomes small. When a value of the specific type of operating parameter relating to the machine newly acquired by the parameter value acquisition unit is outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values, or with the increase in the data density obtained by dividing the number of training data by the difference between the maximum value and the minimum value of the preset range of the values of the operating parameter, and the computation unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values together with the training data obtained in advance, so that the differences between the output values that change according to the values of the specific type of operating parameter within and outside the preset range and the training data corresponding to those values become small. The neural network whose weights have been learned is used to output an output value with respect to the values of the specific type of operating parameter relating to the machine.
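The data density referred to above is the number of training data divided by the difference between the maximum and minimum of the preset range. The following hedged sketch grows the node count of the hidden layer preceding the output layer with that density; the strictly proportional rule is an assumption for illustration, since the embodiment only requires the node count to increase as the number of training data or the data density increases.

```python
def data_density(n_training, range_min, range_max):
    """Training data per unit of the preset parameter range:
    n / (max - min), as described in the embodiment."""
    return n_training / (range_max - range_min)

def updated_node_count(base_nodes, base_density, new_density):
    """Increase the node count of the hidden layer preceding the output
    layer in proportion to the density increase (assumed rule); never
    shrink it when the density has not grown."""
    if new_density <= base_density:
        return base_nodes
    return round(base_nodes * new_density / base_density)
```

For example, if the preset range held 100 training points over a span of 50 (density 2.0) and additional out-of-range data doubles the density, a 7-node layer would grow to 14 nodes under this assumed rule.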
In addition, in this case, in the embodiment of the present invention, the electronic control unit includes: a parameter value acquisition unit that acquires the values of the specific type of operating parameter relating to the machine; a computation unit that performs computations using a neural network including an input layer, a hidden layer, and an output layer; and a storage unit. The values of the specific type of operating parameter relating to the machine are input to the input layer, and a plurality of output values that change according to those values are output from the output layer. A range of values of the specific type of operating parameter relating to the machine is set in advance, the number of nodes of the hidden layer of the neural network corresponding to that range is set in advance, and training data obtained in advance by actual measurement for the values of the specific type of operating parameter within the preset range are stored in the storage unit. When the values of the operating parameters of the machine newly acquired by the parameter value acquisition unit are within the preset range, the computation unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values, so that the differences between the plurality of output values that change according to the values of the specific type of operating parameter relating to the machine and the training data corresponding to those values become small. When a value of the specific type of operating parameter relating to the machine newly acquired by the parameter value acquisition unit is outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values, or with the increase in the data density obtained by dividing the number of training data by the difference between the maximum value and the minimum value of the preset range, and the computation unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values within and outside the preset range together with the training data obtained in advance, so that the differences between the plurality of output values that change according to the values of the specific type of operating parameter relating to the machine and the training data corresponding to those values become small. The neural network whose weights have been learned is used to output a plurality of output values with respect to the values of the specific type of operating parameter relating to the machine.
On the other hand, in the embodiment of the present invention, in a machine learning device for outputting, using a neural network, an output value with respect to the values of operating parameters of a machine, ranges of the values of a plurality of types of operating parameters relating to the machine are set in advance, and the number of nodes of the hidden layer of the neural network corresponding to those ranges is set in advance. When newly acquired values of the plurality of types of operating parameters relating to the machine are outside the preset ranges, the number of nodes of the hidden layer preceding the output layer of the neural network is increased, and the weights of the neural network are learned using training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters relating to the machine together with training data obtained by actual measurement for the values of the operating parameters of the machine within the preset ranges. The neural network whose weights have been learned is used to output an output value with respect to the values of the plurality of types of operating parameters relating to the machine.
In this case, in the embodiment of the present invention, the machine learning device includes an electronic control unit. The electronic control unit includes: a parameter value acquisition unit that acquires the values of a plurality of types of operating parameters relating to the machine; a computation unit that performs computations using a neural network including an input layer, a hidden layer, and an output layer; and a storage unit. The values of the plurality of types of operating parameters relating to the machine are input to the input layer, and an output value that changes according to those values is output from the output layer. For each of the plurality of types of operating parameters relating to the machine, a range of values of that type of operating parameter is set in advance, the number of nodes of the hidden layer of the neural network corresponding to the ranges of the values of the plurality of types of operating parameters is set in advance, and training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters, with the value of each type within its preset range, are stored in the storage unit. When the values of the plurality of operating parameters of the machine newly acquired by the parameter value acquisition unit are each within the preset ranges, the computation unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values, so that the difference between the output value that changes according to the values of the plurality of types of operating parameters relating to the machine and the training data corresponding to those values becomes small. When the value of at least one of the plurality of types of operating parameters relating to the machine newly acquired by the parameter value acquisition unit is outside its preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values, or with the increase in the data density obtained by dividing the number of training data by the difference between the maximum value and the minimum value of the preset range, and the computation unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values within and outside the preset ranges together with the training data obtained in advance, so that the difference between the output value that changes according to the values of the plurality of types of operating parameters relating to the machine and the training data corresponding to those values becomes small. The neural network whose weights have been learned is used to output an output value with respect to the values of the plurality of types of operating parameters relating to the machine.
In addition, in this case, in the embodiment of the present invention, the electronic control unit includes: a parameter value acquisition unit that acquires the values of a plurality of types of operating parameters relating to the machine; a computation unit that performs computations using a neural network including an input layer, a hidden layer, and an output layer; and a storage unit. The values of the plurality of types of operating parameters relating to the machine are input to the input layer, and a plurality of output values that change according to those values are output from the output layer. For each of the plurality of types of operating parameters relating to the machine, a range of values of that type of operating parameter is set in advance, the number of nodes of the hidden layer of the neural network corresponding to the ranges of the values of the plurality of types of operating parameters is set in advance, and training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters, with the value of each type within its preset range, are stored in the storage unit. When the values of the plurality of operating parameters of the machine newly acquired by the parameter value acquisition unit are each within the preset ranges, the computation unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values, so that the differences between the plurality of output values that change according to the values of the plurality of types of operating parameters relating to the machine and the training data corresponding to those values become small. When the value of at least one of the plurality of types of operating parameters relating to the machine newly acquired by the parameter value acquisition unit is outside its preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values, or with the increase in the data density obtained by dividing the number of training data by the difference between the maximum value and the minimum value of the preset range, and the computation unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values within and outside the preset ranges together with the training data obtained in advance, so that the differences between the plurality of output values that change according to the values of the plurality of types of operating parameters relating to the machine and the training data corresponding to those values become small. The neural network whose weights have been learned is used to output a plurality of output values with respect to the values of the plurality of types of operating parameters relating to the machine.
On the other hand, in the embodiment of the present invention, in a machine learning device for outputting, using a neural network, an output value with respect to the values of operating parameters of a machine, ranges of the values of a plurality of types of operating parameters relating to the machine are set in advance, and a neural network corresponding to those ranges is formed in advance. When at least one of the newly acquired values of the plurality of types of operating parameters relating to the machine is outside its preset range, a new neural network is formed, and the weights of the new neural network are learned using training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters relating to the machine. The neural network whose weights have been learned is used to output an output value with respect to the values of the plurality of types of operating parameters relating to the machine.
In this case, in the embodiment of the present invention, the machine learning device includes an electronic control unit. The electronic control unit includes: a parameter value acquisition unit that acquires the values of a plurality of types of operating parameters relating to the machine; a computation unit that performs computations using a plurality of neural networks each including an input layer, a hidden layer, and an output layer; and a storage unit. The values of the plurality of types of operating parameters relating to the machine are input to the input layer, and an output value that changes according to those values is output from the corresponding output layer. For each of the plurality of types of operating parameters relating to the machine, a range of values of that type of operating parameter is set in advance, the preset range of the values of each type of operating parameter is divided into a plurality of sub-ranges, and a plurality of division regions demarcated by combinations of the divided sub-ranges of the values of the types of operating parameters are set in advance. A neural network is created for each division region, and the number of nodes of the hidden layer of each neural network is set in advance. Training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters are stored in the storage unit. When the values of the plurality of types of operating parameters relating to the machine newly acquired by the parameter value acquisition unit are within the preset ranges, the computation unit learns the weights of the neural network of the division region to which the newly acquired values belong, using the training data obtained by actual measurement for the newly acquired values, so that the difference between the output value that changes according to the values of the plurality of types of operating parameters relating to the machine and the training data corresponding to those values becomes small. When the value of at least one of the plurality of types of operating parameters relating to the machine newly acquired by the parameter value acquisition unit is outside its preset range, a new region to which the value of that at least one type of operating parameter belongs, demarcated by a combination of the preset ranges of the values of the types of operating parameters, is set, and a new neural network is created for the new region. Using the training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters relating to the machine, the computation unit learns the weights of the new neural network so that the difference between the output value that changes according to the values of the plurality of types of operating parameters relating to the machine and the training data corresponding to those values becomes small. Each neural network whose weights have been learned is used to output an output value with respect to the values of the operating parameters of the machine.
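The per-region bookkeeping described above, demarcating division regions by combining the subdivided ranges of each parameter and creating a new neural network when a value falls outside its preset range, can be sketched as follows. The boundary lists, the `None` marker for out-of-range sub-ranges, and the `make_network` factory are illustrative assumptions.

```python
import bisect

def region_index(value, boundaries):
    """Index of the sub-range [boundaries[i], boundaries[i+1]] containing
    `value`, or None when the value lies outside the preset range."""
    if not (boundaries[0] <= value <= boundaries[-1]):
        return None
    # bisect_right finds the insertion point; clamp so the upper endpoint
    # of the last sub-range still maps to that sub-range.
    return min(bisect.bisect_right(boundaries, value) - 1, len(boundaries) - 2)

def network_for(values, all_boundaries, networks, make_network):
    """Return the neural network of the division region the values belong
    to, creating a new network when the region has not been seen before
    (the embodiment's behavior when at least one value is out of range)."""
    key = tuple(region_index(v, b) for v, b in zip(values, all_boundaries))
    if key not in networks:
        networks[key] = make_network()  # new neural network for a new region
    return networks[key]
```

Here the region key plays the role of [Wi, Xj, Yk, Zl]: repeated inputs from the same region reuse one network, while an out-of-range input yields a fresh key and therefore a fresh network to be trained on the newly measured data.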
Reference Signs List
1 Internal combustion engine
14 Throttle valve opening sensor
23 NOx sensor
24 Atmospheric temperature sensor
30, 56, 67 Electronic control unit
50 Air conditioner main body
53, 65 Thermometer
54 Hygrometer
55 GPS
60 Secondary battery
53 Ammeter
64 Voltmeter
Claims (9)
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-018425 | 2018-02-05 | ||
JP2018018425 | 2018-02-05 | ||
JP2018216766A JP2019135392A (en) | 2018-02-05 | 2018-11-19 | Control device for internal combustion engine and device for outputting output value |
JP2018-216850 | 2018-11-19 | ||
JP2018-216766 | 2018-11-19 | ||
JP2018216850A JP6501032B1 (en) | 2018-11-19 | 2018-11-19 | Machine learning device |
PCT/JP2019/004080 WO2019151536A1 (en) | 2018-02-05 | 2019-02-05 | Machine learning device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110352297A CN110352297A (en) | 2019-10-18 |
CN110352297B true CN110352297B (en) | 2020-09-15 |
Family ID=67910416
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201980001105.XA Expired - Fee Related CN110352297B (en) | 2018-02-05 | 2019-02-05 | machine learning device |
Country Status (3)
Country | Link |
---|---|
US (1) | US10853727B2 (en) |
CN (1) | CN110352297B (en) |
DE (1) | DE112019000020B4 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6852141B2 (en) * | 2018-11-29 | 2021-03-31 | キヤノン株式会社 | Information processing device, imaging device, control method of information processing device, and program |
JP6593560B1 (en) * | 2019-02-15 | 2019-10-23 | トヨタ自動車株式会社 | Internal combustion engine misfire detection device, internal combustion engine misfire detection system, data analysis device, and internal combustion engine control device |
JP6849028B2 (en) * | 2019-08-23 | 2021-03-24 | ダイキン工業株式会社 | Air conditioning control system, air conditioner, and machine learning device |
US11427210B2 (en) * | 2019-09-13 | 2022-08-30 | Toyota Research Institute, Inc. | Systems and methods for predicting the trajectory of an object with the aid of a location-specific latent map |
KR102726697B1 (en) * | 2019-12-11 | 2024-11-06 | 현대자동차주식회사 | System and Method for providing driving information based on big data |
US11459962B2 (en) * | 2020-03-02 | 2022-10-04 | Sparkcognitton, Inc. | Electronic valve control |
US20230029746A1 (en) * | 2021-08-02 | 2023-02-02 | Prezerv Technologies | Mapping subsurface infrastructure |
KR20230045490A (en) * | 2021-09-28 | 2023-04-04 | 에스케이플래닛 주식회사 | Apparatus for providing traffic information based on driving noise and method therefor |
CN115144301B (en) * | 2022-06-29 | 2024-12-03 | 厦门大学 | A method for automatic identification of scale alignment in static weighing calibration of glass float |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6098012A (en) * | 1995-02-13 | 2000-08-01 | Daimlerchrysler Corporation | Neural network based transient fuel control method |
CN1981123A (en) * | 2004-06-25 | 2007-06-13 | Fev电机技术有限公司 | Motor vehicle control device provided with a neuronal network |
JP2007299366A (en) * | 2006-01-31 | 2007-11-15 | Sony Corp | Learning system and method, recognition device and method, creation device and method, recognition and creation device and method, and program |
CN101630144A (en) * | 2009-08-18 | 2010-01-20 | 湖南大学 | Self-learning inverse model control method of electronic throttle |
JP2011132915A (en) * | 2009-12-25 | 2011-07-07 | Honda Motor Co Ltd | Device for estimating physical quantity |
JP2012112277A (en) * | 2010-11-24 | 2012-06-14 | Honda Motor Co Ltd | Control device of internal combustion engine |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5093899A (en) | 1988-09-17 | 1992-03-03 | Sony Corporation | Neural network with normalized learning constant for high-speed stable learning |
JP2606317B2 (en) | 1988-09-20 | 1997-04-30 | ソニー株式会社 | Learning processing device |
JPH0738186B2 (en) | 1989-03-13 | 1995-04-26 | Sharp Corporation | Self-expanding neural network
US5331550A (en) * | 1991-03-05 | 1994-07-19 | E. I. Du Pont De Nemours And Company | Application of neural networks as an aid in medical diagnosis and general anomaly detection |
JPH1182137A (en) | 1998-02-09 | 1999-03-26 | Matsushita Electric Ind Co Ltd | Parameter estimation device |
US6269351B1 (en) | 1999-03-31 | 2001-07-31 | Dryken Technologies, Inc. | Method and system for training an artificial neural network |
US7483868B2 (en) | 2002-04-19 | 2009-01-27 | Computer Associates Think, Inc. | Automatic neural-net model generation and maintenance |
US7917333B2 (en) * | 2008-08-20 | 2011-03-29 | Caterpillar Inc. | Virtual sensor network (VSN) based control system and method |
US9400955B2 (en) * | 2013-12-13 | 2016-07-26 | Amazon Technologies, Inc. | Reducing dynamic range of low-rank decomposition matrices |
JP5899272B2 (en) | 2014-06-19 | 2016-04-06 | Yahoo Japan Corporation | Calculation device, calculation method, and calculation program
US20190073580A1 (en) * | 2017-09-01 | 2019-03-07 | Facebook, Inc. | Sparse Neural Network Modeling Infrastructure |
US10634081B2 (en) | 2018-02-05 | 2020-04-28 | Toyota Jidosha Kabushiki Kaisha | Control device of internal combustion engine |
- 2019-02-05 US US16/486,836 patent/US10853727B2/en not_active Expired - Fee Related
- 2019-02-05 CN CN201980001105.XA patent/CN110352297B/en not_active Expired - Fee Related
- 2019-02-05 DE DE112019000020.9T patent/DE112019000020B4/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
US20200234136A1 (en) | 2020-07-23 |
CN110352297A (en) | 2019-10-18 |
DE112019000020T5 (en) | 2019-10-02 |
DE112019000020B4 (en) | 2020-10-15 |
US10853727B2 (en) | 2020-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110352297B (en) | machine learning device | |
CN110118130B (en) | Control device for internal combustion engine | |
US6755078B2 (en) | Methods and apparatus for estimating the temperature of an exhaust gas recirculation valve coil | |
CN111016920B (en) | Control device and control method of drive device for vehicle, vehicle electronic control unit, learned model and machine learning system | |
US9989029B2 (en) | Method and device for determining a charge air mass flow rate | |
US7174250B2 (en) | Method for determining an exhaust gas recirculation quantity for an internal combustion engine provided with exhaust gas recirculation | |
US10825267B2 (en) | Control system of internal combustion engine, electronic control unit, server, and control method of internal combustion engine | |
CN111476345A (en) | machine learning device | |
US10947909B2 (en) | Control device of internal combustion engine and control method of same and learning model for controlling internal combustion engine and learning method of same | |
CN112412649A (en) | Vehicle control device, vehicle learning system, and vehicle control method | |
CN108571391A (en) | The control device and control method of internal combustion engine | |
CN113392574A (en) | Gasoline engine secondary charging model air inflow estimation method based on neural network model | |
CN110005537B (en) | Control device for internal combustion engine | |
JP6501032B1 (en) | Machine learning device | |
CN109684704B (en) | An online calibration method of engine intake air flow based on velocity density model | |
WO2019151536A1 (en) | Machine learning device | |
JP2020197165A (en) | Abnormality detection system of exhaust gas recirculation system | |
JP2021085335A (en) | Internal combustion engine control device | |
JP2019143477A (en) | Control device of internal combustion engine | |
JP5488520B2 (en) | Control device for internal combustion engine | |
Sidorow et al. | Model based fault diagnosis of the intake and exhaust path of turbocharged diesel engines | |
JP2019148243A (en) | Control device of internal combustion engine | |
JP5601232B2 (en) | Control device for internal combustion engine | |
JP4429355B2 (en) | Recirculation exhaust gas flow rate calculation device | |
JP2022012826A (en) | Machine learning system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20200915 |