
CN110352297B - machine learning device - Google Patents


Info

Publication number
CN110352297B
CN110352297B (Application No. CN201980001105.XA)
Authority
CN
China
Prior art keywords
values
internal combustion
combustion engine
operating parameters
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201980001105.XA
Other languages
Chinese (zh)
Other versions
CN110352297A (en)
Inventor
北川荣来
江原雅人
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toyota Motor Corp
Original Assignee
Toyota Motor Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2018216766A external-priority patent/JP2019135392A/en
Priority claimed from JP2018216850A external-priority patent/JP6501032B1/en
Application filed by Toyota Motor Corp filed Critical Toyota Motor Corp
Priority claimed from PCT/JP2019/004080 external-priority patent/WO2019151536A1/en
Publication of CN110352297A publication Critical patent/CN110352297A/en
Application granted granted Critical
Publication of CN110352297B publication Critical patent/CN110352297B/en
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F02COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
    • F02DCONTROLLING COMBUSTION ENGINES
    • F02D41/00Electrical control of supply of combustible mixture or its constituents
    • F02D41/02Circuit arrangements for generating control signals
    • F02D41/14Introducing closed-loop corrections
    • F02D41/1401Introducing closed-loop corrections characterised by the control or regulation method
    • F02D41/1405Neural network control
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F02COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
    • F02DCONTROLLING COMBUSTION ENGINES
    • F02D41/00Electrical control of supply of combustible mixture or its constituents
    • F02D41/02Circuit arrangements for generating control signals
    • F02D41/14Introducing closed-loop corrections
    • F02D41/1438Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor
    • F02D41/1444Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases
    • F02D41/146Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases the characteristics being an NOx content or concentration
    • F02D41/1461Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases the characteristics being an NOx content or concentration of the exhaust gases emitted by the engine
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0205Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric not using a model or a simulator of the controlled system
    • G05B13/024Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric not using a model or a simulator of the controlled system in which a parameter or coefficient is automatically adjusted to optimise the performance
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B13/00Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
    • G05B13/02Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
    • G05B13/0265Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
    • G05B13/027Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B23/00Testing or monitoring of control systems or parts thereof
    • G05B23/02Electric testing or monitoring
    • G05B23/0205Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults
    • G05B23/0218Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults
    • G05B23/0243Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model
    • G05B23/0254Electric testing or monitoring by means of a monitoring system capable of detecting and responding to faults characterised by the fault detection method dealing with either existing or incipient faults model based detection method, e.g. first-principles knowledge model based on a quantitative model, e.g. mathematical relationships between inputs and outputs; functions: observer, Kalman filter, residual calculation, Neural Networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F02COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
    • F02DCONTROLLING COMBUSTION ENGINES
    • F02D2200/00Input parameters for engine control
    • F02D2200/02Input parameters for engine control the parameters being related to the engine
    • F02D2200/04Engine intake system parameters
    • F02D2200/0414Air temperature
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F02COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
    • F02DCONTROLLING COMBUSTION ENGINES
    • F02D2200/00Input parameters for engine control
    • F02D2200/02Input parameters for engine control the parameters being related to the engine
    • F02D2200/10Parameters related to the engine output, e.g. engine torque or engine speed
    • F02D2200/1002Output torque
    • F02D2200/1004Estimation of the output torque
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F02COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
    • F02DCONTROLLING COMBUSTION ENGINES
    • F02D2200/00Input parameters for engine control
    • F02D2200/02Input parameters for engine control the parameters being related to the engine
    • F02D2200/10Parameters related to the engine output, e.g. engine torque or engine speed
    • F02D2200/101Engine speed
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F02COMBUSTION ENGINES; HOT-GAS OR COMBUSTION-PRODUCT ENGINE PLANTS
    • F02DCONTROLLING COMBUSTION ENGINES
    • F02D41/00Electrical control of supply of combustible mixture or its constituents
    • F02D41/02Circuit arrangements for generating control signals
    • F02D41/14Introducing closed-loop corrections
    • F02D41/1438Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor
    • F02D41/1444Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases
    • F02D41/1459Introducing closed-loop corrections using means for determining characteristics of the combustion gases; Sensors therefor characterised by the characteristics of the combustion gases the characteristics being a hydrocarbon content or concentration
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B11/00Automatic controllers
    • G05B11/01Automatic controllers electric
    • G05B11/36Automatic controllers electric with provision for obtaining particular characteristics, e.g. proportional, integral, differential
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Automation & Control Theory (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Mechanical Engineering (AREA)
  • Feedback Control In General (AREA)
  • Combined Controls Of Internal Combustion Engines (AREA)

Abstract

Appropriate output values can be obtained even when the values of the operating parameters fall outside a preset range. In a machine learning device that uses a neural network to output an output value corresponding to the values of the operating parameters of a machine, when a value of an operating parameter of the machine falls outside the preset range, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased. Using training data obtained by actual measurement for the newly acquired values of the operating parameters, the weights of the neural network are learned so that the difference between the output value, which varies with the values of the operating parameters, and the training data corresponding to those values becomes small.


Description

Machine learning device

Technical field

The present invention relates to a machine learning device.

Background art

Among control devices for internal combustion engines that use a neural network, the following type of control device is known: based on the values of operating parameters of the engine such as the engine speed and the intake air amount, the weights of the neural network are learned in advance so that the estimated amount of gas drawn into the combustion chamber matches the actual amount of gas drawn into the combustion chamber; then, while the engine is operating, the neural network with the learned weights is used to estimate the amount of gas drawn into the combustion chamber from the values of the engine's operating parameters (see, for example, Patent Document 1).

Prior art literature

Patent literature

Patent Document 1: Japanese Patent Laid-Open No. 2012-112277

Summary of the invention

Problem to be solved by the invention

The range over which the values of a specific class of operating parameter of an internal combustion engine, such as the engine speed, will be used can be anticipated in advance from the type of engine. Accordingly, the weights of the neural network are usually learned in advance, over this anticipated usage range of the operating-parameter values, so that the difference between the output value of the neural network and the actual value, such as the actual amount of gas drawn into the combustion chamber, becomes small. In practice, however, the values of the engine's operating parameters sometimes fall outside the anticipated usage range. In that case, because no learning based on actual values has been performed outside that range, the output value computed using the neural network can deviate greatly from the actual value. This problem is not limited to the field of internal combustion engines; it arises for machines in the various fields to which machine learning is applied.

To solve the above problem, according to a first invention, there is provided a machine learning device for using a neural network to output an output value corresponding to the value of an operating parameter of a machine, wherein a range for the value of a specific class of operating parameter of the machine is set in advance, and the number of nodes in a hidden layer of the neural network corresponding to that range is set in advance. When a newly acquired value of the specific class of operating parameter falls outside the preset range, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased, and the weights of the neural network are learned using training data obtained by actual measurement for the newly acquired value of the operating parameter together with training data obtained by actual measurement for values of the operating parameter within the preset range. The neural network with the learned weights is then used to output an output value corresponding to the value of the specific class of operating parameter of the machine.

To solve the above problem, according to a second invention, there is provided a machine learning device for using a neural network to output an output value corresponding to the values of operating parameters of a machine, wherein ranges for the values of a plurality of classes of operating parameters of the machine are set in advance, and the number of nodes in a hidden layer of the neural network corresponding to those ranges is set in advance. When newly acquired values of the plurality of classes of operating parameters fall outside the preset ranges, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased, and the weights of the neural network are learned using training data obtained by actual measurement for the newly acquired values of the operating parameters together with training data obtained by actual measurement for values of the operating parameters within the preset ranges. The neural network with the learned weights is then used to output an output value corresponding to the values of the plurality of classes of operating parameters of the machine.

To solve the above problem, according to a third invention, there is provided a machine learning device for using a neural network to output an output value corresponding to the values of operating parameters of a machine, wherein ranges for the values of a plurality of classes of operating parameters of the machine are set in advance, and a neural network corresponding to those ranges is formed in advance. When, among newly acquired values of the plurality of classes of operating parameters, the value of at least one class falls outside its preset range, a new neural network is formed, and the weights of the new neural network are learned using training data obtained by actual measurement for the newly acquired values of the operating parameters. The neural network with the learned weights is then used to output an output value corresponding to the values of the plurality of classes of operating parameters of the machine.
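The expansion step shared by the first and second inventions (adding nodes to the hidden layer immediately preceding the output layer when an out-of-range parameter value is acquired) can be sketched as follows. This is an illustrative NumPy sketch, not the patent's actual implementation; the function name `expand_last_hidden`, the matrix layout, and the preset engine-speed range are all assumptions made for the example. Initializing the outgoing weights of the added nodes to zero means the expanded network initially computes exactly the same output as before, so the previously learned behavior is preserved until retraining begins.

```python
import numpy as np

def expand_last_hidden(W_h, b_h, W_o, extra, rng):
    """Add `extra` nodes to the hidden layer that feeds the output layer,
    keeping all previously learned weights intact."""
    # small random incoming weights for the new hidden nodes
    new_in = rng.normal(0.0, 0.1, size=(W_h.shape[0], extra))
    W_h = np.hstack([W_h, new_in])
    b_h = np.concatenate([b_h, np.zeros(extra)])
    # zero outgoing weights: the new nodes contribute nothing yet, so the
    # network output is unchanged until the weights are relearned
    W_o = np.vstack([W_o, np.zeros((extra, W_o.shape[1]))])
    return W_h, b_h, W_o

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, W_h, b_h, W_o):
    return sigmoid(x @ W_h + b_h) @ W_o

rng = np.random.default_rng(0)
W_h = rng.normal(size=(1, 3)); b_h = np.zeros(3)  # 1 input (e.g. engine speed) -> 3 hidden nodes
W_o = rng.normal(size=(3, 1))                     # 3 hidden nodes -> 1 output node

N_RANGE = (800.0, 6000.0)  # assumed preset engine-speed range [rpm]
n_new = 6500.0             # newly acquired value, outside the preset range
if not (N_RANGE[0] <= n_new <= N_RANGE[1]):
    W_h, b_h, W_o = expand_last_hidden(W_h, b_h, W_o, extra=2, rng=rng)
```

After expansion, the weights would be relearned using the newly measured training data together with the training data from within the preset range, as the text above describes.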

Effect of the invention

In each of the inventions described above, when a newly acquired value of an operating parameter of the machine falls outside the preset range, increasing the number of nodes in a hidden layer of the neural network, or creating a new neural network, prevents the output value computed using the neural network from deviating greatly from the actual value when the value of the operating parameter goes outside the preset range.
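The learning referred to above (adjusting the weights so that the difference between the network's output and the measured training data becomes small) is ordinary gradient descent on a squared-error loss. The following is a minimal sketch for a single-hidden-layer network with sigmoid activation and an identity output node, matching the network form used in the embodiments below; the function names, learning rate, and epoch count are illustrative assumptions, not values from the patent.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train(X, t, W_h, b_h, W_o, lr=0.1, epochs=2000):
    """Gradient descent on the mean squared error between the network
    output y and the training data t (one sigmoid hidden layer, identity
    output node)."""
    n = len(X)
    for _ in range(epochs):
        z = sigmoid(X @ W_h + b_h)          # hidden-layer outputs
        y = z @ W_o                         # identity output node: y = u
        err = y - t                         # difference to be made small
        dz = (err @ W_o.T) * z * (1.0 - z)  # backpropagate through the sigmoid
        W_o -= lr * (z.T @ err) / n
        W_h -= lr * (X.T @ dz) / n
        b_h -= lr * dz.mean(axis=0)
    return W_h, b_h, W_o
```

In the retraining described above, X and t would contain both the training data from within the preset range and the newly measured out-of-range data.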

Brief description of the drawings

FIG. 1 is an overall view of an internal combustion engine.

FIG. 2 is a diagram showing an example of a neural network.

FIGS. 3A and 3B are diagrams showing changes in the value of the sigmoid function σ.

FIGS. 4A and 4B are diagrams showing a neural network and the output values from nodes in a hidden layer, respectively.

FIGS. 5A and 5B are diagrams showing the output values from nodes in a hidden layer and the output values from a node in the output layer, respectively.

FIGS. 6A and 6B are diagrams showing a neural network and the output values from a node in the output layer, respectively.

FIGS. 7A and 7B are diagrams for explaining the problem to be solved by the invention of the present application.

FIGS. 8A and 8B are diagrams showing a neural network and the relationship between the input and output values of the neural network, respectively.

FIG. 9 is a diagram showing a neural network.

FIG. 10 is a flowchart for executing learning processing.

FIG. 11 is a diagram showing a modification of the neural network.

FIG. 12 is a flowchart showing another embodiment for executing learning processing.

FIG. 13 is a diagram showing a neural network.

FIGS. 14A and 14B are diagrams showing preset ranges of the engine speed and the like.

FIG. 15 is a diagram showing a modification of the neural network.

FIG. 16 is a flowchart showing yet another embodiment for executing learning processing.

FIG. 17 is a diagram showing learned divided regions divided according to the values of the operating parameters of the internal combustion engine.

FIGS. 18A, 18B, and 18C are diagrams showing, respectively, the distribution of training data with respect to engine speed and ignition timing, the distribution of training data with respect to ignition timing and throttle opening, and the relationship between the training data and the output values after learning.

FIGS. 19A and 19B are diagrams showing the relationship between the training data and the output values after learning.

FIG. 20 is an overall view of a machine learning device for automatically adjusting an air conditioner.

FIG. 21 is a diagram showing a neural network.

FIGS. 22A and 22B are diagrams showing preset ranges of the air temperature and the like.

FIG. 23 is a flowchart showing yet another embodiment for executing learning processing.

FIGS. 24A and 24B are diagrams showing preset ranges of the air temperature and the like.

FIG. 25 is a flowchart showing yet another embodiment for executing learning processing.

FIG. 26 is an overall view of a machine learning device for estimating the degree of deterioration of a secondary battery.

FIG. 27 is a diagram showing a neural network.

FIGS. 28A and 28B are diagrams showing preset ranges of the air temperature and the like.

FIG. 29 is a flowchart for executing calculation processing.

FIG. 30 is a flowchart for executing training-data acquisition processing.

FIG. 31 is a flowchart showing yet another embodiment for executing learning processing.

FIGS. 32A and 32B are diagrams showing preset ranges of the air temperature and the like.

FIG. 33 is a flowchart showing yet another embodiment for executing learning processing.

Detailed description of embodiments

<Overall structure of the internal combustion engine>

First, a case where the machine learning device of the present invention is applied to an internal combustion engine will be described. Referring to FIG. 1, which shows an overall view of an internal combustion engine, 1 denotes the engine body, 2 denotes the combustion chamber of each cylinder, 3 denotes a spark plug arranged in the combustion chamber 2 of each cylinder, 4 denotes a fuel injection valve for supplying fuel (for example, gasoline) to each cylinder, 5 denotes a surge tank, 6 denotes an intake manifold, and 7 denotes an exhaust manifold. The surge tank 5 is connected via an intake duct 8 to the outlet of the compressor 9a of an exhaust turbocharger 9, and the inlet of the compressor 9a is connected via an intake air amount detector 10 to an air cleaner 11. A throttle valve 12 driven by an actuator 13 is arranged in the intake duct 8, and a throttle opening sensor 14 for detecting the throttle opening is attached to the throttle valve 12. In addition, an intercooler 15 for cooling the intake air flowing through the intake duct 8 is arranged around the intake duct 8.

Meanwhile, the exhaust manifold 7 is connected to the inlet of the exhaust turbine 9b of the exhaust turbocharger 9, and the outlet of the exhaust turbine 9b is connected via an exhaust pipe 16 to a catalytic converter 17 for exhaust purification. The exhaust manifold 7 and the surge tank 5 are connected to each other via an exhaust gas recirculation (hereinafter, EGR) passage 18, and an EGR control valve 19 is arranged in the EGR passage 18. Each fuel injection valve 4 is connected to a fuel distribution pipe 20, which is connected via a fuel pump 21 to a fuel tank 22. An NOx sensor 23 for detecting the NOx concentration in the exhaust gas is arranged in the exhaust pipe 16. In addition, an atmospheric temperature sensor 24 for detecting the atmospheric temperature is arranged in the air cleaner 11.

The electronic control unit 30 is a digital computer comprising a ROM (read-only memory) 32, a RAM (random-access memory) 33, a CPU (microprocessor) 34, an input port 35, and an output port 36, which are connected to one another by a bidirectional bus 31. The output signals of the intake air amount detector 10, the throttle opening sensor 14, the NOx sensor 23, and the atmospheric temperature sensor 24 are input to the input port 35 via corresponding AD converters 37. A load sensor 41 that generates an output voltage proportional to the amount of depression of an accelerator pedal 40 is connected to the accelerator pedal 40, and the output voltage of the load sensor 41 is input to the input port 35 via a corresponding AD converter 37. Furthermore, a crank angle sensor 42 that generates an output pulse each time the crankshaft rotates by, for example, 30° is connected to the input port 35. The CPU 34 calculates the engine speed based on the output signal of the crank angle sensor 42. Meanwhile, the output port 36 is connected via corresponding drive circuits 38 to the spark plugs 3, the fuel injection valves 4, the throttle drive actuator 13, the EGR control valve 19, and the fuel pump 21.

<Outline of the neural network>

In the embodiments of the present invention, a neural network is used to estimate various values representing the performance of the internal combustion engine. FIG. 2 shows an example of this neural network. The circular marks in FIG. 2 represent artificial neurons, which in a neural network are usually called nodes or units (in the present application, they are called nodes). In FIG. 2, L=1 denotes the input layer, L=2 and L=3 denote hidden layers, and L=4 denotes the output layer. Also in FIG. 2, x1 and x2 denote the output values from the nodes of the input layer (L=1), y denotes the output value from the node of the output layer (L=4), z1, z2, and z3 denote the output values from the nodes of the hidden layer (L=2), and z1 and z2 denote the output values from the nodes of the hidden layer (L=3). The number of hidden layers may be one or any other number, and the number of nodes in the input layer and in each hidden layer may also be set to any number. Although FIG. 2 shows a case where the output layer has a single node, the output layer may have two or more nodes.

At each node of the input layer, the input is output as it is. The output values x1 and x2 of the input-layer nodes are input to each node of the hidden layer (L=2), and at each node of the hidden layer (L=2) the total input value u is calculated using the corresponding weights w and bias b. For example, the total input value uk calculated at the node denoted zk (k=1, 2, 3) of the hidden layer (L=2) in FIG. 2 is given by the following equation.

$$u_k = \sum_{m=1}^{2} \left( x_m \cdot w_{km} \right) + b_k$$

Next, this total input value uk is transformed by an activation function f and output from the node denoted zk of the hidden layer (L=2) as the output value zk (= f(uk)). The same applies to the other nodes of the hidden layer (L=2). The output values z1, z2, and z3 of the nodes of the hidden layer (L=2) are in turn input to each node of the hidden layer (L=3), where the total input value u (Σz·w + b) is calculated using the corresponding weights w and bias b. This total input value u is likewise transformed by the activation function and output from each node of the hidden layer (L=3) as the output values z1 and z2. In the embodiments of the present invention, a sigmoid function σ is used as this activation function.

The output values z1 and z2 of the nodes of the hidden layer (L=3) are input to the node of the output layer (L=4), where the total input value u (Σz·w + b) is calculated using the corresponding weights w and bias b, or the total input value u (Σz·w) is calculated using only the corresponding weights w. In the embodiments of the present invention, an identity function is used at the node of the output layer; therefore, the total input value u calculated at the output-layer node is output from that node unchanged as the output value y.
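The forward pass just described (total input u = Σz·w + b at each node, sigmoid hidden layers, identity output) can be sketched in a few lines of Python. This is only an illustrative sketch: the layer sizes follow FIG. 2, but the weight and bias values below are arbitrary and not taken from the patent.

```python
import math

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def forward(x, layers):
    """layers: list of (weights, biases, activation) for each layer after
    the input layer; weights[j][i] connects input i to node j."""
    z = x
    for weights, biases, act in layers:
        # total input u = sum(z*w) + b at each node, then the activation
        z = [act(sum(wi * zi for wi, zi in zip(row, z)) + b)
             for row, b in zip(weights, biases)]
    return z

identity = lambda u: u

# a 2-3-2-1 network as in FIG. 2: two sigmoid hidden layers, identity output
layers = [
    ([[0.5, -0.3], [0.8, 0.1], [-0.2, 0.4]], [0.1, 0.0, -0.1], sigmoid),  # L=2
    ([[0.3, -0.5, 0.2], [0.7, 0.1, -0.4]], [0.0, 0.2], sigmoid),          # L=3
    ([[1.2, -0.9]], [0.0], identity),                                     # L=4
]
y = forward([1.0, 2.0], layers)[0]
```

The identity output node simply passes its total input through, matching the description above.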

<Expression of Functions by a Neural Network>

A neural network can express an arbitrary function; this is briefly explained next. First, the sigmoid function σ used as the activation function is described. The sigmoid function is given by σ(x) = 1/(1 + exp(−x)) and, as shown in FIG. 3A, takes a value between 0 and 1 depending on the value of x. If x is replaced by wx + b, the sigmoid function becomes σ(wx + b) = 1/(1 + exp(−wx − b)). As the value of w is increased, the slope of the curved portion of σ(wx + b) becomes progressively steeper, as shown by the curves σ1, σ2, and σ3 in FIG. 3B; if w is made infinitely large, then, as shown by the curve σ4 in FIG. 3B, σ(wx + b) changes in a step at x = −b/w (the x at which wx + b = 0, that is, at which σ(wx + b) = 0.5). By exploiting this property of the sigmoid function σ, an arbitrary function can be expressed using a neural network.
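The steepening described above is easy to check numerically. A small sketch (the values of w and b are arbitrary illustrative choices):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w, b = 100.0, 2.0
x_step = -b / w                         # the point where w*x + b = 0
mid = sigmoid(w * x_step + b)           # exactly 0.5 at x = -b/w
# for large w the function jumps from near 0 to near 1
# within a tiny neighbourhood of x = -b/w
lo = sigmoid(w * (x_step - 0.1) + b)    # = sigmoid(-10), near 0
hi = sigmoid(w * (x_step + 0.1) + b)    # = sigmoid(10), near 1
```

With w = 100 the transition is already nearly the step of curve σ4 in FIG. 3B.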

For example, a neural network composed of an input layer (L=1) with one node, a hidden layer (L=2) with two nodes, and an output layer (L=3) with one node, as shown in FIG. 4A, can express a function approximating a quadratic function. In this case an arbitrary function can be expressed even if the output layer (L=3) has a plurality of nodes, but for ease of understanding the case where the output layer (L=3) has a single node is described as an example. In the neural network shown in FIG. 4A, the input value x is input to the node of the input layer (L=1), and the input value u = x·w1^(L2) + b1, calculated using the weight w1^(L2) and the bias b1, is input to the node denoted z1 in the hidden layer (L=2). This input value u is transformed by the sigmoid function σ(x·w1^(L2) + b1) and output as the output value z1. Similarly, the input value u = x·w2^(L2) + b2, calculated using the weight w2^(L2) and the bias b2, is input to the node denoted z2 in the hidden layer (L=2); it is transformed by the sigmoid function σ(x·w2^(L2) + b2) and output as the output value z2.

The output values z1 and z2 of the nodes of the hidden layer (L=2) are input to the node of the output layer (L=3), where the total input value u (Σz·w = z1·w1^(y) + z2·w2^(y)) is calculated using the corresponding weights w1^(y) and w2^(y). As described above, in the embodiments of the present invention an identity function is used at the node of the output layer, so the total input value u calculated at the output-layer node is output from that node unchanged as the output value y.

(I) of FIG. 4B shows the output value z1 from the node of the hidden layer (L=2) when the weight w1^(L2) and the bias b1 are set such that the value of the sigmoid function σ(x·w1^(L2) + b1) is substantially zero at x = 0. On the other hand, if, for example, the weight w2^(L2) in the sigmoid function σ(x·w2^(L2) + b2) is given a negative value, the curve of σ(x·w2^(L2) + b2) takes a shape that decreases as x increases, as shown in (II) of FIG. 4B. (II) of FIG. 4B shows the change in the output value z2 from the node of the hidden layer (L=2) when the weight w2^(L2) and the bias b2 are set such that the value of σ(x·w2^(L2) + b2) is substantially zero at x = 0.

(III) of FIG. 4B shows, as a solid line, the sum (z1 + z2) of the output values z1 and z2 from the nodes of the hidden layer (L=2). As shown in FIG. 4A, the output values z1 and z2 are multiplied by the corresponding weights w1^(y) and w2^(y); in (III) of FIG. 4B, the broken line A shows the change in the output value y when w1^(y), w2^(y) > 1 and w1^(y) ≈ w2^(y). Further, in (III) of FIG. 4B, the one-dot chain line B shows the output value y when w1^(y), w2^(y) > 1 and w1^(y) > w2^(y), and the one-dot chain line C shows the output value y when w1^(y), w2^(y) > 1 and w1^(y) < w2^(y). In (III) of FIG. 4B, the shape of the broken line A within the range denoted W represents a curve approximating a quadratic function y = ax² (a is a coefficient); it can thus be seen that a function approximating a quadratic function can be expressed by using the neural network shown in FIG. 4A.

FIG. 5A shows the case where the values of the weights w1^(L2) and w2^(L2) in FIG. 4A are increased so that the value of the sigmoid function σ changes in a step-like manner, as shown in FIG. 3B. (I) of FIG. 5A shows the output value z1 from the node of the hidden layer (L=2) when the weight w1^(L2) and the bias b1 are set such that the value of σ(x·w1^(L2) + b1) increases stepwise at x = −b1/w1^(L2). (II) of FIG. 5A shows the output value z2 from the node of the hidden layer (L=2) when the weight w2^(L2) and the bias b2 are set such that the value of σ(x·w2^(L2) + b2) decreases stepwise at x = −b2/w2^(L2), which is slightly larger than x = −b1/w1^(L2). (III) of FIG. 5A shows, as a solid line, the sum (z1 + z2) of the output values z1 and z2 from the nodes of the hidden layer (L=2). As shown in FIG. 4A, the output values z1 and z2 are multiplied by the corresponding weights w1^(y) and w2^(y); the broken line in (III) of FIG. 5A shows the output value y when w1^(y), w2^(y) > 1.

Thus, in the neural network shown in FIG. 4A, a bar-shaped output value y as shown in (III) of FIG. 5A is obtained from one pair of nodes of the hidden layer (L=2). Therefore, by increasing the number of paired nodes of the hidden layer (L=2) and appropriately setting the values of the weight w and the bias b at each node of the hidden layer (L=2), a function approximating the function y = f(x) shown by the broken curve in FIG. 5B can be expressed. Although the bars are drawn adjoining one another in FIG. 5B, in practice they may partially overlap. Further, since the value of w is not actually infinite, each bar is not an exact bar shape but takes a curved shape like the upper half of the curved portion shown by σ3 in FIG. 3B. Although a detailed description is omitted, if a corresponding pair of nodes is provided in the hidden layer (L=2) for each of two different input values x1 and x2, as shown in FIG. 6A, a columnar output value y corresponding to the input values x1 and x2 is obtained, as shown in FIG. 6B. In this case, if many paired nodes are provided in the hidden layer (L=2) for each of the input values x1 and x2, a plurality of columnar output values y corresponding to different combinations of the input values x1 and x2 are obtained; hence a function representing the relationship between the input values x1 and x2 and the output value y can be expressed. Even when there are three or more different input values x, a function representing the relationship between the input values x and the output value y can likewise be expressed.
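As a numerical illustration of the bar construction above, the sketch below builds a "bump" from a pair of step-like sigmoids and sums one bar per interval, with bar height f(midpoint), to approximate y = x². The bin width and the steepness w = 1000 are arbitrary illustrative choices, not values from the patent.

```python
import math

def sigmoid(x):
    # numerically safe sigmoid (avoids overflow for large negative x)
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def bump(x, left, right, w=1000.0):
    # a pair of step-like sigmoids: ~1 for left < x < right, ~0 elsewhere
    return sigmoid(w * (x - left)) - sigmoid(w * (x - right))

def bar_approx(x, edges, f):
    # one bar per interval, bar height f(midpoint): the bar graph of FIG. 5B
    return sum(f((l + r) / 2) * bump(x, l, r)
               for l, r in zip(edges, edges[1:]))

f = lambda t: t * t
edges = [i / 10 for i in range(-20, 21)]   # bins of width 0.1 over [-2, 2]
y_mid = bar_approx(1.05, edges, f)         # close to f(1.05) = 1.1025
```

Narrower bins (more node pairs) reduce the approximation error, which is the point made in the paragraph above.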

<Learning in the Neural Network>

In the embodiments of the present invention, the error backpropagation method is used to learn the value of each weight w and the value of each bias b in the neural network. Since this error backpropagation method is well known, only its outline is briefly described below. A bias b is a kind of weight w; therefore, in the following description, a bias b is treated as one of the weights w. In the neural network shown in FIG. 2, when the weights in the input values u^(L) to the nodes of each layer L=2, L=3, or L=4 are denoted w^(L), the differential of the error function E with respect to a weight w^(L), that is, the gradient ∂E/∂w^(L), can be rewritten as shown in the following equation.

$$\frac{\partial E}{\partial w^{(L)}} = \frac{\partial E}{\partial u^{(L)}} \cdot \frac{\partial u^{(L)}}{\partial w^{(L)}} \qquad (1)$$

Here,

$$\frac{\partial u^{(L)}}{\partial w^{(L)}} = z^{(L-1)}$$

holds, so if we define

$$\delta^{(L)} \equiv \frac{\partial E}{\partial u^{(L)}}$$

then the above equation (1) can be expressed by the following equation.

$$\frac{\partial E}{\partial w^{(L)}} = \delta^{(L)} \cdot z^{(L-1)} \qquad (2)$$

Here, when u^(L) varies, the error function E varies through the change in the total input values u_k^(L+1) of the next layer; therefore, δ^(L) can be expressed by the following equation (where k = 1, 2, …, K runs over the nodes of layer L+1).

$$\delta^{(L)} = \sum_{k=1}^{K} \frac{\partial E}{\partial u_k^{(L+1)}} \cdot \frac{\partial u_k^{(L+1)}}{\partial u^{(L)}} \qquad (3)$$

Here, if we write z^(L) = f(u^(L)), the input value u_k^(L+1) appearing on the right side of the above equation (3) can be expressed by the following equation.

$$u_k^{(L+1)} = \sum w_k^{(L+1)} \cdot z^{(L)} = \sum w_k^{(L+1)} \cdot f\left(u^{(L)}\right) \qquad (4)$$

Here, the first term

$$\frac{\partial E}{\partial u_k^{(L+1)}}$$

on the right side of the above equation (3) is δ^(L+1), and the second term

$$\frac{\partial u_k^{(L+1)}}{\partial u^{(L)}}$$

on the right side of the above equation (3) can be expressed by the following equation.

$$\frac{\partial u_k^{(L+1)}}{\partial u^{(L)}} = w_k^{(L+1)} \cdot f'\left(u^{(L)}\right) \qquad (5)$$

Therefore, δ^(L) is expressed by the following equation.

$$\delta^{(L)} = \sum_{k=1}^{K} w_k^{(L+1)} \cdot \delta^{(L+1)} \cdot f'\left(u^{(L)}\right)$$

That is,

$$\delta^{(L)} = f'\left(u^{(L)}\right) \cdot \sum_{k=1}^{K} w_k^{(L+1)} \cdot \delta^{(L+1)} \qquad (6)$$

That is, once δ^(L+1) is obtained, δ^(L) can be obtained.

When training data y_t has been obtained for a certain input value and the output value from the output layer for that input value is y, and when the squared error is used as the error function, the squared error E is given by E = (1/2)(y − y_t)². In this case, at the node of the output layer (L=4) in FIG. 2, the output value is y = f(u^(L)); therefore, the value of δ^(L) at the node of the output layer (L=4) is given by the following equation.

$$\delta^{(L)} = \frac{\partial E}{\partial u^{(L)}} = \frac{\partial E}{\partial y} \cdot \frac{\partial y}{\partial u^{(L)}} = (y - y_t) \cdot f'\left(u^{(L)}\right)$$

In the embodiments of the present invention, as described above, f(u^(L)) is an identity function, so f′(u^(L)) = 1. Therefore, δ^(L) = y − y_t, and δ^(L) is obtained.

Once δ^(L) is obtained, δ^(L−1) of the preceding layer is obtained using the above equation (6). The δ of each preceding layer is obtained successively in this way, and using these values of δ, the differential of the error function E with respect to each weight w, that is, the gradient ∂E/∂w^(L), is obtained from the above equation (2). Once the gradient ∂E/∂w^(L) is obtained, the value of the weight w is updated using this gradient so that the value of the error function E decreases. That is, the value of the weight w is learned. When the output layer (L=4) has a plurality of nodes, with the output values from the respective nodes denoted y1, y2, … and the corresponding training data denoted y_t1, y_t2, …, the following sum-of-squares error E is used as the error function E.

$$E = \frac{1}{2} \sum_{k} \left(y_k - y_{tk}\right)^2$$
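Under the update rules derived above, a minimal stochastic-gradient sketch of the 1-2-1 network of FIG. 4A (sigmoid hidden layer, identity output, δ_out = y − y_t per equation δ^(L) = y − y_t, and δ propagated back with σ′(u) = z(1 − z)) looks as follows. The learning rate, epoch count, random seed, and target function y = x² on [−1, 1] are arbitrary illustrative choices.

```python
import math, random

def sigmoid(u):
    # numerically safe sigmoid
    if u >= 0:
        return 1.0 / (1.0 + math.exp(-u))
    e = math.exp(u)
    return e / (1.0 + e)

random.seed(0)
w1 = [random.uniform(-1, 1) for _ in range(2)]   # input -> hidden weights
b1 = [0.0, 0.0]                                  # hidden biases
wy = [random.uniform(-1, 1) for _ in range(2)]   # hidden -> output weights

data = [(i / 10, (i / 10) ** 2) for i in range(-10, 11)]  # y = x^2 on [-1, 1]

def predict(x):
    z = [sigmoid(w1[k] * x + b1[k]) for k in range(2)]
    return sum(wy[k] * z[k] for k in range(2)), z          # identity output

def mse():
    return sum((predict(x)[0] - yt) ** 2 for x, yt in data) / len(data)

mse_before = mse()
eta = 0.05
for _ in range(2000):
    for x, yt in data:
        y, z = predict(x)
        d_out = y - yt                                 # delta at output, f' = 1
        for k in range(2):
            d_hid = wy[k] * d_out * z[k] * (1 - z[k])  # eq. (6), sigma' = z(1-z)
            wy[k] -= eta * d_out * z[k]                # dE/dwy = delta_out * z
            w1[k] -= eta * d_hid * x                   # dE/dw1 = delta_hid * x
            b1[k] -= eta * d_hid
mse_after = mse()
```

After training, the error over the training range is much smaller than at the random initialization, mirroring the fit shown within range R in FIG. 7A.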

<Embodiments of the Present Invention>

Next, a first embodiment of the machine learning device of the present invention will be described with reference to FIGS. 7A to 10. In the first embodiment of the present invention, a neural network comprising one input layer (L=1), a hidden layer (L=2) consisting of a single layer, and one output layer (L=3), as shown in FIG. 4A, is used. Further, this first embodiment shows a case where the weights of the neural network were learned, using the neural network shown in FIG. 4A, such that the output value y is represented by a quadratic function of the input value x. In FIGS. 7A to 8B, the broken lines show the waveform of the true quadratic function, the filled circles show the training data, the open circles show the output values y after the weights of the neural network were learned so that the difference between the output value y corresponding to an input value x and the training data becomes small, and the solid curves show the relationship between the input value x and the output value y after the learning is completed. Further, in FIGS. 7A to 8B, the interval between A and B, denoted R, represents the preset range of the input value x.

FIGS. 7A and 7B are views for explaining the problem to be solved by the present invention; therefore, first, this problem will be described with reference to FIGS. 7A and 7B. FIG. 7A shows the case where, using a neural network in which the number of nodes of the hidden layer (L=2) is two, as shown in FIG. 4A, the weights of the neural network were learned for input values x within the preset range R such that the output y becomes a quadratic function y = ax² (a is a constant) of the input value x. As shown in FIG. 7A, even when the hidden layer (L=2) of the neural network has only two nodes, a function close to a quadratic function is expressed, as shown by the solid line, as long as the input value x is within the preset range R.

That is, when learning has been performed over the preset range R of the input value x, the output value y is expressed, within the preset range R, as a function close to a quadratic function by a suitable combination of the curved portions of a plurality of sigmoid functions σ. Outside the preset range R of the input value x, however, no learning has been performed, so the straight-line portions at both ends of the curved portion where the sigmoid function σ changes greatly appear directly as the output value y, as shown by the solid line. Therefore, the output value y after learning appears, as shown by the solid line in FIG. 7A, in the form of a function close to a quadratic function within the preset range R of the input value x, and in a form close to a straight line that hardly changes with the input value x outside the preset range R. Consequently, as shown in FIG. 7A, outside the preset range R of the input value x, the output value y deviates greatly from the quadratic curve shown by the broken line.

On the other hand, FIG. 7B shows the case where, when the input value x falls outside the preset range R, as indicated by x0 in FIG. 7B, for example, the output value y0 for the input value x = x0 is also included in the training data and the weights of the neural network are learned. When learning is performed including the output value y0 outside the preset range R of the input value x in this way, the straight-line portion of the sigmoid function σ denoted z1 in FIG. 4B where z1 = 1 rises so as to include the output value y0, the sigmoid function σ denoted z2 in FIG. 4B shifts to the right as a whole, and the value of that sigmoid function σ becomes lower as a whole. Therefore, as shown by the solid line in FIG. 7B, the output value y after learning deviates greatly from the quadratic curve within the preset range R. Thus, when the input value x falls outside the preset range R, an appropriate output value y cannot be obtained.

It was found, however, that in this case an appropriate output value y can be obtained even when the input value x falls outside the preset range R, provided the number of nodes of the hidden layer (L=2) of the neural network is increased. This will now be described with reference to FIGS. 8A and 8B, which show the first embodiment of the present invention. FIG. 8B shows the learning result when the output value y0 for the input value x = x0 is also included in the training data and the weights of the neural network are learned in a state where the number of nodes of the hidden layer (L=2) of the neural network has been increased from two to three, as shown in FIG. 8A. When the number of nodes of the hidden layer (L=2) of the neural network is increased in this way, the output value y overlaps the quadratic curve shown by the broken line, as shown by the solid line in FIG. 8B. Therefore, as shown in FIG. 8B, it can be seen that an appropriate output value y can be obtained by increasing the number of nodes of the hidden layer (L=2) of the neural network, even when the input value x falls outside the previously assumed use range R. Accordingly, in the first embodiment of the present invention, when the input value x falls outside the preset range R, the number of nodes of the hidden layer (L=2) of the neural network is increased.

Next, a specific example of the input value x and the output value y shown in FIGS. 7A to 8B will be described. In the field of internal combustion engines, when the value of a specific type of operating parameter related to the internal combustion engine is taken as the input value x, the actual output y sometimes takes the form of a quadratic function of the input value x. One example of such a case is where the input value x, which is the value of a specific type of operating parameter related to the internal combustion engine, is the engine speed N (rpm) and the output y is the exhaust loss amount. In this case, the use range of the engine speed N is determined correspondingly when the internal combustion engine is determined; therefore, the range of the engine speed N is set in advance. On the other hand, the exhaust loss amount represents the thermal energy discharged from the combustion chamber of the internal combustion engine, and is proportional to the amount of exhaust gas discharged from the combustion chamber and to the temperature difference between the exhaust gas discharged from the combustion chamber and the outside air. The exhaust loss amount is calculated based on detected values, such as the gas temperature, obtained when the internal combustion engine is actually operated; therefore, the calculated exhaust loss amount represents a value obtained by actual measurement.

In this specific example, when the input value x, that is, the engine speed N, is within the preset range R, the weights of the neural network are learned, using training data obtained by actual measurement, so that the difference between the output value y and the training data corresponding to the input value x becomes small. That is, when the value of the specific type of operating parameter related to the internal combustion engine is within the preset range R, the weights of the neural network are learned, using the training data obtained by actual measurement, so that the difference between the output value y and the training data corresponding to that value becomes small. On the other hand, when the input value x, that is, the engine speed N, is outside the preset range, the number of nodes of the hidden layer of the neural network is increased, and the weights of the neural network are learned, using training data obtained by actual measurement for the newly acquired input value x, that is, the engine speed N, so that the difference between the output value y and the training data corresponding to the input value x becomes small. That is, when the value of the specific type of operating parameter related to the internal combustion engine is outside the preset range, the number of nodes of the hidden layer of the neural network is increased, and the weights of the neural network are learned, using training data obtained by actual measurement for the newly acquired value of the specific type of operating parameter, so that the difference between the output value y and the training data corresponding to that value becomes small. Therefore, in this case, the exhaust loss amount can be estimated relatively accurately even when the engine speed N becomes higher than the preset range R.

The first embodiment of the present invention can also be applied to a neural network having a plurality of hidden layers (L=2 and L=3), as shown in FIG. 9. In the neural network shown in FIG. 9, the form of the function output from the output layer (L=4) is determined by the output values z1 and z2 of the nodes of the hidden layer (L=3) immediately preceding the output layer (L=4). That is, what kind of function the output value y can express is governed by the number of nodes of the hidden layer (L=3) immediately preceding the output layer (L=4). Therefore, in the neural network shown in FIG. 9, when the number of hidden-layer nodes is increased, the number of nodes of the hidden layer (L=3) immediately preceding the output layer (L=4) is increased, as shown in FIG. 9.

In the first embodiment described above, the exhaust loss amounts actually measured for various input values x within the preset range R are obtained in advance as training data; that is, training data are obtained in advance by actual measurement for values, within the preset range R, of the specific type of operating parameter related to the internal combustion engine. The structure of the neural network is determined from these values of the specific type of operating parameter and the training data, and the weights of the neural network are learned in advance so that the difference between the output value y and the training data corresponding to the value of the specific type of operating parameter becomes small. The training data obtained in advance by actual measurement for values of the specific type of operating parameter within the preset range R are stored in the storage unit of the electronic control unit 30. In this first embodiment, a neural network having the same structure as the neural network used in the prior learning is used and, using the weights of the neural network at the time the prior learning was completed, learning is continued on board while the vehicle is running. FIG. 10 shows the learning processing routine of the first embodiment carried out on board. The learning processing routine shown in FIG. 10 is executed by interruption at regular intervals (for example, every second).

Referring to FIG. 10, first, in step 101, the learned weights stored in the storage unit of the electronic control unit 30, the training data used in the prior learning, that is, the training data obtained in advance by actual measurement for the values of the specific type of operating parameter of the internal combustion engine within the preset range R, and the values A and B defining the range R of the input data, that is, the preset range of the values of that operating parameter, are read in. The learned weights are used as the initial values of the weights. Next, in step 102, the number K of nodes of the hidden layer preceding the output layer of the neural network used in the prior learning is read in. The routine then proceeds to step 103, where a new input value x, that is, a new value of the specific type of operating parameter of the engine, is acquired and stored in the storage unit of the electronic control unit 30. Furthermore, in step 103, the actually measured exhaust loss amount for the new input value x is stored in the storage unit of the electronic control unit 30 as training data. That is, in step 103, the training data obtained by actual measurement for the newly acquired operating-parameter value are stored in the storage unit of the electronic control unit 30.

Next, in step 104, it is determined whether the new input value x, that is, the newly acquired value of the specific type of operating parameter of the engine, lies between A and B, which define the preset range R, in other words, whether the new input value x is greater than or equal to A and less than or equal to B. If the new input value x lies between A and B, the routine proceeds to step 105, where the input value x is fed to the node of the input layer of the neural network, and the weights of the neural network are learned by the error backpropagation method, based on the output value y produced at the node of the output layer and the training data obtained by actual measurement for the newly acquired operating-parameter value, so that the difference between the output value y and the training data becomes small.
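The in-range branch of steps 103 to 105 can be sketched as follows. This is a minimal illustrative implementation for a single operating parameter; the range [A, B], the node count K, the sigmoid hidden activation, the learning rate, and all variable names are assumptions for the example, not values from the patent.

```python
import numpy as np

# Preset range R of the operating parameter (illustrative values)
A, B = 200.0, 2400.0

def in_range(x):
    # step 104: is the new input value inside the preset range R = [A, B]?
    return A <= x <= B

rng = np.random.default_rng(0)
K = 7                                        # nodes in the hidden layer before the output
w1 = rng.normal(size=(K, 1)); b1 = np.zeros((K, 1))
w2 = rng.normal(size=(1, K)); b2 = np.zeros((1, 1))

def train_step(x, y_t, lr=0.1):
    """Step 105: one error-backpropagation update shrinking |y - y_t|."""
    global w1, b1, w2, b2
    xs = (x - A) / (B - A)                   # normalize the input
    u = w1 * xs + b1
    z = 1.0 / (1.0 + np.exp(-u))             # sigmoid hidden layer
    y = float(w2 @ z + b2)                   # identity activation at the output node
    delta = y - y_t                          # dE/dy for E = (y - y_t)**2 / 2
    dz = (w2.T * delta) * z * (1.0 - z)      # backpropagated hidden-layer delta
    w2 -= lr * delta * z.T; b2 -= lr * delta
    w1 -= lr * dz * xs;     b1 -= lr * dz
    return y

x_new, y_train = 1200.0, 0.25                # new measurement (illustrative)
errs = [abs(train_step(x_new, y_train) - y_train) for _ in range(200)]
```

Repeated updates on the stored training pair drive the output toward the measured value, which is all step 105 requires of the on-board learning.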

On the other hand, when it is determined in step 104 that the new input value x, that is, the newly acquired value of the specific type of operating parameter of the engine, does not lie between A and B defining the preset range R, the routine proceeds to step 106, where the number K of nodes of the hidden layer preceding the output layer of the neural network is updated so as to increase; in this first embodiment, K is increased by one. Next, in step 107, the neural network is updated so that the number K of nodes of that hidden layer is increased, and the routine then proceeds to step 105. In step 105, the training data newly obtained for the new input value x are added to the existing training data, and the weights of the updated neural network are learned so that the difference between the output value y and the training data becomes small. That is, in step 105, using the training data obtained by actual measurement for the newly acquired operating-parameter value together with the training data obtained in advance by actual measurement for the operating-parameter values within the preset range R, the weights of the updated neural network are learned so that the difference between the output value y, which varies according to the operating-parameter values both inside and outside the preset range, and the training data corresponding to those values becomes small.
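The network-growing operation of steps 106 and 107 can be sketched as below. The names are illustrative, and the initialization of the new node's weights is an assumption (the patent does not specify it); here the new node's output weight starts at zero so the already-learned behavior is preserved until retraining.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 7
W_hid = rng.normal(size=(K, 1))      # weights into the last hidden layer
W_out = rng.normal(size=(1, K))      # weights from it into the output node

def add_hidden_node(W_hid, W_out):
    # steps 106/107: K <- K + 1; old weights are kept, the new node gets
    # small input weights and a zero output weight so the network's output
    # is unchanged until the next learning pass.
    new_in = rng.normal(scale=0.01, size=(1, W_hid.shape[1]))
    new_out = np.zeros((W_out.shape[0], 1))
    return np.vstack([W_hid, new_in]), np.hstack([W_out, new_out])

def forward(x, W_hid, W_out):
    z = np.tanh(W_hid * x)           # hidden activation (illustrative)
    return float(W_out @ z)          # identity activation at the output

y_before = forward(0.5, W_hid, W_out)
W_hid2, W_out2 = add_hidden_node(W_hid, W_out)
y_after = forward(0.5, W_hid2, W_out2)
```

After the update, step 105 retrains the enlarged network on the combined old and new training data.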

In this case, when the newly acquired value of the specific type of operating parameter of the engine is outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network may instead be increased only once the number of training data obtained by actual measurement for such out-of-range values has reached a certain number of two or more. Thus, in this first embodiment, when the newly acquired value of the operating parameter is outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for such newly acquired values.

Furthermore, when there are a plurality of training data obtained by actual measurement for newly acquired out-of-range values of the operating parameter, the number of nodes of the hidden layer preceding the output layer of the neural network may be increased in accordance with the increase in the density of the training data in the range of operating-parameter values between B and C shown in FIG. 8B. Since B and C in FIG. 8B are, respectively, the minimum and maximum values of that range, the number of nodes of the hidden layer preceding the output layer may, to be precise, be increased in accordance with the increase in the data density obtained by dividing the number of training data by the difference (C − B) between the maximum value C and the minimum value B of the range.
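As a worked example of the density just defined (all numbers are illustrative, not from the patent):

```python
# density = number of training data / (C - B) for the out-of-range interval
B, C = 2400.0, 3000.0        # minimum and maximum of the new parameter range
n_train = 12                 # measured training pairs collected in [B, C]
density = n_train / (C - B)  # 12 / 600 = 0.02 data points per unit of the parameter
```

As more out-of-range measurements accumulate in the same interval, this density grows, and the node count of the last hidden layer is grown with it.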

As shown in FIG. 1, the internal combustion engine used in the embodiments of the present invention comprises an electronic control unit 30, which comprises a parameter value acquisition unit that acquires the values of the operating parameters of the engine, a calculation unit that performs calculations using a neural network comprising an input layer, hidden layers, and an output layer, and a storage unit. Here, the input port 35 shown in FIG. 1 constitutes the parameter value acquisition unit, the CPU 34 constitutes the calculation unit, and the ROM 32 and RAM 33 constitute the storage unit. In the CPU 34, that is, the calculation unit, the values of the operating parameters of the engine are input to the input layer, and output values that vary according to those values are output from the output layer. The range R preset for the values of the specific type of operating parameter is stored in advance in the ROM 32, that is, in the storage unit, while the learned weights and the training data obtained in advance by actual measurement for the operating-parameter values within the preset range R are stored in the RAM 33, that is, in the storage unit.

FIG. 11 shows a modification of the neural network used in the first embodiment of the present invention. In this modification, the output layer (L=4) has two nodes.

In this modification, as in the example shown in FIG. 9, the input value x is the engine speed N (rpm). On the other hand, one output y1 is, as in FIG. 9, the exhaust loss amount, while the other output y2 is some quantity that is a quadratic function of the input value x, for example the fuel consumption rate. In this modification too, the identity function is used as the activation function at each node of the output layer (L=4). When the output layer (L=4) has a plurality of nodes in this way, the sum-of-squares error E of the preceding equation (8) (with output values y1, y2, … from the nodes and corresponding training data yt1, yt2, …) is used as the error function E, as described earlier. In this case, as follows from the preceding equation (7), partially differentiating the sum-of-squares error E with respect to y1 for the one node gives

∂E/∂y1 = y1 − yt1

so the value of δ(L) at that node is δ(L) = y1 − yt1, while partially differentiating the sum-of-squares error E with respect to y2 for the other node gives

∂E/∂y2 = y2 − yt2

so the value of δ(L) at the other node is δ(L) = y2 − yt2. Once δ(L) has been obtained for each node of the output layer (L=4), δ(L−1) of the preceding layer is obtained using the preceding equation (6). In this way the δ of each preceding layer is obtained in turn, and using these values of δ the gradient of the error function E with respect to each weight w, that is, the derivative ∂E/∂w, is obtained from the preceding equation (2). Once the gradient ∂E/∂w has been obtained, the value of each weight w is updated using this gradient so that the value of the error function E decreases.
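The two-output case can be checked numerically. With identity activations at the output layer and E = (1/2)·Σk(yk − ytk)², the output-layer deltas reduce to δk = yk − ytk; the numbers below are illustrative.

```python
import numpy as np

y = np.array([0.8, 1.5])     # network outputs y1 (exhaust loss), y2 (fuel consumption)
y_t = np.array([1.0, 1.2])   # corresponding training data yt1, yt2
delta_L = y - y_t            # output-layer deltas: [-0.2, 0.3]
E = 0.5 * np.sum((y - y_t) ** 2)   # sum-of-squares error of equation (8)
```

Each δk is then propagated backward through equation (6) exactly as in the single-output case.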

Also in the case where the output layer (L=4) of the neural network has a plurality of nodes as shown in FIG. 11, the form of the functions output from the nodes of the output layer (L=4) is determined by the output values z1 and z2 of the nodes of the hidden layer (L=3) preceding the output layer (L=4). That is, what functions the output values y1 and y2 can represent is governed by the number of nodes of the hidden layer (L=3) preceding the output layer (L=4). Therefore, when increasing the number of hidden-layer nodes in the neural network of FIG. 11, it is the number of nodes of the hidden layer (L=3) preceding the output layer (L=4) that is increased, as shown in FIG. 11.

FIGS. 12 to 14B show a second embodiment of the machine learning device of the present invention. In this second embodiment, the operating parameters of the internal combustion engine consist of a plurality of types of operating parameters, and the weights of the neural network are learned based on the values of these plural types of operating parameters. As a concrete example, consider the case where the operating parameters of the engine consist of the engine speed, the accelerator opening (the amount of depression of the accelerator pedal), and the outside air temperature, and a neural network model is created that estimates the output torque of the engine based on the values of these operating parameters. In this example, as shown in FIG. 13, the input layer (L=1) of the neural network consists of three nodes, to which are input the input value x1 representing the engine speed, the input value x2 representing the accelerator opening, and the input value x3 representing the outside air temperature. The number of hidden layers (L=2, L=3) can be one or any other number, and the number of nodes of the hidden layers (L=2, L=3) can also be set to any number. In the example shown in FIG. 13, the number of nodes of the output layer (L=4) is set to one.
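The network of FIG. 13 can be sketched as below. Only the three inputs (engine speed, accelerator opening, outside air temperature) and the single torque output come from the text; the layer sizes, the tanh hidden activation, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
sizes = [3, 8, 8, 1]                   # layers L = 1 .. 4 (hidden sizes assumed)
Ws = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros((m, 1)) for m in sizes[1:]]

def forward(x1, x2, x3):
    # x1: engine speed, x2: accelerator opening, x3: outside air temperature
    a = np.array([[x1], [x2], [x3]], dtype=float)
    for W, b in zip(Ws[:-1], bs[:-1]):
        a = np.tanh(W @ a + b)         # hidden layers
    return float(Ws[-1] @ a + bs[-1])  # identity activation at the output node

torque_est = forward(0.5, 0.3, 0.7)    # estimated output torque (untrained here)
```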

On the other hand, in FIG. 14A, the interval R1 between A1 and B1 represents the preset range of the engine speed, the interval R2 between A2 and B2 represents the preset range of the accelerator opening, and the interval R3 between A3 and B3 represents the preset range of the outside air temperature. Likewise in FIG. 14B, the interval between A1 and B1 represents the preset range of the engine speed, the interval between A2 and B2 the preset range of the accelerator opening, and the interval between A3 and B3 the preset range of the outside air temperature. In this second embodiment, the accelerator opening is detected by the load sensor 41, and the outside air temperature is detected by the atmospheric temperature sensor 24. Furthermore, in this second embodiment, the output torque of the engine is actually measured, for example by a torque sensor attached to the engine crankshaft, and the torque obtained by this actual measurement is used as the training data.

In this second embodiment too, the engine output torques actually measured for the various input values xn (n = 1, 2, 3) within the preset ranges Rn are obtained in advance as training data. That is, training data are obtained in advance by actual measurement for the values of the plural types of operating parameters of the internal combustion engine within the preset ranges Rn, the structure of the neural network is determined from these operating-parameter values and the training data, and the weights of the neural network are learned in advance so that the difference between the output value y and the training data corresponding to the operating-parameter values becomes small. The training data thus obtained in advance for the operating-parameter values within the preset ranges Rn are stored in the storage unit of the electronic control unit 30. In this second embodiment too, a neural network having the same structure as the one used in the prior learning is used and, starting from the weights obtained when that learning was completed, learning is continued on board while the vehicle is running. FIG. 12 shows the learning processing routine of the second embodiment performed on board; this routine is executed by interruption at regular intervals (for example, every second).

Referring to FIG. 12, first, in step 201, the learned weights stored in the storage unit of the electronic control unit 30, the training data used in the prior learning, that is, the training data obtained in advance by actual measurement for the values of the plural types of operating parameters of the internal combustion engine within the preset ranges Rn, and the values An, Bn (n = 1, 2, 3) defining the ranges of the input data, that is, the preset ranges of the values of those operating parameters (FIG. 14A), are read in. The learned weights are used as the initial values of the weights. Next, in step 202, the number K of nodes of the hidden layer preceding the output layer of the neural network used in the prior learning is read in. The routine then proceeds to step 203, where new input values x, that is, new values of the plural types of operating parameters of the engine, are acquired and stored in the storage unit of the electronic control unit 30. Furthermore, in step 203, the actually measured engine output torque for the new input values x is stored in the storage unit of the electronic control unit 30 as training data. That is, in step 203, the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters are stored in the storage unit of the electronic control unit 30.

Next, in step 204, it is determined whether each new input value xn, that is, each newly acquired value of the plural types of operating parameters of the engine, lies within its preset range Rn (between An and Bn), in other words, whether each new input value xn is greater than or equal to An and less than or equal to Bn. If the new input values xn lie within the preset ranges Rn, the routine proceeds to step 205, where the input values xn, that is, the newly acquired operating-parameter values, are fed to the corresponding nodes of the input layer of the neural network, and the weights of the neural network are learned by the error backpropagation method, based on the output value y produced at the node of the output layer and the training data obtained by actual measurement for the newly acquired operating-parameter values, so that the difference between the output value y and the training data becomes small.

On the other hand, when it is determined in step 204 that at least one of the new input values xn, that is, the newly acquired values of the plural types of operating parameters of the engine, does not lie within its preset range Rn (between An and Bn), for example when the input value x1 representing the engine speed lies within the range (B1 to C1), where B1 < C1, shown in FIG. 14B, or when the input value x3 representing the outside air temperature lies within the range (C3 to A3), where C3 < A3, shown in FIG. 14B, the routine proceeds to step 206. In step 206, first, the density D of the training data within the range (Bn to Cn) or (Cn to An) to which the new input value xn belongs is calculated (D = number of training data / (Cn − Bn), or number of training data / (An − Cn)).

In FIG. 14B, B1 and C1 are, respectively, the minimum and maximum values of the range of engine speeds in question, that is, the minimum and maximum values of a range of operating-parameter values, and the training data density D is the value obtained by dividing the number of training data by the difference (C1 − B1) between the maximum value C1 and the minimum value B1 of that range. Similarly, in FIG. 14B, C3 and A3 are, respectively, the minimum and maximum values of the range of outside air temperatures in question, and the training data density D is the value obtained by dividing the number of training data by the difference (A3 − C3) between the maximum value A3 and the minimum value C3 of that range. In step 206, once the training data density D has been calculated, it is determined whether D has become higher than a predetermined data density D0. If the training data density D is lower than the predetermined data density D0, the processing cycle ends.
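The density test of step 206 amounts to the following check (numbers and the threshold D0 are illustrative): the network is only grown once enough out-of-range data has accumulated.

```python
def data_density(n_train, lo, hi):
    # D = number of training data / (max - min) of the out-of-range interval
    return n_train / (hi - lo)

B1, C1 = 2400.0, 3000.0      # engine-speed interval the new inputs fell into
D0 = 0.01                    # predetermined density threshold
D = data_density(9, B1, C1)  # 9 / 600 = 0.015
grow = D > D0                # True: proceed to step 207; False: end the cycle
```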

On the other hand, when it is determined in step 206 that the training data density D has become higher than the predetermined data density D0, the routine proceeds to step 207. In this case, when D (= number of training data / (An − Cn)) > D0, the number of additional nodes α is calculated by the following equation.

Number of additional nodes α = round{(K / (Bn − An)) · (An − Cn)}

On the other hand, when D (= number of training data / (Cn − Bn)) > D0, the number of additional nodes α is calculated by the following equation.

Number of additional nodes α = round{(K / (Bn − An)) · (Cn − Bn)}

In the above equations, K is the number of nodes, and "round" means rounding to the nearest integer.
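A worked example of the second equation, with illustrative numbers: the number of nodes added is proportional to how far the new data extends beyond the preset range, scaled by the node density K / (Bn − An) of the existing range.

```python
K = 10                                        # current nodes in the last hidden layer
An, Bn, Cn = 800.0, 2400.0, 3000.0            # preset range [An, Bn], new data up to Cn
alpha = round((K / (Bn - An)) * (Cn - Bn))    # round(10/1600 * 600) = round(3.75) = 4
K_new = K + alpha                             # step 208: K <- K + alpha
```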

Once the number of additional nodes α has been calculated in step 207, the routine proceeds to step 208, where the number K of nodes of the hidden layer preceding the output layer of the neural network is updated and increased by the number of additional nodes α (K ← K + α). Thus, in this second embodiment, when the data density obtained by dividing the number of training data by the difference between the maximum and minimum values of a range of operating-parameter values increases, the number of nodes of the hidden layer preceding the output layer of the neural network is increased; that is, the number of nodes of that hidden layer is increased in accordance with the increase in this data density.

On the other hand, as described above, the routine proceeds from step 206 to step 207 when the training data density D has reached the predetermined data density D0; consequently, the values of (An − Cn) and (Cn − Bn) used in step 207 for calculating the number of additional nodes α are proportional to the number of training data. It therefore follows from the above equations that the number of additional nodes α is proportional to the number of training data in the range (Bn to Cn) or (Cn to An) to which the new input value xn belongs. That is, in this second embodiment, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters of the engine.

When the number K of nodes of the hidden layer preceding the output layer has been increased by the number of additional nodes α in step 208 (K ← K + α), the routine proceeds to step 209, where the neural network is updated so that the number K of nodes of that hidden layer is increased. The routine then proceeds to step 205, where the training data newly obtained for the new input values x are added to the existing training data and the weights of the updated neural network are learned so that the difference between the output value y and the training data becomes small. That is, in step 205, using the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters of the engine together with the training data obtained in advance by actual measurement for the operating-parameter values within the preset ranges Rn, the weights of the updated neural network are learned so that the difference between the output value y, which varies according to the operating-parameter values both inside and outside the preset ranges, and the training data corresponding to those values becomes small.

In the second embodiment of the present invention, the values An and Bn defining the ranges Rn preset for the values of the plural types of operating parameters of the engine are stored in advance in the ROM 32, that is, in the storage unit. In addition, the learned weights and the training data obtained in advance by actual measurement for the operating-parameter values within the preset ranges Rn are stored in the RAM 33, that is, in the storage unit.

FIG. 15 shows a modification of the neural network used in the second embodiment of the present invention. In this modification, the output layer (L=4) has two nodes.

In this modification, as in the example shown in FIG. 13, the input value x1 is the engine speed, the input value x2 the accelerator opening, and the input value x3 the outside air temperature. On the other hand, one output y1 is, as in FIG. 13, the output torque of the engine, while the other output y2 is the thermal efficiency of the engine. This thermal efficiency is calculated from the detected values of the engine speed, engine load, intake air pressure, intake air temperature, exhaust gas pressure, exhaust gas temperature, engine cooling water temperature, and the like when the engine is actually operated, and therefore represents a value obtained by actual measurement. In this modification too, the identity function is used as the activation function at each node of the output layer (L=4). Furthermore, in this modification, the sum-of-squares error E of the preceding equation (8), with the actually measured output torque and the actually measured thermal efficiency as the training data, is used as the error function E, and the value of each weight w is updated by the error backpropagation method so that the value of the error function E decreases.

Even when the output layer (L=4) of the neural network has a plurality of nodes as shown in FIG. 15, the form of the function output from each node of the output layer (L=4) is determined by the output values z1, z2, z3, z4 of the nodes of the hidden layer (L=3) immediately preceding the output layer (L=4). That is, what kind of function each output value y1, y2 can express is governed by the number of nodes of the hidden layer (L=3) immediately preceding the output layer (L=4). Therefore, in the neural network shown in FIG. 15, when the number of hidden-layer nodes is increased, it is the number of nodes of the hidden layer (L=3) immediately preceding the output layer (L=4) that is increased.

FIG. 16 and FIG. 17 show a third embodiment of the machine learning device of the present invention. Also in this third embodiment, the operating parameters related to the internal combustion engine comprise a plurality of types of operating parameters, and the weights of the neural network are learned based on the values of these plurality of types of engine operating parameters. Also in this third embodiment, a range of values is preset for each of the plurality of types of engine operating parameters. FIG. 17 shows, as an example, a case where the engine operating parameters consist of two types of operating parameters; in FIG. 17, the preset range of values of one type of operating parameter is denoted by Rx, and the preset range of values of the other type is denoted by Ry. In this third embodiment, as shown in FIG. 17, the preset ranges Rx and Ry of the values of each type of operating parameter are each divided into a plurality of sections, and a plurality of divided regions [Xn, Ym] (n=1, 2 ... n; m=1, 2 ... m), each defined by a combination of the divided sections of the values of the respective types of operating parameters, are set in advance.

In FIG. 17, X1, X2 ... Xn and Y1, Y2 ... Yn denote the divided sections of the values of the respective types of operating parameters. In this third embodiment, as a specific example, the engine operating parameters consist of the engine speed and the outside air temperature, and a neural network model is created that estimates the HC emission amount from the engine based on the values of these operating parameters. In this case, X1, X2 ... Xn denote the engine speed divided, for example, into 1000 rpm sections (1000 rpm ≤ X1 < 2000 rpm, 2000 rpm ≤ X2 < 3000 rpm ...), and Y1, Y2 ... Yn denote the outside air temperature divided, for example, into 10°C sections (-30°C ≤ Y1 < -20°C, -20°C ≤ Y2 < -10°C ...).
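For illustration, the assignment of a pair of operating parameter values to its divided region [Xn, Ym] can be sketched as follows. The 1000 rpm and 10°C section widths follow the example in the text; the range bounds and section counts are assumptions.

```python
# Map an (engine speed, outside air temperature) pair to its divided
# region [Xn, Ym]. Range bounds and section counts below are illustrative.
SPEED_MIN, SPEED_STEP, N_SPEED = 1000, 1000, 5   # X1..X5: 1000-6000 rpm
TEMP_MIN, TEMP_STEP, N_TEMP = -30, 10, 7         # Y1..Y7: -30 to +40 deg C

def region_index(speed_rpm, temp_c):
    """Return (n, m), 1-based, or None if outside the preset ranges Rx, Ry."""
    n = int((speed_rpm - SPEED_MIN) // SPEED_STEP) + 1
    m = int((temp_c - TEMP_MIN) // TEMP_STEP) + 1
    if 1 <= n <= N_SPEED and 1 <= m <= N_TEMP:
        return n, m
    return None   # at least one parameter value is out of range
```

A `None` result corresponds to the case, described later, where a newly acquired parameter value falls outside the preset ranges and an unlearned region must be set.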

In this third embodiment, an independent neural network is created for each divided region [Xn, Ym]. In these neural networks, the input layer (L=1) consists of two nodes, and the input value x1 representing the engine speed and the input value x2 representing the outside air temperature are input to the respective nodes of the input layer (L=1). The number of hidden layers (L=2, L=3) can be set to one or to any number, and the number of nodes of the hidden layers (L=2, L=3) can also be set to any number. In every one of these neural networks, the number of nodes of the output layer (L=4) is set to one. Note that, also in this third embodiment, as a modification, the number of nodes of the output layer (L=4) may be set to two; in this case, for example, the output of one node of the output layer (L=4) is the HC emission amount from the engine, and the output of the other node of the output layer (L=4) is the NOx emission amount from the engine.

In this third embodiment, the number of nodes of the hidden layer (L=3) differs for each neural network. In the following, the number of nodes of the hidden layer immediately preceding the output layer of the neural network for the divided region [Xn, Ym] is denoted by Knm. This number of hidden-layer nodes Knm is set in advance according to the complexity of the change of the training data with respect to the change of the input values within each divided region [Xn, Ym]. Note that in this third embodiment, an HC sensor is arranged in the exhaust passage in place of the NOx sensor 23 shown in FIG. 1. In this third embodiment, the HC emission amount from the engine is actually measured by this HC sensor, and the HC emission amount obtained by this actual measurement is used as training data. In the above-described modification of the third embodiment, an HC sensor is arranged in the exhaust passage in addition to the NOx sensor 23 shown in FIG. 1.

In this third embodiment, the HC emission amounts actually measured for various input values x1, x2 within each divided region [Xn, Ym] (n=1, 2 ... n; m=1, 2 ... m) formed within the preset ranges Rx, Ry of the values of the plurality of types of engine operating parameters are obtained in advance as training data. That is, training data are obtained in advance by actual measurement for the values of the plurality of types of engine operating parameters within the preset ranges Rx, Ry. From these values of the engine operating parameters and the training data, the structure of the neural network for each divided region [Xn, Ym], including the number of hidden-layer nodes Knm, is determined, and the weights of the neural network of each divided region [Xn, Ym] are learned in advance so that the difference between the output value y and the training data corresponding to the values of the plurality of types of engine operating parameters becomes small. Therefore, in this third embodiment, the divided regions [Xn, Ym] (n=1, 2 ... n; m=1, 2 ... m) for which this prior learning has been performed are hereinafter also referred to as learned divided regions [Xn, Ym]. The training data obtained in advance by actual measurement for the values of the plurality of types of engine operating parameters within the preset ranges Rx, Ry are stored in the storage unit of the electronic control unit 30. Also in this third embodiment, for each divided region [Xn, Ym], a neural network with the same structure as the neural network used in the prior learning is used, and, starting from the weights of the neural network at the time the prior learning was completed, learning is continued on board while the vehicle is operating. FIG. 16 shows the learning processing routine of this third embodiment performed on board; this learning processing routine is executed by interruption at fixed intervals (for example, every second).

Referring to FIG. 16, first, in step 301, the learned weights stored in the storage unit of the electronic control unit 30, the training data used in the prior learning, i.e., the training data obtained in advance by actual measurement for the values of the plurality of types of engine operating parameters within the preset ranges Rx, Ry, and the learned divided regions [Xn, Ym] (n=1, 2 ... n; m=1, 2 ... m) are read. These learned weights are used as the initial values of the weights. Next, in step 302, the number Knm of nodes of the hidden layer immediately preceding the output layer used in the prior learning is read for each learned divided region [Xn, Ym]. Next, the routine proceeds to step 303, where new input values x1, x2, i.e., new values of the plurality of types of engine operating parameters, are acquired, and these new input values x1, x2 are stored in the storage unit of the electronic control unit 30. Furthermore, in step 303, the actually measured HC emission amount for the new input values x1, x2 is stored as training data in the storage unit of the electronic control unit 30. That is, in step 303, the training data obtained by actual measurement for the newly acquired values of the plurality of types of engine operating parameters are stored in the storage unit of the electronic control unit 30.

Next, in step 304, it is determined whether the new input values x1, x2 are within a learned divided region [Xn, Ym], i.e., whether the newly acquired values of the plurality of types of engine operating parameters are within the preset ranges Rx, Ry. When the new input values x1, x2 are within a learned divided region [Xn, Ym], i.e., when the newly acquired values of the plurality of types of engine operating parameters are within the preset ranges Rx, Ry, the routine proceeds to step 305, where the input values x1, x2, i.e., the newly acquired values of the plurality of types of engine operating parameters, are input to the respective nodes of the input layer of the neural network of the learned divided region [Xn, Ym] to which the newly acquired values belong. Then, based on the output value y output from the node of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the plurality of types of engine operating parameters, the weights of the neural network of the learned divided region [Xn, Ym] to which the newly acquired values belong are further learned by the error backpropagation method so that the difference between the output value y and the training data becomes small.
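Steps 304 and 305 amount to routing the new sample to the learned divided region it falls in and refining only that region's network. The following is a highly simplified sketch; `RegionNet` is a hypothetical stand-in whose update rule merely mimics the effect of backpropagation (shrinking the squared difference between output and training data).

```python
# Simplified dispatch for steps 304-305: only the network of the region
# containing the new sample is updated. RegionNet is a toy stand-in.
class RegionNet:
    def __init__(self):
        self.bias = 0.0   # stand-in for the region's learned weights

    def predict(self, x1, x2):
        return self.bias

    def train_step(self, x1, x2, target, eta=0.1):
        # gradient step shrinking (y - target)^2, mimicking backprop
        self.bias -= eta * (self.predict(x1, x2) - target)

nets = {(1, 1): RegionNet()}   # one network per learned divided region

def update(region, x1, x2, target):
    if region in nets:                     # step 304: within a learned region?
        for _ in range(50):
            nets[region].train_step(x1, x2, target)   # step 305: refine weights
        return True
    return False                           # steps 306-308 handle unlearned regions

ok = update((1, 1), 2500.0, -15.0, target=0.8)
```

After the update, the region's output approaches the measured training value, while all other regions' networks are untouched.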

On the other hand, when it is determined in step 304 that the new input values x1, x2 are not within any learned divided region [Xn, Ym], for example, in FIG. 17, an unlearned region [Xa, Yb] to which the input values x1, x2 belong, defined by a combination of the preset sections of the input values x1, x2, is set outside the learned divided regions [Xn, Ym]. In other words, when the value of at least one of the newly acquired types of engine operating parameters is outside the preset ranges Rx, Ry, an unlearned region [Xa, Yb] to which the values of the respective types of operating parameters belong, defined by a combination of the preset sections of the values of the respective types of operating parameters, is set outside the preset ranges Rx, Ry.

The example shown in FIG. 17 illustrates a case where the newly acquired value of one type of engine operating parameter is outside the preset range Rx, and the newly acquired value of the other type of engine operating parameter belongs to the divided section Y2 within the preset range Ry. In this case, the unlearned region [Xa, Yb] is set outside the preset range Rx with respect to the value of the one type of operating parameter and within the divided section Y2 to which the value of the other type of operating parameter belongs, adjacent to the learned divided region [Xn, Ym] within this divided section Y2.

When the unlearned region [Xa, Yb] has been set, the routine proceeds to step 306. In step 306, first, the training data density D within the unlearned region [Xa, Yb] to which the new input values x1, x2 belong is calculated. This training data density D (= number of training data / [Xa, Yb]) represents the value obtained by dividing the number of training data by the area of the unlearned region [Xa, Yb], i.e., by the product of the preset section widths of the values of the respective types of operating parameters. Next, it is determined whether the training data density D has become higher than a predetermined data density D0, and whether the variance S2 of the training data within the new region [Xa, Yb] to which the new input values x1, x2 belong is larger than a predetermined variance S20. When the training data density D is lower than the predetermined data density D0, or when the variance S2 of the training data is smaller than the predetermined variance S20, the processing cycle is completed.
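The density and variance test of step 306 can be sketched as follows; the section widths, the thresholds D0 and S20, and the sample values are illustrative assumptions, not values from the embodiment.

```python
# Decide whether the unlearned region [Xa, Yb] has accumulated enough
# training data to justify building a network for it (step 306).
def density_and_variance(samples, width_x, width_y):
    """samples: measured target values (training data) in the region."""
    area = width_x * width_y          # product of the preset section widths
    d = len(samples) / area           # training data density D
    mean = sum(samples) / len(samples)
    s2 = sum((v - mean) ** 2 for v in samples) / len(samples)   # variance S2
    return d, s2

D0, S20 = 0.005, 0.01                 # illustrative thresholds
samples = [0.42, 0.55, 0.38, 0.61, 0.47, 0.52]
d, s2 = density_and_variance(samples, width_x=1000.0, width_y=10.0)
proceed_to_step_307 = (d > D0) and (s2 > S20)
```

With six samples in a 1000 × 10 region the density is only 0.0006, so this example cycle would end without creating a network, exactly as the text describes for a sparse unlearned region.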

On the other hand, when it is determined in step 306 that the training data density D is higher than the predetermined data density D0 and the variance S2 of the training data is larger than the predetermined variance S20, the routine proceeds to step 307. Note that in step 306, the determination of whether the variance S2 is larger than the predetermined variance S20 may be omitted, and only whether the training data density D has become higher than the predetermined data density D0 may be determined. In this case, the processing cycle is completed when the training data density D is lower than the predetermined data density D0, and the routine proceeds to step 307 when it is determined that the training data density D is higher than the predetermined data density D0. In step 307, the number of nodes Kab for the unlearned region [Xa, Yb] is calculated from the average value of the numbers of nodes Knm in the learned divided regions [Xn, Ym] around the unlearned region [Xa, Yb], based on the following node-number calculation formula.

Number of nodes Kab = (1/N) Σ Σ Kij   (i = (a-1) to (a+1), j = (b-1) to (b+1))

In the above formula, N represents the number of learned divided regions [Xn, Ym] existing adjacent to the unlearned region [Xa, Yb]. In this case, when there is a not-yet-used divided region [Xn, Ym] among the adjacent divided regions [Xn, Ym] around the unlearned region [Xa, Yb], i.e., a divided region [Xn, Ym] for which no number of nodes Knm exists, that divided region [Xn, Ym] is excluded from the calculation of the number N. For example, in the example shown in FIG. 17, the average value of the number of nodes Kn1 of the adjacent learned divided region [Xn, Y1], the number of nodes Kn2 of the learned divided region [Xn, Y2], and the number of nodes Kn3 of the learned divided region [Xn, Y3] around the unlearned region [Xa, Yb] is set as the number of nodes Kab for the unlearned region [Xa, Yb].
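The node-number calculation above can be sketched as follows; the region indices and node counts are illustrative, and regions for which no network (and hence no Knm) exists yet are excluded from the average, as described.

```python
# Average the hidden-layer node counts Knm of the learned regions
# adjacent to an unlearned region [Xa, Yb] (step 307).
def average_node_count(knm, a, b):
    """knm: dict mapping (n, m) -> hidden-layer node count of that region."""
    neighbors = [knm.get((i, j))
                 for i in (a - 1, a, a + 1)
                 for j in (b - 1, b, b + 1)
                 if (i, j) != (a, b)]
    counts = [k for k in neighbors if k is not None]   # exclude unused regions
    return round(sum(counts) / len(counts))            # len(counts) plays the role of N

# Illustrative: three learned neighbors with Kn1=4, Kn2=6, Kn3=8 nodes.
knm = {(3, 1): 4, (3, 2): 6, (3, 3): 8}
kab = average_node_count(knm, a=4, b=2)   # region just outside range Rx
```

Here only the three learned neighbors contribute, so N = 3 and Kab = (4 + 6 + 8) / 3 = 6.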

When the relationship between the change of the training data and the change of the input values within each divided region [Xn, Ym] is simple, sufficient learning can be performed even with a small number of hidden-layer nodes Knm; but when this relationship is complex, sufficient learning cannot be performed unless the number of hidden-layer nodes Knm is made large. Therefore, as described above, the number of nodes Knm of the hidden layer of the neural network in each learned divided region [Xn, Ym] is set according to the complexity of the change of the training data with respect to the change of the input values within that region. When two divided regions [Xn, Ym] are close to each other, the relationship between the change of the training data and the change of the input values is similar in these regions; therefore, when two divided regions [Xn, Ym] are close, the same number can be used as the number of hidden-layer nodes Knm. For this reason, in this third embodiment, the average value of the numbers of nodes Knm in the adjacent learned divided regions [Xn, Ym] around the unlearned region [Xa, Yb] is set as the number of nodes Kab for the unlearned region [Xa, Yb].

Here, as a modification of the third embodiment, a method of obtaining the number of nodes Kab for the unlearned region [Xa, Yb] that takes into account the number of training data in the unlearned region [Xa, Yb] is briefly described. That is, when the number of training data in the unlearned region [Xa, Yb] is larger than the number of training data in the adjacent learned divided regions [Xn, Ym] around the unlearned region [Xa, Yb], the number of nodes Kab for the unlearned region [Xa, Yb] is preferably larger than the numbers of nodes Knm in the adjacent learned divided regions [Xn, Ym]. Therefore, in this modification, the average value MD of the numbers of training data in the adjacent learned regions [Xn, Ym] around the unlearned region [Xa, Yb] is obtained, the node-number increase rate RK (= MN/MD) is obtained by dividing the number of input data MN in the unlearned region [Xa, Yb] by the average value MD, and the number of nodes Kab obtained from the above node-number calculation formula is multiplied by this increase rate RK to give the final number of nodes Kab for the unlearned region [Xa, Yb].
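This modification can be sketched as follows; the averaged node count and the data counts are illustrative assumptions.

```python
# Modification of step 307: scale the averaged node count by the ratio
# of training data counts, RK = MN / MD.
def scaled_node_count(kab_avg, mn, neighbor_counts):
    md = sum(neighbor_counts) / len(neighbor_counts)   # average data count MD
    rk = mn / md                                       # node-number increase rate RK
    return max(1, round(kab_avg * rk))                 # final Kab

# Illustrative: averaged Kab = 6, 30 samples in the unlearned region,
# neighbors holding 10, 20 and 30 samples (MD = 20, so RK = 1.5).
kab = scaled_node_count(kab_avg=6, mn=30, neighbor_counts=[10, 20, 30])
```

Because the unlearned region holds more data than its neighbors on average (RK = 1.5 > 1), the final node count is raised from 6 to 9, matching the stated preference for a larger Kab in data-rich unlearned regions.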

When the number of nodes Kab for the unlearned region [Xa, Yb] has been calculated in step 307, the routine proceeds to step 308, where a new neural network for the unlearned region [Xa, Yb] is created. In this new neural network, the number of nodes is set to two for the input layer, Kab for the hidden layer immediately preceding the output layer, and one or more for the output layer. The routine then proceeds to step 305. In step 305, for the unlearned region [Xa, Yb], the weights of the neural network created for the unlearned region [Xa, Yb] are learned so that the difference between the output value y and the training data becomes small.

Next, a specific example in which the machine learning device of the present invention is applied to a special low-load internal combustion engine will be described with reference to FIGS. 18A to 19B. In this specific example, as shown in FIG. 12, a neural network whose hidden layer (L=3) has four nodes is used to create a model that outputs an output value y representing the NOx emission amount from the opening degree of the throttle valve 12, the engine speed, and the ignition timing. In the internal combustion engine used in this specific example, the usage range of the opening degree of the throttle valve 12 is set between 5.5° and 11.5° (with the opening degree of the throttle valve 12 at the maximum closed position taken as 0°), the usage range of the engine speed is set between 1600 rpm and 3000 rpm, and the usage range of the ignition timing is set between 0° (compression top dead center) and 40° ATDC (after compression top dead center).

FIG. 18A shows the distribution of the training data with respect to the ignition timing and the engine speed, and FIG. 18B shows the distribution of the training data with respect to the throttle valve opening and the ignition timing. In FIGS. 18A and 18B, the black circles indicate locations where training data were acquired in advance, and the triangular marks indicate locations where no training data were acquired in advance. From FIGS. 18A and 18B, it can be seen for which throttle valve openings, which engine speeds, and which ignition timings training data were acquired in advance. For example, it can be seen in FIG. 18A that training data were acquired in advance when the engine speed N is 2000 rpm and the ignition timing is ATDC 20°, and, as shown in FIG. 18B, that training data were acquired in advance for various throttle valve openings when the ignition timing is ATDC 20°.

On the other hand, in this specific example, the throttle valve opening, the engine speed, and the ignition timing are input to the respective nodes of the input layer (L=1) of the neural network, and the weights of the neural network are learned so that the difference between the output value y and the training data representing the NOx emission amount detected by the NOx sensor 23 becomes small. The relationship between the learned output value y and the training data is shown in FIGS. 18C, 19A and 19B; note that in FIGS. 18C, 19A and 19B, the learned output value y and the values of the training data are shown normalized so that the maximum value becomes 1.

As described above, in the internal combustion engine used in this specific example, the usage range of the opening degree of the throttle valve 12 is set between 5.5° and 11.5°, the usage range of the engine speed N is set between 1600 rpm and 3000 rpm, and the usage range of the ignition timing is set between 0° (compression top dead center) and 40° ATDC. In FIG. 18C, the NOx emission amounts when the throttle valve opening, the engine speed N, and the ignition timing were used within these usage ranges were acquired in advance as training data, and the relationship between the learned output value y and the training data, when the weights of the neural network were learned so that the difference between the output value y and the previously acquired training data becomes small, is shown by circular marks.

As shown in FIG. 18C, the circular marks representing the relationship between the learned output value y and the training data are concentrated on a straight line, so it can be seen that the learned output value y agrees with the training data. Taking the opening degree of the throttle valve 12 as an example, the opening degree of the throttle valve 12 may deviate from the standard opening degree due to individual differences between engines or changes over time; even though the usage range of the opening degree of the throttle valve 12 is set between 5.5° and 11.5°, the opening degree of the throttle valve 12 may in practice exceed the preset usage range. The triangular marks shown in FIGS. 18A and 18B indicate the locations of training data newly acquired when the opening degree of the throttle valve 12 exceeded the preset usage range and became 13.5°.

The triangular marks in FIG. 18C show the case where, when the opening degree of the throttle valve 12 thus exceeded the preset usage range and became 13.5°, the weights of the neural network were learned using only the previously acquired training data, without using the newly acquired training data. In this case, it can be seen that the estimated value of the NOx emission amount when the opening degree of the throttle valve 12 exceeds the preset usage range and becomes 13.5° deviates greatly from the actually measured value. On the other hand, the circular marks in FIG. 19A show the case where the weights of the neural network were learned using both the new training data acquired when the opening degree of the throttle valve 12 exceeded the preset usage range and became 13.5° and the previously acquired training data. In this case, it can be seen that the estimated values of the NOx emission amount deviate from the actually measured values as a whole.

In contrast, the circular marks in FIG. 19B show the case where, as in FIG. 19A, both the new training data acquired when the opening degree of the throttle valve 12 exceeded the preset usage range and became 13.5° and the previously acquired training data were used, but, unlike FIG. 19A, the weights of the neural network were learned after the number of nodes of the hidden layer (L=3) of the neural network was increased from four to seven. In this case, it can be seen that the estimated values of the NOx emission amount agree with the actually measured values with high accuracy. Thus, when the newly acquired values of the engine operating parameters are outside the preset range, the estimation accuracy can be improved by increasing the number of nodes of the hidden layer immediately preceding the output layer of the neural network.

FIGS. 20 to 25 show a fourth embodiment in which the machine learning device of the present invention is applied to the automatic adjustment of an air conditioner. In this embodiment, the optimum air volume, air direction, and operation time of the air conditioner are automatically set according to the temperature, the humidity, the location, and the size of the room in which the air conditioner is installed. In this case, the conditions and locations under which the air conditioner is used, i.e., the usage ranges of the values of operating parameters such as the temperature, the humidity, the location, and the size of the room in which the air conditioner is installed, can be assumed in advance according to the type of the air conditioner. Therefore, normally, for the preset ranges of the values of the operating parameters of the air conditioner, the weights of the neural network are learned in advance so that the difference between the output values of the neural network and the optimum air volume, air direction, and operation time of the air conditioner becomes small.

In this case, however, the values of the operating parameters of the air conditioner may still fall outside the preset ranges. Since no learning based on actual values has been performed outside the preset ranges, the output values calculated using the neural network would then deviate greatly from the actual values. Therefore, in this embodiment as well, when a newly acquired value of an operating parameter relating to the air conditioner is outside the preset range, the number of nodes of the hidden layer immediately preceding the output layer of the neural network is increased, or the number of neural networks is increased, and the weights of the neural network are learned using both the training data obtained for the newly acquired values of the operating parameters relating to the air conditioner and the training data obtained for the values of the operating parameters relating to the air conditioner within the preset ranges.

Next, this fourth embodiment will be described specifically. Referring to FIG. 20, reference numeral 50 denotes an air conditioner main body, 51 denotes a blower motor arranged in the air conditioner main body 50, 52 denotes an air-direction adjusting motor arranged in the air conditioner main body 50, 53 denotes a thermometer for detecting the air temperature, 54 denotes a hygrometer for detecting the humidity of the atmosphere, 55 denotes a GPS for detecting the installation position of the air conditioner, and 56 denotes an electronic control unit having the same configuration as the electronic control unit 30 shown in FIG. 1. As shown in FIG. 20, the air temperature detected by the thermometer 53, the atmospheric humidity detected by the hygrometer 54, and the position information detected by the GPS 55 are input to the electronic control unit 56, and the electronic control unit 56 outputs a drive signal for the blower motor 51 for obtaining the optimum air volume of the air conditioner and a drive signal for the air-direction adjusting motor 52 for obtaining the optimum air direction of the air conditioner. Note that the size of the room in which the air conditioner is installed is, for example, input manually to the electronic control unit 56.

FIG. 21 shows the neural network used in this fourth embodiment. In this fourth embodiment, as shown in FIG. 21, the input layer (L=1) of the neural network consists of four nodes, to which are input an input value x1 representing the air temperature, an input value x2 representing the humidity, an input value x3 representing the position, and an input value x4 representing the size of the room in which the air conditioner is installed. The number of hidden layers (L=2, L=3) may be one or any other number, and the number of nodes of the hidden layers (L=2, L=3) may also be set to any number. In this fourth embodiment, the output layer (L=4) consists of three nodes, which output an output value y1 representing the air volume of the air conditioner, an output value y2 representing the air direction of the air conditioner, and an output value y3 representing the operation time of the air conditioner.

On the other hand, in FIG. 22A, the interval R1 between A1 and B1 represents a preset range of the air temperature (for example, -5 °C to 40 °C), the interval R2 between A2 and B2 represents a preset range of the humidity (for example, 30% to 90%), the interval R3 between A3 and B3 represents a preset range of the position (for example, between 20 and 46 degrees north latitude), and the interval R4 between A4 and B4 represents a preset range of the size of the room in which the air conditioner is installed. Note that FIG. 22B is the same as FIG. 22A: the interval between A1 and B1 represents the preset range of the air temperature, the interval between A2 and B2 the preset range of the humidity, the interval between A3 and B3 the preset range of the position, and the interval between A4 and B4 the preset range of the size of the room in which the air conditioner is installed.
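The membership test on the preset ranges R1 to R4, used repeatedly in the routines below, can be sketched as follows. This is only an illustrative sketch: the temperature, humidity, and latitude bounds are the example values given above, while the room-size bounds are hypothetical placeholders, since no example is given for R4.

```python
# Sketch of the range check of FIG. 22A. Room-size bounds are hypothetical
# placeholders; the text gives no example values for R4.
PRESET_RANGES = {
    "air_temperature": (-5.0, 40.0),   # R1 [deg C]
    "humidity": (30.0, 90.0),          # R2 [%]
    "latitude": (20.0, 46.0),          # R3 [deg N]
    "room_size": (6.0, 50.0),          # R4 [m^2], placeholder values
}

def within_preset_ranges(x):
    """True only if every operating-parameter value x_n satisfies A_n <= x_n <= B_n."""
    return all(a <= x[name] <= b for name, (a, b) in PRESET_RANGES.items())
```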

In this fourth embodiment as well, the optimum air volume, air direction, and operation time of the air conditioner actually measured for various input values xn (n = 1, 2, 3, 4) within the preset ranges Rn are obtained in advance as training data. That is, training data are obtained in advance by actual measurement for the values of the plural types of operating parameters relating to the air conditioner within the preset ranges Rn, the structure of the neural network is determined from these values of the operating parameters and the training data, and the weights of the neural network are learned in advance so that the differences between the output values y1, y2, y3 and the training data corresponding to the values of the plural types of operating parameters relating to the air conditioner become small. The training data obtained in advance by actual measurement for the values of the plural types of operating parameters relating to the air conditioner within the preset ranges Rn are stored in the storage unit of the electronic control unit 56.

In this fourth embodiment as well, a neural network having the same structure as the neural network used in the advance learning is used, and learning is continued on board during vehicle operation using the weights of the neural network at the time the advance learning was completed. FIG. 23 shows the learning processing routine of this fourth embodiment performed on board; this routine is executed by interruption at fixed time intervals (for example, every second). Note that the processing performed in each step of the learning processing routine shown in FIG. 23 is the same as the processing performed in each step of the learning processing routine shown in FIG. 12, except that the types and numbers of the input values and the types and numbers of the output values differ.

That is, referring to FIG. 23, first, in step 401, the learned weights stored in the storage unit of the electronic control unit 56, the training data used in the advance learning, that is, the training data obtained in advance by actual measurement for the values of the plural types of operating parameters relating to the air conditioner within the preset ranges Rn, and the values An, Bn (n = 1, 2, 3, 4) indicating the ranges of the input data, that is, the preset ranges of the values of the plural types of operating parameters relating to the air conditioner (FIG. 22A), are read. The learned weights are used as the initial values of the weights. Next, in step 402, the number K of nodes of the hidden layer immediately preceding the output layer of the neural network used in the advance learning is read. The routine then proceeds to step 403, where new input values x, that is, new values of the plural types of operating parameters relating to the air conditioner, are acquired and stored in the storage unit of the electronic control unit 56. Furthermore, in step 403, the actually measured values of the air volume, air direction, and operation time of the air conditioner for the new input values x are stored in the storage unit of the electronic control unit 56 as training data. That is, in step 403, the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters relating to the air conditioner are stored in the storage unit of the electronic control unit 56.

Next, in step 404, it is determined whether the new input values xn, that is, the newly acquired values of the plural types of operating parameters relating to the air conditioner, are within the preset ranges Rn (between An and Bn), that is, whether each new input value xn is not less than An and not more than Bn. When the new input values xn are within the preset ranges Rn, the routine proceeds to step 405, where the input values xn, that is, the newly acquired values of the plural types of operating parameters relating to the air conditioner, are input to the corresponding nodes of the input layer of the neural network, and the weights of the neural network are learned by the error backpropagation method, based on the output values y1, y2, y3 output from the nodes of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters relating to the air conditioner, so that the differences between the output values y1, y2, y3 and the training data become small.

On the other hand, when it is determined in step 404 that at least one of the new input values xn, that is, at least one of the newly acquired values of the plural types of operating parameters relating to the air conditioner, is not within the preset range Rn (between An and Bn), for example when, as shown in FIG. 22B, the input value x1 representing the air temperature is within the range B1 to C1 (B1 < C1), or the input value x3 representing the position is within the range C3 to A3 (C3 < A3), the routine proceeds to step 406. In step 406, first, the density D of the training data for the new input value xn within the range (Bn to Cn) or the range (Cn to An) to which the new input value xn belongs is calculated (D = number of training data / (Cn - Bn), or D = number of training data / (An - Cn)). The definition of this training data density D is as described above. After the training data density D is calculated in step 406, it is determined whether the training data density D has become higher than a predetermined data density D0. When the training data density D is lower than the predetermined data density D0, the processing cycle is ended.
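The density calculation of step 406 can be sketched as follows, under the assumption that the new input values have extended the range on one side only, down to Cn below An or up to Cn above Bn:

```python
def training_data_density(n_training_data, a_n, b_n, c_n):
    """Density D of the training data accumulated in the out-of-range interval.
    Downward extension (C_n < A_n): D = count / (A_n - C_n).
    Upward extension   (C_n > B_n): D = count / (C_n - B_n)."""
    width = (a_n - c_n) if c_n < a_n else (c_n - b_n)
    return n_training_data / width
```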

On the other hand, when it is determined in step 406 that the training data density D has become higher than the predetermined data density D0, the routine proceeds to step 407. In this case, when D (= number of training data / (An - Cn)) > D0, the number α of additional nodes is calculated by the following equation.

Number of additional nodes α = round{(K/(Bn - An)) · (An - Cn)}

On the other hand, when D (= number of training data / (Cn - Bn)) > D0, the number α of additional nodes is calculated by the following equation.

Number of additional nodes α = round{(K/(Bn - An)) · (Cn - Bn)}

Note that, in the above equations, K denotes the number of nodes, and round means rounding off to the nearest integer.
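The two equations above differ only in the width of the newly covered interval, so they can be sketched as a single helper. Note that Python's built-in round performs round-half-to-even, so the round-half-up behavior stated in the text is written out explicitly:

```python
import math

def additional_nodes(k, a_n, b_n, c_n):
    """alpha = round{(K / (B_n - A_n)) * width}, where the width is
    A_n - C_n for a downward extension and C_n - B_n for an upward
    extension, and round is round-half-up as stated in the text."""
    width = (a_n - c_n) if c_n < a_n else (c_n - b_n)
    return math.floor(k / (b_n - a_n) * width + 0.5)  # round half up
```

For example, with K = 4 existing nodes over a range of width 10 and an upward extension of width 7.5, alpha = round(3.0) = 3.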

After the number α of additional nodes is calculated in step 407, the routine proceeds to step 408, where the number K of nodes of the hidden layer immediately preceding the output layer of the neural network is updated so as to be increased by the number α of additional nodes (K ← K + α). In this way, in this fourth embodiment, when the data density obtained by dividing the number of training data by the difference between the maximum value and the minimum value indicating the preset range of the values of the operating parameter increases, the number of nodes of the hidden layer immediately preceding the output layer of the neural network is increased. That is, in this fourth embodiment, the number of nodes of the hidden layer immediately preceding the output layer of the neural network is increased in accordance with the increase in this data density.

After the number K of nodes of the hidden layer immediately preceding the output layer is increased by the number α of additional nodes in step 408 (K ← K + α), the routine proceeds to step 409, where the neural network is updated so that the number K of nodes of the hidden layer immediately preceding the output layer is increased. The routine then proceeds to step 405. In step 405, the training data newly obtained for the new input values x are also included in the training data, and the weights of the updated neural network are learned so that the differences between the output values y1, y2, y3 and the training data become small. That is, in step 405, the weights of the updated neural network are learned, using the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters relating to the air conditioner and the training data obtained in advance by actual measurement for the values of the plural types of operating parameters relating to the air conditioner within the preset ranges Rn, so that the differences between the output values y1, y2, y3, which change in accordance with the values of the plural types of operating parameters relating to the air conditioner inside and outside the preset ranges, and the training data corresponding to the values of these operating parameters become small.
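Steps 408 and 409, which enlarge the last hidden layer while preserving the already-learned weights, can be sketched with NumPy weight matrices. The near-zero random initialization of the α new nodes is an assumption for illustration; the text does not specify how new weights are initialized before the subsequent backpropagation tunes them.

```python
import numpy as np

def expand_last_hidden_layer(w_in, w_out, alpha, rng=None):
    """Add alpha nodes to the hidden layer immediately preceding the output
    layer. w_in maps the previous layer to the K hidden nodes (shape K x n_prev);
    w_out maps the hidden nodes to the outputs (shape n_out x K). Existing
    weights are kept unchanged so the advance learning is preserved; the new
    rows and columns start near zero and are tuned by further backpropagation."""
    rng = rng or np.random.default_rng(0)
    k, n_prev = w_in.shape
    n_out, _ = w_out.shape
    w_in_new = np.vstack([w_in, 0.01 * rng.standard_normal((alpha, n_prev))])
    w_out_new = np.hstack([w_out, 0.01 * rng.standard_normal((n_out, alpha))])
    return w_in_new, w_out_new
```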

FIGS. 24 and 25 show a modification of the fourth embodiment. In this modification, the preset range of the values of each type of operating parameter relating to the air conditioner is divided into a plurality of sections. That is, Rw, Rx, Ry, and Rz in FIG. 24 represent the preset ranges of the air temperature, the humidity, the position, and the size of the room in which the air conditioner is installed, respectively, and, as shown in FIG. 24, each of these preset ranges is divided into a plurality of sections. Note that, in FIG. 24, W1, W2 … Wn, X1, X2 … Xn, Y1, Y2 … Yn, and Z1, Z2 … Zn represent the divided sections of the values of the respective types of operating parameters.

Furthermore, in this modification, a plurality of divided regions [Wi, Xj, Yk, Zl] (i = 1, 2 … n, j = 1, 2 … n, k = 1, 2 … n, l = 1, 2 … n), each defined by a combination of the divided sections of the values of the respective types of operating parameters, are set in advance, and an independent neural network is created for each divided region [Wi, Xj, Yk, Zl]. These neural networks have the structure shown in FIG. 21. In this case, the number of nodes of the hidden layer (L=3) differs for each neural network; hereinafter, the number of nodes of the hidden layer immediately preceding the output layer of the neural network in the divided region [Wi, Xj, Yk, Zl] is denoted by Ki,j,k,l. The numbers Ki,j,k,l of nodes of this hidden layer are set in advance according to the complexity of the change of the training data with respect to the change of the input values within each divided region [Wi, Xj, Yk, Zl].
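Locating the divided region [Wi, Xj, Yk, Zl] to which a set of input values belongs amounts to a bisection over the section boundaries of each parameter. A sketch, in which the boundary lists are hypothetical examples:

```python
import bisect

def region_index(x, boundaries):
    """Map the four input values to a divided-region index (i, j, k, l).
    `boundaries` holds, per parameter, the sorted division points of its
    preset range (the sections W, X, Y, Z of FIG. 24)."""
    return tuple(bisect.bisect_right(b, v) for v, b in zip(x, boundaries))

# One independent neural network per divided region, keyed by (i, j, k, l).
networks = {}
```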

In this modification, the air volume, air direction, and operation time of the air conditioner actually measured for the various input values x1, x2, x3, x4, that is, the air temperature, the humidity, the position, and the size of the room in which the air conditioner is installed, within each divided region [Wi, Xj, Yk, Zl] formed within the preset ranges Rw, Rx, Ry, Rz are obtained in advance as training data. That is, training data are obtained in advance by actual measurement for the values of the plural types of operating parameters relating to the air conditioner within the preset ranges Rw, Rx, Ry, Rz, the structure of the neural network for each divided region [Wi, Xj, Yk, Zl], including the number Ki,j,k,l of nodes of the hidden layer, is determined from these values of the operating parameters and the training data, and the weights of the neural network of each divided region [Wi, Xj, Yk, Zl] are learned in advance so that the differences between the output values y1, y2, y3 and the corresponding training data become small.

Therefore, in this modification, the divided regions [Wi, Xj, Yk, Zl] for which this advance learning has been performed are hereinafter also referred to as learned divided regions [Wi, Xj, Yk, Zl]. Note that the training data obtained in advance by actual measurement for the values of the plural types of operating parameters relating to the air conditioner within the preset ranges Rw, Rx, Ry, Rz are stored in the storage unit of the electronic control unit 56. In this modification as well, for each divided region [Wi, Xj, Yk, Zl], a neural network having the same structure as the neural network used in the advance learning is used, and learning is continued on board during vehicle operation using the weights of the neural network at the time the advance learning was completed. FIG. 25 shows the learning processing routine of this modification performed on board; this routine is executed by interruption at fixed time intervals (for example, every second).

Referring to FIG. 25, first, in step 500, the learned weights stored in the storage unit of the electronic control unit 56, the training data used in the advance learning, that is, the training data obtained in advance by actual measurement for the values of the plural types of operating parameters relating to the air conditioner within the preset ranges Rw, Rx, Ry, Rz, and the learned divided regions [Wi, Xj, Yk, Zl] are read. The learned weights are used as the initial values of the weights. Next, in step 501, the number Ki,j,k,l of nodes of the hidden layer immediately preceding the output layer used in the advance learning for each learned divided region [Wi, Xj, Yk, Zl] is read. The routine then proceeds to step 502, where new input values x1, x2, x3, x4, that is, the air temperature, the humidity, the position, and the size of the room in which the air conditioner is installed, are acquired, and these new input values x1, x2, x3, x4, that is, the new values of the plural types of operating parameters relating to the air conditioner, are stored in the storage unit of the electronic control unit 56. Furthermore, in step 502, the actually measured air volume, air direction, and operation time of the air conditioner for the new input values x1, x2, x3, x4 are stored in the storage unit of the electronic control unit 56 as training data. That is, in step 502, the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters relating to the air conditioner are stored in the storage unit of the electronic control unit 56.

Next, in step 503, it is determined whether the new input values x1, x2, x3, x4 are within a learned divided region [Wi, Xj, Yk, Zl], that is, whether the newly acquired values of the plural types of operating parameters relating to the air conditioner are within the preset ranges Rw, Rx, Ry, Rz. When the new input values x1, x2, x3, x4 are within a learned divided region [Wi, Xj, Yk, Zl], that is, when the newly acquired values of the plural types of operating parameters relating to the air conditioner are within the preset ranges Rw, Rx, Ry, Rz, the routine proceeds to step 504, where the new input values x1, x2, x3, x4 are input to the nodes of the input layer of the neural network of the learned divided region [Wi, Xj, Yk, Zl] to which the newly acquired values of the plural types of operating parameters relating to the air conditioner belong, and, based on the output values y1, y2, y3 output from the nodes of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the plural types of operating parameters relating to the air conditioner, the weights of the neural network of that learned divided region [Wi, Xj, Yk, Zl] are further learned by the error backpropagation method so that the differences between the output values y1, y2, y3 and the training data become small.

On the other hand, when it is determined in step 503 that the new input values x1, x2, x3, x4 are not within any learned divided region [Wi, Xj, Yk, Zl], the routine proceeds to step 505, where, first, an unlearned region defined by the new input values x1, x2, x3, x4 is set outside the preset ranges Rw, Rx, Ry, Rz. For example, when it is determined that the new input values x2, x3, x4 are within the corresponding preset ranges Rx, Ry, Rz and the new input value x1 is not within the corresponding preset range Rw, if the section to which the new input value x1 belongs is denoted by Wa, an unlearned region [Wa, Xj, Yk, Zl] defined by the new input values x1, x2, x3, x4 is set. Further, when it is determined that the new input values x3, x4 are within the corresponding preset ranges Ry, Rz and the new input values x1, x2 are not within the corresponding preset ranges Rw, Rx, if the section to which the new input value x1 belongs is denoted by Wa and the section to which the new input value x2 belongs is denoted by Xb, an unlearned region [Wa, Xb, Yk, Zl] defined by the new input values x1, x2, x3, x4 is set.

Next, in step 505, a new neural network for the unlearned region is created. After the new neural network is created in step 505, the routine proceeds to step 504. In step 504, for the unlearned region, the weights of the new neural network created for the unlearned region are learned so that the differences between the output values y1, y2, y3 and the training data become small.
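The branch structure of steps 503 to 505, namely reusing the learned network when the region already has one and otherwise creating a fresh network for the unlearned region, can be sketched as:

```python
def select_network(networks, region, make_network):
    """Return the network for `region`, creating a new, untrained one when
    the new input values fall in an unlearned region (step 505). In either
    case the caller then continues weight learning by the error
    backpropagation method (step 504)."""
    if region not in networks:
        networks[region] = make_network()
    return networks[region]
```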

FIGS. 26 to 33 show a fifth embodiment in which the machine learning apparatus of the present invention is applied to the estimation of the degree of deterioration of a secondary battery. In this embodiment, the degree of deterioration of the secondary battery is detected based on the air temperature, the temperature of the secondary battery, the discharge time of the secondary battery, and the discharge energy per unit time of the secondary battery. In this case, the range of the conditions and manner in which the secondary battery will be used, that is, the usage ranges of the values of operating parameters of the secondary battery such as the air temperature, the temperature of the secondary battery, the discharge time of the secondary battery, and the discharge energy per unit time of the secondary battery, can be assumed in advance according to the type of the secondary battery. Therefore, the weights of the neural network are normally learned in advance, for the preset ranges of the values of the operating parameters of the secondary battery, so that the difference between the output value of the neural network and the actually measured degree of deterioration of the secondary battery becomes small.

In this case, however, the values of the operating parameters of the secondary battery may still fall outside the preset ranges. Since no learning based on actual values has been performed outside the preset ranges, the output value calculated using the neural network would then deviate greatly from the actual value. Therefore, in this embodiment as well, when a newly acquired value of an operating parameter relating to the secondary battery is outside the preset range, the number of nodes of the hidden layer immediately preceding the output layer of the neural network is increased, or the number of neural networks is increased, and the weights of the neural network are learned using both the training data obtained for the newly acquired values of the operating parameters relating to the secondary battery and the training data obtained for the values of the operating parameters relating to the secondary battery within the preset ranges.

Next, this fifth embodiment will be described specifically. Referring to FIG. 26, reference numeral 60 denotes a secondary battery, 61 denotes an electric motor, 62 denotes a drive control device of the electric motor 61, 63 denotes a voltmeter for detecting the voltage between the output terminals of the secondary battery 60, 64 denotes an ammeter for detecting the current supplied from the secondary battery 60 to the electric motor 61 via the drive control device 62, 65 denotes a thermometer for detecting the air temperature, 66 denotes a temperature sensor for detecting the temperature of the secondary battery 60, and 67 denotes an electronic control unit having the same configuration as the electronic control unit 30 shown in FIG. 1. As shown in FIG. 26, the current supplied to the electric motor 61 detected by the ammeter 64, the voltage between the output terminals of the secondary battery 60 detected by the voltmeter 63, the air temperature detected by the thermometer 65, and the temperature of the secondary battery 60 detected by the temperature sensor 66 are input to the electronic control unit 67, and the estimated value of the degree of deterioration of the secondary battery 60 is calculated in the electronic control unit 67. Note that, in the electronic control unit 67, the discharge time of the secondary battery 60 is obtained based on the detection values of the ammeter 64, and the discharge energy per unit time (current · voltage) of the secondary battery 60 is obtained based on the detection values of the ammeter 64 and the voltmeter 63.
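The derived inputs mentioned at the end of this paragraph, the discharge time and the discharge energy per unit time (current · voltage), can be accumulated from the periodic ammeter and voltmeter readings. A minimal sketch, assuming a fixed sampling interval and that a positive current reading means the battery is discharging:

```python
def update_discharge_stats(current_a, voltage_v, dt_s, state):
    """Accumulate the discharge time and compute the discharge energy per
    unit time (current x voltage, i.e. instantaneous power in watts) from
    ammeter and voltmeter readings sampled every dt_s seconds. `state` is
    a dict carrying the running discharge time in seconds."""
    state.setdefault("discharge_time_s", 0.0)
    if current_a > 0.0:                 # battery is discharging
        state["discharge_time_s"] += dt_s
    power_w = current_a * voltage_v     # discharge energy per unit time
    return state["discharge_time_s"], power_w
```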

FIG. 27 shows the neural network used in this fifth embodiment. As shown in FIG. 27, the input layer (L=1) of this neural network consists of four nodes, which receive the input value x1 representing the air temperature, the input value x2 representing the temperature of the secondary battery 60, the input value x3 representing the discharge time of the secondary battery 60, and the input value x4 representing the discharge energy per unit time of the secondary battery 60. The number of hidden layers (L=2, L=3) may be one or any other number, and the number of nodes in each hidden layer (L=2, L=3) may also be arbitrary. In this fifth embodiment, the output layer (L=4) has a single node, which outputs the value y representing the degree of deterioration of the secondary battery 60.
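By way of illustration only (this code does not appear in the patent), the 4-input, single-output network of FIG. 27 can be sketched in pure Python. The hidden-layer sizes, the random weight initialization, and the sigmoid activation are all assumptions made for the sketch; the patent does not specify them.

```python
import math
import random

def make_network(hidden_sizes, n_inputs=4, n_outputs=1, seed=0):
    """Build weight matrices for a fully connected network:
    4 inputs -> hidden layers -> 1 output (structure of FIG. 27)."""
    rng = random.Random(seed)
    sizes = [n_inputs] + list(hidden_sizes) + [n_outputs]
    return [[[rng.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(n)]
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(weights, x):
    """Propagate input x through the network: sigmoid on hidden layers,
    linear activation on the single output node."""
    a = x
    for i, layer in enumerate(weights):
        z = [sum(w * v for w, v in zip(row, a)) for row in layer]
        last = i == len(weights) - 1
        a = z if last else [1.0 / (1.0 + math.exp(-s)) for s in z]
    return a

# Hidden layers L=2 and L=3 with 8 nodes each (sizes assumed)
net = make_network([8, 8])
# x1 air temp, x2 battery temp, x3 discharge time, x4 discharge energy per unit time
y = forward(net, [25.0, 30.0, 120.0, 50.0])
```

The returned list `y` has one element: the estimated degree of deterioration before any training.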

On the other hand, in FIG. 28A, the interval between A1 and B1, denoted R1, represents the preset range of the air temperature (for example, −5 °C to 40 °C); the interval between A2 and B2, denoted R2, represents the preset range of the temperature of the secondary battery 60 (for example, −40 °C to 40 °C); the interval between A3 and B3, denoted R3, represents the preset range of the discharge time of the secondary battery 60; and the interval between A4 and B4, denoted R4, represents the preset range of the discharge energy per unit time of the secondary battery 60. As in FIG. 28A, in FIG. 28B the interval between A1 and B1 represents the preset range of the air temperature, the interval between A2 and B2 the preset range of the temperature of the secondary battery 60, the interval between A3 and B3 the preset range of the discharge time of the secondary battery 60, and the interval between A4 and B4 the preset range of the discharge energy per unit time of the secondary battery 60.

Here, the relationship between the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, the discharge energy per unit time of the secondary battery 60, and the degree of deterioration of the secondary battery 60 will be explained briefly. The more the secondary battery 60 deteriorates, the higher its internal resistance becomes, so the degree of deterioration of the secondary battery 60 can in principle be estimated from the change in the internal resistance. In practice, however, it is difficult to detect the internal resistance directly. On the other hand, for a given discharge current, the higher the internal resistance, the greater the heat generated by the secondary battery 60; consequently, the higher the internal resistance, that is, the more the secondary battery 60 has deteriorated, the higher its temperature becomes. The degree of deterioration of the secondary battery 60 can therefore be estimated from the amount of temperature rise of the secondary battery 60. This temperature rise is affected by the air temperature, and is also governed by the discharge time of the secondary battery 60 and by the discharge energy per unit time of the secondary battery 60. Accordingly, the degree of deterioration of the secondary battery 60 is determined by the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, and hence can be estimated from these four quantities.

On the other hand, as the secondary battery 60 deteriorates, the amount of charge it can hold decreases. In this case, when the circuit of the secondary battery 60 is closed immediately after charging of the secondary battery 60 is completed, a voltage proportional to the amount of charge stored in the secondary battery 60 appears between its output terminals. That is, the voltage between the output terminals of the secondary battery 60 detected by the voltmeter 63 immediately after charging is completed is proportional to the amount of charge in the secondary battery 60. The degree of deterioration of the secondary battery 60 can therefore be detected from the voltage measured by the voltmeter 63 immediately after charging is completed. Accordingly, in this fifth embodiment, the degree of deterioration of the secondary battery 60 detected in this way is used as the training data for the output value y.
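A minimal sketch of turning the post-charge terminal voltage into a training-data value for y. The patent only states that the voltage is proportional to the stored charge; the normalization against a reference "new battery" voltage below is an assumption introduced for the example.

```python
def deterioration_from_voltage(v_after_charge, v_new_battery):
    """Hypothetical training-data value for y: the post-charge terminal voltage
    is proportional to the stored charge, so the relative voltage drop versus a
    new battery is used here as the degree of deterioration
    (0.0 = like new, approaching 1.0 = fully degraded). Normalization assumed."""
    return 1.0 - v_after_charge / v_new_battery
```

For example, a battery whose post-charge voltage has fallen from 12.0 V to 11.4 V would be assigned a deterioration degree of 0.05 under this assumed scale.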

Next, referring to FIGS. 29 and 30, the routine for calculating the discharge time and related quantities of the secondary battery 60 and the routine for acquiring training data, both executed in the electronic control unit 67, will be described. Referring to FIG. 29, which shows the calculation routine, in step 600 the discharge time of the secondary battery 60 is calculated from the output value of the ammeter 64. Next, in step 601, the discharge energy per unit time of the secondary battery 60 is calculated from the output value of the ammeter 64 and the output value of the voltmeter 63.
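A minimal sketch of steps 600 and 601 (the sampling scheme is assumed; the patent does not give the routine's internals): the discharge time is taken as the span over which the ammeter reads a positive current, and the discharge energy per unit time is the average of current × voltage over that span.

```python
def discharge_stats(samples, dt=1.0):
    """samples: list of (current_A, voltage_V) readings taken every dt seconds.
    Discharge is assumed whenever current > 0.
    Returns (discharge_time_s, mean_power_W), where power = current * voltage."""
    discharging = [(i, v) for i, v in samples if i > 0.0]
    t = len(discharging) * dt
    if t == 0:
        return 0.0, 0.0
    energy = sum(i * v * dt for i, v in discharging)  # joules over the discharge span
    return t, energy / t
```

With 1-second sampling, two discharging samples of 2 A and 4 A at 10 V give a discharge time of 2 s and a mean discharge power of 30 W.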

On the other hand, referring to FIG. 30, which shows the training data acquisition routine, first, in step 610, it is determined whether the secondary battery 60 is being charged. When the secondary battery 60 is not being charged, the processing cycle ends. When the secondary battery 60 is being charged, the routine proceeds to step 611, where it is determined whether charging of the secondary battery 60 has been completed. When it is determined that charging of the secondary battery 60 has been completed, the routine proceeds to step 612, where it is determined whether the training data request flag, which is set when training data are required, has been set. This training data request flag will be described later. When the training data request flag is not set, the processing cycle ends. When the training data request flag is set, the routine proceeds to step 613, where the degree of deterioration of the secondary battery 60 is detected from the voltage measured by the voltmeter 63. The routine then proceeds to step 614, where the additional learning flag is set.

In this fifth embodiment as well, the degree of deterioration of the secondary battery 60 corresponding to the various input values xn (n = 1, 2, 3, 4) within the preset ranges Rn, that is, the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, is obtained in advance as training data. In other words, training data are obtained in advance by actual measurement for the values of the plurality of types of operating parameters related to the secondary battery 60 within the preset ranges Rn. The structure of the neural network is determined from these values of the operating parameters related to the secondary battery 60 and the training data, and the weights of the neural network are learned in advance so that the difference between the output value y and the training data corresponding to the values of the plurality of types of operating parameters related to the secondary battery 60 becomes small. The training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters related to the secondary battery 60 within the preset ranges Rn are stored in the storage unit of the electronic control unit 67.

In this fifth embodiment as well, a neural network with the same structure as the neural network used in the prior learning is used, and, starting from the weights obtained when that learning was completed, learning is continued on board while the vehicle is operating. FIG. 31 shows the learning processing routine of the fifth embodiment performed on board; this routine is executed by interruption at fixed intervals (for example, every second).

That is, referring to FIG. 31, first, in step 700, the following are read in: the learned weights stored in the storage unit of the electronic control unit 67; the training data used in the prior learning, that is, the training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters related to the secondary battery 60 within the preset ranges Rn; and the values An, Bn (n = 1, 2, 3, 4) expressing the ranges of the input data, that is, the preset ranges of the values of the plurality of types of operating parameters related to the secondary battery 60 (FIG. 28A). The learned weights are used as the initial values of the weights. Next, in step 701, the number K of nodes in the hidden layer immediately preceding the output layer of the neural network used in the prior learning is read in. Next, in step 702, it is determined whether the additional learning flag has been set. When the additional learning flag has not been set, the routine proceeds to step 703.

In step 703, new input values x, that is, new values of the plurality of types of operating parameters related to the secondary battery 60, are acquired, and these new input values x are stored in the storage unit of the electronic control unit 67.

Next, in step 704, it is determined whether the new input values xn, that is, the newly acquired values of the plurality of types of operating parameters related to the secondary battery 60, lie within the preset ranges Rn (between An and Bn), that is, whether each new input value xn satisfies An ≤ xn ≤ Bn. When the new input values xn are within the preset ranges Rn, the routine proceeds to step 705, where the input values xn, that is, the newly acquired values of the plurality of types of operating parameters related to the secondary battery 60, are input to the corresponding nodes of the input layer of the neural network, and, on the basis of the output value y output from the node of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the operating parameters related to the secondary battery 60, the weights of the neural network are learned by the error backpropagation method so that the difference between the output value y and the training data becomes small.
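The check of step 704 can be sketched as follows (hypothetical helper, not from the patent; the discharge-time and discharge-energy bounds are assumed, since the patent gives example bounds only for the two temperatures):

```python
def in_preset_ranges(xs, bounds):
    """Step-704-style check: True only if every input x_n lies in [A_n, B_n]."""
    return all(a <= x <= b for x, (a, b) in zip(xs, bounds))

# Preset ranges R1..R4 of FIG. 28A; R3 and R4 bounds are assumptions
bounds = [(-5.0, 40.0),   # R1: air temperature (deg C)
          (-40.0, 40.0),  # R2: battery temperature (deg C)
          (0.0, 600.0),   # R3: discharge time (s), assumed
          (0.0, 100.0)]   # R4: discharge energy per unit time (W), assumed
```

An input vector failing this check triggers the branch to step 706, where the training data request flag is set.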

On the other hand, when it is determined in step 704 that at least one of the new input values xn, that is, at least one of the newly acquired values of the plurality of types of operating parameters related to the secondary battery 60, does not lie within the corresponding preset range Rn (between An and Bn), for example, when, as shown in FIG. 28B, the input value x1 representing the air temperature lies in the range (B1 to C1) with B1 < C1, or the input value x3 representing the discharge time of the secondary battery 60 lies in the range (C3 to A3) with C3 < A3, the routine proceeds to step 706. In step 706, the training data request flag is set, and the new input values xn acquired at this time are stored as new input values xn for additional learning. The processing cycle then ends.

Once the training data request flag has been set, as seen from the training data acquisition routine of FIG. 30, the degree of deterioration of the secondary battery 60 is detected when charging of the secondary battery 60 is completed, and this degree of deterioration is stored as training data for additional learning. The additional learning flag is then set. Once the additional learning flag has been set in step 706, the routine proceeds from step 702 to step 707 in the next processing cycle. In step 707, the new input values xn stored for additional learning and the training data stored for additional learning are read from the storage unit, and the density D of the training data with respect to the new input values xn in the range (Bn to Cn) or (Cn to An) to which those new input values xn belong is calculated (D = number of training data / (Cn − Bn), or number of training data / (An − Cn)). The definition of this training data density D is as described above. After the training data density D is calculated in step 707, it is determined whether the training data density D has become higher than a predetermined data density D0. When the training data density D is lower than the predetermined data density D0, the processing cycle ends.
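The density test of step 707 reduces to one division and one comparison; a minimal sketch (the threshold value D0 is an assumption, since the patent leaves it unspecified):

```python
def training_data_density(n_data, lo, hi):
    """Step-707-style density: number of stored training data divided by the
    width of the out-of-range interval they fall in, i.e. (Cn - Bn) above the
    preset range or (An - Cn) below it."""
    return n_data / (hi - lo)

D0 = 1.0  # predetermined density threshold (value assumed for the sketch)
```

For example, 3 training data accumulated over a 2-degree interval above B1 give D = 1.5, which exceeds this assumed D0 and would trigger the node-addition step 708.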

On the other hand, when it is determined in step 707 that the training data density D has become higher than the predetermined data density D0, the routine proceeds to step 708. In this case, when D (= number of training data / (An − Cn)) > D0, the number α of additional nodes is calculated by the following equation.

Number of additional nodes α = round{(K/(Bn − An)) · (An − Cn)}

On the other hand, when D (= number of training data / (Cn − Bn)) > D0, the number α of additional nodes is calculated by the following equation.

Number of additional nodes α = round{(K/(Bn − An)) · (Cn − Bn)}

Note that, in the above equations, K denotes the number of nodes, and round denotes rounding to the nearest integer.
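Both equations scale the preset node count per unit of input range, K/(Bn − An), by the width of the newly covered interval. A sketch of this calculation follows; the rounding is implemented as round-half-up to match "四捨五入" in the original text, rather than Python's built-in `round`, which rounds halves to even.

```python
import math

def additional_nodes(k, a_n, b_n, width_new):
    """alpha = round{(K / (Bn - An)) * width_new}, where width_new is
    (An - Cn) when the new data lie below the preset range, or
    (Cn - Bn) when they lie above it. Half-up rounding."""
    return math.floor(k / (b_n - a_n) * width_new + 0.5)
```

For instance, with K = 10 nodes over a preset range of width 5, a newly covered interval of width 2 yields α = 4 additional nodes.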

After the number α of additional nodes is calculated in step 708, the routine proceeds to step 709, where the number K of nodes in the hidden layer immediately preceding the output layer of the neural network is updated so as to increase by the number α of additional nodes (K ← K + α). In this way, in the fifth embodiment, when the data density obtained by dividing the number of training data by the difference between the maximum value and the minimum value defining the preset range of the values of the operating parameters increases, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased. That is, in this fifth embodiment, the number of nodes in the hidden layer immediately preceding the output layer of the neural network is increased in accordance with the increase in this data density.

After the number K of nodes in the hidden layer immediately preceding the output layer has been increased by the number α of additional nodes in step 709 (K ← K + α), the routine proceeds to step 710, where the neural network is updated so that the number K of nodes in the hidden layer immediately preceding the output layer is increased. The routine then proceeds to step 705. In step 705, the training data newly obtained for the new input values x are also included in the training data, and the weights of the updated neural network are learned so that the difference between the output value y and the training data becomes small. That is, in step 705, the weights of the updated neural network are learned using the training data obtained by actual measurement for the newly acquired values of the plurality of types of operating parameters related to the secondary battery 60 and the training data obtained in advance by actual measurement for the values of those operating parameters within the preset ranges Rn, so that the difference between the output value y, which varies with the values of the plurality of types of operating parameters related to the secondary battery 60 both inside and outside the preset ranges, and the training data corresponding to those values becomes small.
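Structurally, growing the last hidden layer by α nodes (steps 709 and 710) means appending α weight rows to that layer and α input weights to each output node, while keeping the already-learned weights as initial values for the subsequent relearning. A sketch under the weight representation assumed earlier (lists of rows; random initialization of the new weights is an assumption):

```python
import random

def grow_last_hidden_layer(w_hidden, w_out, alpha, seed=0):
    """w_hidden: weight rows of the last hidden layer (one row per node);
    w_out: output-layer rows (one weight per last-hidden-layer node).
    Adds alpha nodes: alpha new rows in w_hidden and a matching new
    weight in every output row. Existing (learned) weights are untouched."""
    rng = random.Random(seed)
    n_in = len(w_hidden[0])
    for _ in range(alpha):
        w_hidden.append([rng.uniform(-0.5, 0.5) for _ in range(n_in)])
        for row in w_out:
            row.append(rng.uniform(-0.5, 0.5))
    return w_hidden, w_out
```

After this expansion, backpropagation over the combined old and new training data adjusts all weights, new and old alike.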

FIGS. 32 and 33 show a modification of the fifth embodiment. In this modification, the preset range of the values of each type of operating parameter related to the secondary battery 60 is divided into a plurality of sections. That is, Rw, Rx, Ry, and Rz in FIG. 32 denote the preset ranges of the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, respectively, and, as shown in FIG. 32, each of these preset ranges is divided into a plurality of sections. Note that, in FIG. 32, W1, W2 … Wn, X1, X2 … Xn, Y1, Y2 … Yn, and Z1, Z2 … Zn denote the divided ranges of the values of the respective types of operating parameters.

Furthermore, in this modification, a plurality of divided regions [Wi, Xj, Yk, Zl] (i = 1, 2 … n, j = 1, 2 … n, k = 1, 2 … n, l = 1, 2 … n), each demarcated by a combination of the divided ranges of the values of the respective types of operating parameters, are set in advance, and an independent neural network is created for each divided region [Wi, Xj, Yk, Zl]. These neural networks have the structure shown in FIG. 27. In this case, the number of nodes in the hidden layer (L=3) differs for each neural network; hereinafter, the number of nodes in the hidden layer immediately preceding the output layer of the neural network for the divided region [Wi, Xj, Yk, Zl] is denoted by Ki,j,k,l. This number Ki,j,k,l of hidden-layer nodes is set in advance according to the complexity of the variation of the training data with respect to the variation of the input values within each divided region [Wi, Xj, Yk, Zl].
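Selecting the per-region network amounts to mapping the four input values to the index tuple (i, j, k, l) of the divided region they fall in. A sketch using binary search over the division boundaries (the boundary values themselves are assumptions; the patent only defines the divisions symbolically):

```python
import bisect

def region_index(x, edges):
    """Map one operating-parameter value to its division (W1..Wn, X1..Xn, etc.).
    edges: ascending inner boundaries splitting the preset range into
    len(edges) + 1 divisions; returns a 0-based division index."""
    return bisect.bisect_right(edges, x)

def region_key(values, all_edges):
    """0-based (i, j, k, l) key of the divided region [Wi, Xj, Yk, Zl]
    containing the four input values x1..x4."""
    return tuple(region_index(v, e) for v, e in zip(values, all_edges))
```

The resulting tuple can index a dictionary of per-region networks, each with its own hidden-layer node count Ki,j,k,l.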

In this modification, the degree of deterioration of the secondary battery 60 measured for the various input values x1, x2, x3, x4, that is, the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, within each divided region [Wi, Xj, Yk, Zl] formed inside the preset ranges Rw, Rx, Ry, Rz of the values of the plurality of types of operating parameters related to the secondary battery 60, is obtained in advance as training data. In other words, training data are obtained in advance by actual measurement for the values of the plurality of types of operating parameters related to the secondary battery 60 within the preset ranges Rw, Rx, Ry, Rz. From these values of the operating parameters related to the secondary battery 60 and the training data, the structure of the neural network for each divided region [Wi, Xj, Yk, Zl], including the number Ki,j,k,l of hidden-layer nodes, is determined, and the weights of the neural network for each divided region [Wi, Xj, Yk, Zl] are learned in advance so that the difference between the output value y and the corresponding training data becomes small.

Therefore, in this modification, a divided region [Wi, Xj, Yk, Zl] for which this prior learning has been performed will hereinafter also be called a learned divided region [Wi, Xj, Yk, Zl]. Note that the training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters related to the secondary battery 60 within the preset ranges Rw, Rx, Ry, Rz are stored in the storage unit of the electronic control unit 67. In this modification as well, for each divided region [Wi, Xj, Yk, Zl], a neural network with the same structure as the neural network used in the prior learning is used, and, starting from the weights obtained when that learning was completed, learning is continued on board while the vehicle is operating. FIG. 33 shows the learning processing routine of this on-board modification; this routine is executed by interruption at fixed intervals (for example, every second).

Referring to FIG. 33, first, in step 800, the following are read in: the learned weights stored in the storage unit of the electronic control unit 67; the training data used in the prior learning, that is, the training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters related to the secondary battery 60 within the preset ranges Rw, Rx, Ry, Rz; and the learned divided regions [Wi, Xj, Yk, Zl]. The learned weights are used as the initial values of the weights. Next, in step 801, the number Ki,j,k,l of nodes in the hidden layer immediately preceding the output layer used in the prior learning is read in for each learned divided region [Wi, Xj, Yk, Zl]. Next, in step 802, it is determined whether the additional learning flag has been set. When the additional learning flag has not been set, the routine proceeds to step 803.

In step 803, new input values x1, x2, x3, x4, that is, the air temperature, the temperature of the secondary battery 60, the discharge time of the secondary battery 60, and the discharge energy per unit time of the secondary battery 60, are acquired, and these new input values x1, x2, x3, x4, that is, the new values of the plurality of types of operating parameters related to the secondary battery 60, are stored in the storage unit of the electronic control unit 67.

Next, in step 804, it is determined whether the new input values x1, x2, x3, x4 lie within a learned divided region [Wi, Xj, Yk, Zl], that is, whether the newly acquired values of the plurality of types of operating parameters related to the secondary battery 60 lie within the preset ranges Rw, Rx, Ry, Rz. When they do, the routine proceeds to step 805, where the input values x1, x2, x3, x4, that is, the newly acquired values of the plurality of types of operating parameters related to the secondary battery 60, are input to the corresponding nodes of the input layer of the neural network of the learned divided region [Wi, Xj, Yk, Zl] to which these values belong, and, on the basis of the output value y output from the node of the output layer of the neural network and the training data obtained by actual measurement for the newly acquired values of the operating parameters related to the secondary battery 60, the weights of the neural network of that learned divided region [Wi, Xj, Yk, Zl] are further learned by the error backpropagation method so that the difference between the output value y and the training data becomes small.

On the other hand, when it is determined in step 804 that the new input values x1, x2, x3, x4 do not lie within any learned divided region [Wi, Xj, Yk, Zl], the routine proceeds to step 806. In step 806, the training data request flag is set, and the new input values xn acquired at this time are stored as new input values xn for additional learning. The processing cycle then ends.

Once the training data request flag has been set, as seen from the training data acquisition routine of FIG. 30, the degree of deterioration of the secondary battery 60 is detected when charging of the secondary battery 60 is completed, and this degree of deterioration is stored as training data for additional learning. The additional learning flag is then set. Once the additional learning flag has been set in step 806, the routine proceeds from step 802 to step 807 in the next processing cycle. In step 807, the new input values xn stored for additional learning and the training data stored for additional learning are read from the storage unit, and an unlearned region demarcated by the new input values x1, x2, x3, x4 stored for additional learning is set outside the preset ranges Rw, Rx, Ry, Rz. For example, when it is determined that the new input values x2, x3, x4 lie within the corresponding preset ranges Rx, Ry, Rz but the new input value x1 does not lie within the corresponding preset range Rw, then, denoting by Wa the range to which the new input value x1 belongs, an unlearned region [Wa, Xj, Yk, Zl] demarcated by the new input values x1, x2, x3, x4 is set. Likewise, when it is determined that the new input values x3, x4 lie within the corresponding preset ranges Ry, Rz but the new input values x1, x2 do not lie within the corresponding preset ranges Rw, Rx, then, denoting by Wa the range to which the new input value x1 belongs and by Xb the range to which the new input value x2 belongs, an unlearned region [Wa, Xb, Yk, Zl] demarcated by the new input values x1, x2, x3, x4 is set.

Next, in step 807, a new neural network for the unlearned region is created. After the new neural network has been created in step 807, the routine proceeds to step 805. In step 805, the weights of the new neural network created for the unlearned region are learned so that, over the unlearned region, the difference between the output value y and the training data stored for additional learning becomes small.
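The additional-learning flow of steps 802 to 807 can be sketched as follows. This is an illustrative sketch, not code from the patent: all names (`preset_ranges`, `unlearned_region`, the small one-hidden-layer network, the margin used to delimit a new range such as Wa) are assumptions, and an actual controller would use its own network sizes and stored measurements.

```python
import numpy as np

# Preset ranges Rw, Rx, Ry, Rz for the input values x1..x4 (illustrative).
preset_ranges = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]

def unlearned_region(x, margin=0.1):
    """Delimit the unlearned region: inputs inside their preset range keep
    that range (Xj, Yk, Zl); an out-of-range input gets a new range around
    its value (e.g. Wa for x1)."""
    region = []
    for xi, (lo, hi) in zip(x, preset_ranges):
        region.append((lo, hi) if lo <= xi <= hi
                      else (xi - margin, xi + margin))
    return region

def make_network(n_in=4, n_hidden=5, seed=0):
    """A small one-hidden-layer network created for the unlearned region."""
    rng = np.random.default_rng(seed)
    return {"W1": rng.normal(0.0, 0.5, (n_hidden, n_in)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0.0, 0.5, n_hidden), "b2": 0.0}

def forward(net, x):
    h = np.tanh(net["W1"] @ x + net["b1"])
    return float(net["W2"] @ h + net["b2"])

def train(net, X, targets, lr=0.05, epochs=2000):
    """Learn the weights so that the difference between the output value y
    and the stored training data becomes small (step 805)."""
    for _ in range(epochs):
        for x, t in zip(X, targets):
            h = np.tanh(net["W1"] @ x + net["b1"])
            err = float(net["W2"] @ h + net["b2"]) - t
            dh = err * net["W2"] * (1.0 - h**2)
            net["W2"] -= lr * err * h
            net["b2"] -= lr * err
            net["W1"] -= lr * np.outer(dh, x)
            net["b1"] -= lr * dh
    return net

# x1 lies outside Rw, so an unlearned region is set and a new network
# is created and trained on the stored training data (here, 0.8).
x_new = np.array([1.4, 0.5, 0.5, 0.5])
region = unlearned_region(x_new)
net = train(make_network(), [x_new], [0.8])
```

Under these assumptions the new network fits the stored training data for the unlearned region only; in the scheme described above, the previously learned networks continue to serve inputs inside the preset ranges.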

As described above, in an embodiment of the present invention, in a machine learning device that uses a neural network to output an output value corresponding to the value of an operating parameter of a machine, a range of values of a specific class of operating parameter of the machine is set in advance, and the number of nodes of the hidden layer of the neural network corresponding to that range is also set in advance. When a newly acquired value of the specific class of operating parameter of the machine falls outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased, and the weights of the neural network are learned using training data obtained by actual measurement for the newly acquired value of the operating parameter together with training data obtained by actual measurement for values of the operating parameter within the preset range. The neural network with the learned weights is then used to output an output value corresponding to the value of the specific class of operating parameter of the machine.

In this case, in an embodiment of the present invention, the machine learning device includes an electronic control unit. The electronic control unit includes: a parameter value acquisition unit that acquires the value of a specific class of operating parameter of the machine; a calculation unit that performs calculations using a neural network including an input layer, a hidden layer, and an output layer; and a storage unit. The value of the specific class of operating parameter of the machine is input to the input layer, and an output value that varies according to that value is output from the output layer. A range of values of the specific class of operating parameter is set in advance, the number of nodes of the hidden layer of the neural network corresponding to that range is set in advance, and training data obtained in advance by actual measurement for values of the operating parameter within the preset range is stored in the storage unit. When a newly acquired value of the operating parameter is within the preset range, the calculation unit learns the weights of the neural network, using training data obtained by actual measurement for the newly acquired value, so that the difference between the output value that varies according to the newly acquired value and that training data becomes small. When a value of the operating parameter newly acquired by the parameter value acquisition unit is outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired value, or with the increase in data density obtained by dividing the number of training data by the difference between the maximum and minimum values of the preset range of the operating parameter. The calculation unit then learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired value together with the previously obtained training data, so that the difference between the output values that vary according to values of the operating parameter both within and outside the preset range and the training data corresponding to those values becomes small. The neural network with the learned weights is then used to output an output value corresponding to the value of the specific class of operating parameter of the machine.
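The data-density rule in the passage above admits a small numeric sketch. The density is the number of training data divided by the difference between the maximum and minimum of the covered range of the operating parameter; the proportional-growth rule and all constants below are illustrative assumptions, not values from the patent.

```python
def data_density(n_train, lo, hi):
    # number of training data divided by (max - min) of the range
    return n_train / (hi - lo)

def hidden_nodes(base_nodes, base_density, density):
    # grow the last hidden layer in proportion to the density increase,
    # never dropping below the preset node count (assumed rule)
    return max(base_nodes, round(base_nodes * density / base_density))

# Preset range (0, 1): 20 measured points, 5 nodes preset.
base = data_density(20, 0.0, 1.0)
# New out-of-range measurements extend the covered range to (0, 1.5)
# and bring the total to 36 points.
new = data_density(36, 0.0, 1.5)
nodes = hidden_nodes(5, base, new)
```

With these numbers the density rises from 20 to 24 data per unit range, so the preset 5 nodes of the last hidden layer grow to 6 before the weights are relearned.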

Also in this case, in an embodiment of the present invention, the electronic control unit includes: a parameter value acquisition unit that acquires the value of a specific class of operating parameter of the machine; a calculation unit that performs calculations using a neural network including an input layer, a hidden layer, and an output layer; and a storage unit. The value of the specific class of operating parameter of the machine is input to the input layer, and a plurality of output values that vary according to that value are output from the output layer. A range of values of the specific class of operating parameter is set in advance, the number of nodes of the hidden layer of the neural network corresponding to that range is set in advance, and training data obtained in advance by actual measurement for values of the operating parameter within the preset range is stored in the storage unit. When a value of the operating parameter newly acquired by the parameter value acquisition unit is within the preset range, the calculation unit learns the weights of the neural network, using training data obtained by actual measurement for the newly acquired value, so that the difference between the plurality of output values that vary according to the value of the operating parameter and the training data corresponding to that value becomes small. When a value of the operating parameter newly acquired by the parameter value acquisition unit is outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired value, or with the increase in data density obtained by dividing the number of training data by the difference between the maximum and minimum values of the preset range. The calculation unit then learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values both within and outside the preset range together with the previously obtained training data, so that the difference between the plurality of output values that vary according to the value of the operating parameter and the training data corresponding to that value becomes small. The neural network with the learned weights is then used to output a plurality of output values corresponding to the value of the specific class of operating parameter of the machine.

On the other hand, in an embodiment of the present invention, in a machine learning device that uses a neural network to output an output value corresponding to the values of operating parameters of a machine, ranges of values of a plurality of classes of operating parameters of the machine are set in advance, and the number of nodes of the hidden layer of the neural network corresponding to those ranges is also set in advance. When newly acquired values of the plurality of classes of operating parameters fall outside the preset ranges, the number of nodes of the hidden layer preceding the output layer of the neural network is increased, and the weights of the neural network are learned using training data obtained by actual measurement for the newly acquired values together with training data obtained by actual measurement for values of the operating parameters within the preset ranges. The neural network with the learned weights is then used to output an output value corresponding to the values of the plurality of classes of operating parameters of the machine.
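As a minimal sketch of the condition stated above — relearning is triggered when at least one of the newly acquired values of the plural classes of operating parameters leaves its preset range — the following hypothetical helper (not part of the patent) returns the offending classes:

```python
def out_of_range(values, ranges):
    """Indices of operating-parameter classes whose newly acquired value
    lies outside its preset range; a non-empty result is the trigger for
    enlarging the last hidden layer and relearning the weights."""
    return [i for i, (v, (lo, hi)) in enumerate(zip(values, ranges))
            if not lo <= v <= hi]

ranges = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
in_range = out_of_range([0.2, 0.5, 0.9], ranges)   # all within ranges
retrain = out_of_range([1.3, 0.5, -0.2], ranges)   # classes 0 and 2 outside
```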

In this case, in an embodiment of the present invention, the machine learning device includes an electronic control unit. The electronic control unit includes: a parameter value acquisition unit that acquires the values of a plurality of classes of operating parameters of the machine; a calculation unit that performs calculations using a neural network including an input layer, a hidden layer, and an output layer; and a storage unit. The values of the plurality of classes of operating parameters of the machine are input to the input layer, and an output value that varies according to those values is output from the output layer. For each of the plurality of classes of operating parameters, a range of values is set in advance, the number of nodes of the hidden layer of the neural network corresponding to the ranges of values of the plurality of classes of operating parameters is set in advance, and training data obtained in advance by actual measurement for values of the plurality of classes of operating parameters, each within its preset range, is stored in the storage unit. When the values of the plurality of operating parameters newly acquired by the parameter value acquisition unit are each within the preset ranges, the calculation unit learns the weights of the neural network, using training data obtained by actual measurement for the newly acquired values, so that the difference between the output value that varies according to the values of the plurality of classes of operating parameters and the training data corresponding to those values becomes small. When the value of at least one class among the operating parameters newly acquired by the parameter value acquisition unit is outside its preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values, or with the increase in data density obtained by dividing the number of training data by the difference between the maximum and minimum values of the preset range. The calculation unit then learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values both within and outside the preset ranges together with the previously obtained training data, so that the difference between the output value that varies according to the values of the plurality of classes of operating parameters and the training data corresponding to those values becomes small. The neural network with the learned weights is then used to output an output value corresponding to the values of the plurality of classes of operating parameters of the machine.

Also in this case, in an embodiment of the present invention, the electronic control unit includes: a parameter value acquisition unit that acquires the values of a plurality of classes of operating parameters of the machine; a calculation unit that performs calculations using a neural network including an input layer, a hidden layer, and an output layer; and a storage unit. The values of the plurality of classes of operating parameters of the machine are input to the input layer, and a plurality of output values that vary according to those values are output from the output layer. For each of the plurality of classes of operating parameters, a range of values is set in advance, the number of nodes of the hidden layer of the neural network corresponding to the ranges of values of the plurality of classes of operating parameters is set in advance, and training data obtained in advance by actual measurement for values of the plurality of classes of operating parameters, each within its preset range, is stored in the storage unit. When the values of the plurality of operating parameters newly acquired by the parameter value acquisition unit are each within the preset ranges, the calculation unit learns the weights of the neural network, using training data obtained by actual measurement for the newly acquired values, so that the difference between the plurality of output values that vary according to the values of the plurality of classes of operating parameters and the training data corresponding to those values becomes small. When the value of at least one class among the operating parameters newly acquired by the parameter value acquisition unit is outside its preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values, or with the increase in data density obtained by dividing the number of training data by the difference between the maximum and minimum values of the preset range. The calculation unit then learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values both within and outside the preset ranges together with the previously obtained training data, so that the difference between the plurality of output values that vary according to the values of the plurality of classes of operating parameters and the training data corresponding to those values becomes small. The neural network with the learned weights is then used to output a plurality of output values corresponding to the values of the plurality of classes of operating parameters of the machine.

On the other hand, in an embodiment of the present invention, in a machine learning device that uses a neural network to output an output value corresponding to the values of operating parameters of a machine, ranges of values of a plurality of classes of operating parameters of the machine are set in advance, and a neural network corresponding to those ranges is formed in advance. When the value of at least one class among the newly acquired values of the plurality of classes of operating parameters falls outside its preset range, a new neural network is formed, and the weights of the new neural network are learned using training data obtained by actual measurement for the newly acquired values of the plurality of classes of operating parameters. The neural network with the learned weights is then used to output an output value corresponding to the values of the plurality of classes of operating parameters of the machine.

In this case, in an embodiment of the present invention, the machine learning device includes an electronic control unit. The electronic control unit includes: a parameter value acquisition unit that acquires the values of a plurality of classes of operating parameters of the machine; a calculation unit that performs calculations using a plurality of neural networks each including an input layer, a hidden layer, and an output layer; and a storage unit. The values of the plurality of classes of operating parameters of the machine are input to the input layer, and output values that vary according to those values are output from the corresponding output layer. For each of the plurality of classes of operating parameters, a range of values is set in advance, each preset range of values is divided into a plurality of sections, and a plurality of divided regions delimited by combinations of the divided sections of the values of the respective classes of operating parameters are set in advance. A neural network is created for each divided region, and the number of nodes of the hidden layer of each neural network is set in advance. Training data obtained in advance by actual measurement for the values of the plurality of classes of operating parameters is stored in the storage unit. When the values of the plurality of classes of operating parameters newly acquired by the parameter value acquisition unit are within the preset ranges, the calculation unit learns the weights of the neural network of the divided region to which the newly acquired values belong, using training data obtained by actual measurement for the newly acquired values, so that the difference between the output value that varies according to the values of the operating parameters and the training data corresponding to those values becomes small. When the value of at least one class among the operating parameters newly acquired by the parameter value acquisition unit is outside its preset range, a new region to which the value of that class belongs, delimited by a combination of the preset ranges of the values of the respective classes of operating parameters, is set, and a new neural network is created for the new region. Using training data obtained by actual measurement for the newly acquired values of the plurality of classes of operating parameters, the calculation unit learns the weights of the new neural network so that the difference between the output value that varies according to the values of the operating parameters and the training data corresponding to those values becomes small. Each neural network with its learned weights is then used to output an output value corresponding to the values of the operating parameters of the machine.
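The divided-region bookkeeping described above can be sketched as follows. The class name, the section boundaries, and the way a new region is keyed are all illustrative assumptions; the point is only the mapping from a combination of sections to its own network, plus creation of a new network when a value falls outside every preset range.

```python
import bisect

class RegionNets:
    def __init__(self, boundaries):
        # boundaries[k]: section edges of operating-parameter class k;
        # e.g. [0.0, 0.5, 1.0] divides the preset range into two sections.
        self.boundaries = boundaries
        self.nets = {}  # region key -> neural network for that region

    def region_of(self, values):
        """Tuple of section indices for in-range values, or None when some
        value lies outside its preset range."""
        idx = []
        for v, edges in zip(values, self.boundaries):
            if v < edges[0] or v > edges[-1]:
                return None
            idx.append(min(bisect.bisect_right(edges, v) - 1,
                           len(edges) - 2))
        return tuple(idx)

    def net_for(self, values, make_net):
        key = self.region_of(values)
        if key is None:
            # new region delimited around the out-of-range values;
            # a new network is created for it
            key = ("new",) + tuple(round(v, 1) for v in values)
        if key not in self.nets:
            self.nets[key] = make_net()
        return self.nets[key]

grid = RegionNets([[0.0, 0.5, 1.0], [0.0, 0.5, 1.0]])
inside = grid.region_of([0.2, 0.7])      # falls in an existing region
outside = grid.region_of([1.4, 0.2])     # outside the preset ranges
grid.net_for([0.2, 0.7], make_net=dict)
grid.net_for([1.4, 0.2], make_net=dict)  # creates a net for a new region
```

Here `make_net=dict` stands in for whatever constructor builds a network with the preset hidden-layer node count for that region.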

Reference Signs List

1 Internal combustion engine

14 Throttle valve opening sensor

23 NOX sensor

24 Atmospheric temperature sensor

30, 56, 67 Electronic control unit

50 Air conditioner main body

53, 65 Thermometer

54 Hygrometer

55 GPS

60 Secondary battery

53 Ammeter

64 Voltmeter.

Claims (9)

1. A machine learning device for an internal combustion engine, comprising an electronic control unit, the electronic control unit comprising:
a parameter value acquisition unit that acquires the value of a specific class of operating parameter of the internal combustion engine;
a calculation unit that performs calculations using a neural network including an input layer, a hidden layer, and an output layer; and
a storage unit,
wherein the value of the specific class of operating parameter of the internal combustion engine is input to the input layer, and an output value that varies according to the value of the specific class of operating parameter of the internal combustion engine is output from the output layer, and
wherein, in the machine learning device:
a range of values of the specific class of operating parameter of the internal combustion engine is set in advance, and the number of nodes of the hidden layer of the neural network corresponding to the range of values of the specific class of operating parameter of the internal combustion engine is set in advance;
training data obtained in advance by actual measurement for values of the specific class of operating parameter of the internal combustion engine within the preset range is stored in the storage unit;
when a newly acquired value of the specific class of operating parameter of the internal combustion engine is within the preset range, the weights of the neural network are learned by the calculation unit, using training data obtained by actual measurement for the newly acquired value, so that the difference between the output value that varies according to the newly acquired value and the training data obtained by actual measurement for the newly acquired value becomes small;
when a value of the specific class of operating parameter of the internal combustion engine newly acquired by the parameter value acquisition unit is outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with an increase in the number of training data obtained by actual measurement for the newly acquired value, or with an increase in data density obtained by dividing the number of training data by the difference between the maximum and minimum values of the preset range of values of the operating parameter, and the weights of the neural network are learned by the calculation unit, using the training data obtained by actual measurement for the newly acquired value together with the previously obtained training data, so that the difference between the output values that vary according to values of the specific class of operating parameter of the internal combustion engine within and outside the preset range and the training data corresponding to those values becomes small; and
the neural network with the learned weights is used to output an output value corresponding to the value of the specific class of operating parameter of the internal combustion engine.

2. A machine learning device for an internal combustion engine, comprising an electronic control unit, the electronic control unit comprising:
a parameter value acquisition unit that acquires the value of a specific class of operating parameter of the internal combustion engine;
a calculation unit that performs calculations using a neural network including an input layer, a hidden layer, and an output layer; and
a storage unit,
wherein the value of the specific class of operating parameter of the internal combustion engine is input to the input layer, and a plurality of output values that vary according to the value of the specific class of operating parameter of the internal combustion engine are output from the output layer, and
wherein, in the machine learning device:
a range of values of the specific class of operating parameter of the internal combustion engine is set in advance, and the number of nodes of the hidden layer of the neural network corresponding to the range of values of the specific class of operating parameter of the internal combustion engine is set in advance;
training data obtained in advance by actual measurement for values of the specific class of operating parameter of the internal combustion engine within the preset range is stored in the storage unit;
when the value of the operating parameter of the internal combustion engine newly acquired by the parameter value acquisition unit is within the preset range, the weights of the neural network are learned by the calculation unit, using training data obtained by actual measurement for the newly acquired value of the specific class of operating parameter, so that the difference between the plurality of output values that vary according to the value of the specific class of operating parameter and the training data corresponding to that value becomes small;
when a value of the specific class of operating parameter of the internal combustion engine newly acquired by the parameter value acquisition unit is outside the preset range, the number of nodes of the hidden layer preceding the output layer of the neural network is increased in accordance with an increase in the number of training data obtained by actual measurement for the newly acquired value, or with an increase in data density obtained by dividing the number of training data by the difference between the maximum and minimum values of the preset range, and the weights of the neural network are learned by the calculation unit, using the training data obtained by actual measurement for the newly acquired values within and outside the preset range together with the previously obtained training data, so that the difference between the plurality of output values that vary according to the value of the specific class of operating parameter and the training data corresponding to that value becomes small; and
the neural network with the learned weights is used to output a plurality of output values corresponding to the value of the specific class of operating parameter of the internal combustion engine.
training data by the difference between the maximum value and the minimum value representing a preset range corresponds to the increase in the neural network. The number of nodes in the previous hidden layer of the output layer increases, and the newly acquired values of the operating parameters of the specific category related to the internal combustion engine within the preset range and outside the preset range are used. The training data obtained by the actual measurement and the training data obtained in advance are used by the computing unit to make a plurality of output values that vary according to the values of the specific types of operating parameters related to the internal combustion engine to correspond to those related to the internal combustion engine. The weights of the neural network are learned so that the difference between the values of the operating parameters of the specific category of the training data becomes smaller, and the neural network that has learned the weights is used to output a plurality of outputs with respect to the values of the operating parameters of the specific category related to the above-mentioned internal combustion engine. value. 3.一种用于内燃机或二次电池的机器学习装置,具备电子控制单元,3. 
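The node-count rule shared by claims 1 and 2 can be sketched in a few lines: the last hidden layer grows with the number of stored training data, or with the data density over the preset parameter range. This is a minimal sketch; the constants `base` and `per_density` below are illustrative assumptions, not values from the patent.

```python
def last_hidden_nodes(n_train: int, x_min: float, x_max: float,
                      base: int = 7, per_density: float = 2.0) -> int:
    """Size the hidden layer immediately preceding the output layer.

    Data density is defined as in the claims: the number of training data
    divided by the width (max - min) of the preset parameter range.
    `base` and `per_density` are assumed constants for illustration.
    """
    density = n_train / (x_max - x_min)
    return base + int(per_density * density)


def nodes_after_new_sample(x_new: float, n_train: int,
                           x_min: float, x_max: float,
                           current_nodes: int) -> int:
    """Inside the preset range the architecture is kept and only the
    weights are relearned; outside it, the last hidden layer is enlarged
    along with the grown training set."""
    if x_min <= x_new <= x_max:
        return current_nodes
    return last_hidden_nodes(n_train + 1, x_min, x_max)


# Example: an engine-speed-like parameter preset to the range [0, 5],
# with 10 stored measurements.
print(last_hidden_nodes(10, 0.0, 5.0))                # 7 + int(2.0 * 2.0) = 11
print(nodes_after_new_sample(6.0, 14, 0.0, 5.0, 11))  # out of range -> 13
```

The key design point is that the widening step is tied to the data, not to the prediction error: more measurements per unit of parameter range justify more expressive capacity in the layer feeding the output.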
3. A machine learning device for an internal combustion engine or a secondary battery, comprising an electronic control unit, the electronic control unit comprising:

a parameter value acquisition unit that acquires values of a plurality of types of operating parameters of the internal combustion engine or secondary battery;

an arithmetic unit that performs computation using a neural network comprising an input layer, hidden layers, and an output layer; and

a storage unit,

wherein the values of the plurality of types of operating parameters of the internal combustion engine or secondary battery are input to the input layer, and an output value that changes in accordance with those values is output from the output layer;

in the machine learning device, for each of the plurality of types of operating parameters of the internal combustion engine or secondary battery, a range of values of the operating parameter of that type is set in advance, and the number of nodes of the hidden layers of the neural network corresponding to the ranges of values of the plurality of types of operating parameters is set in advance;

training data, obtained in advance by actual measurement for the values of the plurality of types of operating parameters and in which the value of each type of operating parameter lies within its preset range, are stored in the storage unit;

when the values of the plurality of operating parameters of the internal combustion engine or secondary battery newly acquired by the parameter value acquisition unit are each within the preset ranges, the arithmetic unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values, so that the difference between the output value, which changes in accordance with the values of the plurality of types of operating parameters, and the training data corresponding to those values becomes smaller;

when the value of at least one of the plurality of types of operating parameters newly acquired by the parameter value acquisition unit is outside its preset range, the number of nodes of the hidden layer immediately preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values, or in accordance with the increase in the data density obtained by dividing that number of training data by the difference between the maximum value and the minimum value of the preset range, and the arithmetic unit learns the weights of the neural network, using both the training data obtained by actual measurement for the newly acquired values inside and outside the preset ranges and the training data obtained in advance, so that the difference between the output value and the training data corresponding to the values of the plurality of types of operating parameters becomes smaller; and

the neural network with the learned weights is used to output an output value for the values of the plurality of types of operating parameters of the internal combustion engine or secondary battery.

4. A machine learning device for an internal combustion engine or an air conditioner, comprising an electronic control unit, the electronic control unit comprising:

a parameter value acquisition unit that acquires values of a plurality of types of operating parameters of the internal combustion engine or air conditioner;

an arithmetic unit that performs computation using a neural network comprising an input layer, hidden layers, and an output layer; and

a storage unit,

wherein the values of the plurality of types of operating parameters of the internal combustion engine or air conditioner are input to the input layer, and a plurality of output values that change in accordance with those values are output from the output layer;

in the machine learning device, for each of the plurality of types of operating parameters of the internal combustion engine or air conditioner, a range of values of the operating parameter of that type is set in advance, and the number of nodes of the hidden layers of the neural network corresponding to the ranges of values of the plurality of types of operating parameters is set in advance;

training data, obtained in advance by actual measurement for the values of the plurality of types of operating parameters and in which the value of each type of operating parameter lies within its preset range, are stored in the storage unit;

when the values of the plurality of operating parameters of the internal combustion engine or air conditioner newly acquired by the parameter value acquisition unit are each within the preset ranges, the arithmetic unit learns the weights of the neural network, using the training data obtained by actual measurement for the newly acquired values, so that the differences between the plurality of output values, which change in accordance with the values of the plurality of types of operating parameters, and the training data corresponding to those values become smaller;

when the value of at least one of the plurality of types of operating parameters newly acquired by the parameter value acquisition unit is outside its preset range, the number of nodes of the hidden layer immediately preceding the output layer of the neural network is increased in accordance with the increase in the number of training data obtained by actual measurement for the newly acquired values, or in accordance with the increase in the data density obtained by dividing that number of training data by the difference between the maximum value and the minimum value of the preset range, and the arithmetic unit learns the weights of the neural network, using both the training data obtained by actual measurement for the newly acquired values inside and outside the preset ranges and the training data obtained in advance, so that the differences between the plurality of output values and the training data corresponding to the values of the plurality of types of operating parameters become smaller; and

the neural network with the learned weights is used to output the plurality of output values for the values of the plurality of types of operating parameters of the internal combustion engine or air conditioner.

5. A machine learning device for an internal combustion engine, comprising an electronic control unit, the electronic control unit comprising:

a parameter value acquisition unit that acquires values of a plurality of types of operating parameters of the internal combustion engine;

an arithmetic unit that performs computation using a plurality of neural networks each comprising an input layer, hidden layers, and an output layer; and

a storage unit,

wherein the values of the plurality of types of operating parameters of the internal combustion engine are input to the input layer, and an output value that changes in accordance with those values is output from the corresponding output layer;

in the machine learning device, for each of the plurality of types of operating parameters of the internal combustion engine, a range of values of the operating parameter of that type is set in advance, each preset range is divided into a plurality of sub-ranges, and a plurality of divided regions, each demarcated by a combination of the divided sub-ranges of the values of the operating parameters of the respective types, are set in advance;

a neural network is created for each divided region, and the number of nodes of the hidden layers of each neural network is set in advance;

training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters are stored in the storage unit;

when the values of the plurality of types of operating parameters of the internal combustion engine newly acquired by the parameter value acquisition unit are within the preset ranges, the arithmetic unit learns the weights of the neural network of the divided region to which the newly acquired values belong, using the training data obtained by actual measurement for the newly acquired values, so that the difference between the output value, which changes in accordance with the values of the plurality of types of operating parameters, and the training data corresponding to those values becomes smaller;

when the value of at least one of the plurality of types of operating parameters newly acquired by the parameter value acquisition unit is outside its preset range, a new region to which the value of that at least one type belongs, demarcated by a combination of the preset ranges of the values of the operating parameters of the respective types, is set, a new neural network is created for the new region, and the arithmetic unit learns the weights of the new neural network, using the training data obtained by actual measurement for the newly acquired values, so that the difference between the output value, which changes in accordance with the values of the plurality of types of operating parameters, and the training data corresponding to those values becomes smaller; and

the neural networks with the learned weights are used to output an output value for the values of the operating parameters of the internal combustion engine.
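The region-partitioned scheme of claim 5 amounts to keeping a dictionary of small networks keyed by the sub-range each parameter value falls into, and creating a fresh entry when a sample lands outside the preset grid. A minimal sketch assuming NumPy, with the network object and the weight-learning step left as placeholders rather than the patent's implementation:

```python
import numpy as np

class RegionNets:
    """One network per divided region (a sketch of claim 5).

    `edges` holds, per operating parameter, the boundaries that divide its
    preset range into sub-ranges.
    """

    def __init__(self, edges):
        self.edges = [np.asarray(e, dtype=float) for e in edges]
        self.nets = {}  # region key -> network

    def region_key(self, values):
        # np.digitize maps values below the first edge to bin 0 and values
        # at or above the last edge to bin len(edges): exactly the indices
        # that name new regions adjacent to the preset grid.
        return tuple(int(np.digitize(v, e)) for v, e in zip(values, self.edges))

    def fit_sample(self, values, target):
        key = self.region_key(values)
        if key not in self.nets:
            # a value outside the grid (or in an untouched cell):
            # create a new network for the new region
            self.nets[key] = {"hidden_nodes": 7, "W": None}
        # ... learn self.nets[key] weights toward `target` here ...
        return key

# Two parameters: the first preset to [0, 2] split at 1, the second to [10, 20].
nets = RegionNets([[0.0, 1.0, 2.0], [10.0, 20.0]])
print(nets.region_key((0.5, 15.0)))  # (1, 1): inside the preset grid
print(nets.region_key((2.5, 5.0)))   # (3, 0): outside on both axes
```

The dictionary-of-networks layout keeps each model small and lets learning for one region leave the other regions' weights untouched, which is the point of the partitioning in the claim.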
6. The machine learning device for an internal combustion engine according to claim 5, wherein the operating parameters of the internal combustion engine consist of two types of operating parameters, and wherein, when the value of one of the two types of operating parameters newly acquired by the parameter value acquisition unit is outside its preset range and the value of the other type is within its preset range, the new region is set adjacent to the divided region to which the value of the other type of operating parameter belongs, lying outside the preset range with respect to the value of the one type of operating parameter and within the same range as that divided region with respect to the value of the other type of operating parameter.

7. The machine learning device for an internal combustion engine according to claim 5, wherein the number of nodes of the hidden layer immediately preceding the output layer of the new neural network is set based on the average of the numbers of nodes of the hidden layers immediately preceding the output layers of the neural networks of the divided regions adjacent to the new region, excluding any adjacent divided region for which that number of nodes has not been set.
8. The machine learning device for an internal combustion engine according to claim 7, wherein the number of nodes of the hidden layer immediately preceding the output layer of the new neural network of the new region is increased in accordance with the increase in the number of values of the operating parameters in the new region that are input to the input layer.

9. A machine learning device for an air conditioner or a secondary battery, comprising an electronic control unit, the electronic control unit comprising:

a parameter value acquisition unit that acquires values of a plurality of types of operating parameters of the air conditioner or secondary battery;

an arithmetic unit that performs computation using a plurality of neural networks each comprising an input layer, hidden layers, and an output layer; and

a storage unit,

wherein the values of the plurality of types of operating parameters of the air conditioner or secondary battery are input to the input layer, and an output value that changes in accordance with those values is output from the corresponding output layer;

in the machine learning device, for each of the plurality of types of operating parameters of the air conditioner or secondary battery, a range of values of the operating parameter of that type is set in advance, each preset range is divided into a plurality of sub-ranges, and a plurality of divided regions, each demarcated by a combination of the divided sub-ranges of the values of the operating parameters of the respective types, are set in advance;

a neural network is created for each divided region, and the number of nodes of the hidden layers of each neural network is set in advance;

training data obtained in advance by actual measurement for the values of the plurality of types of operating parameters are stored in the storage unit;

when the values of the plurality of types of operating parameters of the air conditioner or secondary battery newly acquired by the parameter value acquisition unit are within the preset ranges, the arithmetic unit learns the weights of the neural network of the divided region to which the newly acquired values belong, using the training data obtained by actual measurement for the newly acquired values, so that the difference between the output value, which changes in accordance with the values of the plurality of types of operating parameters, and the training data corresponding to those values becomes smaller;

when the value of at least one of the plurality of types of operating parameters newly acquired by the parameter value acquisition unit is outside its preset range, a new region to which the value of that at least one type belongs, demarcated by a combination of the preset ranges of the values of the operating parameters of the respective types, is set, a new neural network is created for the new region, and the arithmetic unit learns the weights of the new neural network, using the training data obtained by actual measurement for the newly acquired values, so that the difference between the output value and the training data corresponding to those values becomes smaller; and

the neural networks with the learned weights are used to output an output value for the values of the operating parameters of the air conditioner or secondary battery.
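Claims 7 and 8 together fix how a newly created network is sized: start from the mean node count of the adjacent divided regions that already have a network, then grow with the number of samples that accumulate in the new region. A sketch under those rules, where `None` marks an adjacent region with no node count set and `per_sample` is an assumed growth rate, not a value from the patent:

```python
def initial_new_region_nodes(adjacent_node_counts):
    """Mean of the neighbours' last-hidden-layer node counts, skipping
    neighbours whose count is not set (the claim-7 rule)."""
    known = [n for n in adjacent_node_counts if n is not None]
    return round(sum(known) / len(known))


def grown_new_region_nodes(initial_nodes, n_inputs_in_region, per_sample=0.5):
    """Enlarge the layer as more operating-parameter values arrive in the
    new region (the claim-8 rule); per_sample is an illustrative assumption."""
    return initial_nodes + int(per_sample * n_inputs_in_region)


print(initial_new_region_nodes([8, None, 12]))  # (8 + 12) / 2 = 10
print(grown_new_region_nodes(10, 6))            # 10 + int(0.5 * 6) = 13
```

Seeding from neighbours gives the new, data-poor region a capacity consistent with the surrounding regions, and the growth step then adapts it as real measurements accumulate there.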
CN201980001105.XA 2018-02-05 2019-02-05 machine learning device Expired - Fee Related CN110352297B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
JP2018-018425 2018-02-05
JP2018018425 2018-02-05
JP2018216766A JP2019135392A (en) 2018-02-05 2018-11-19 Control device for internal combustion engine and device for outputting output value
JP2018-216850 2018-11-19
JP2018-216766 2018-11-19
JP2018216850A JP6501032B1 (en) 2018-11-19 2018-11-19 Machine learning device
PCT/JP2019/004080 WO2019151536A1 (en) 2018-02-05 2019-02-05 Machine learning device

Publications (2)

Publication Number Publication Date
CN110352297A CN110352297A (en) 2019-10-18
CN110352297B true CN110352297B (en) 2020-09-15

Family

ID=67910416

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980001105.XA Expired - Fee Related CN110352297B (en) 2018-02-05 2019-02-05 machine learning device

Country Status (3)

Country Link
US (1) US10853727B2 (en)
CN (1) CN110352297B (en)
DE (1) DE112019000020B4 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6852141B2 (en) * 2018-11-29 2021-03-31 キヤノン株式会社 Information processing device, imaging device, control method of information processing device, and program
JP6593560B1 (en) * 2019-02-15 2019-10-23 トヨタ自動車株式会社 Internal combustion engine misfire detection device, internal combustion engine misfire detection system, data analysis device, and internal combustion engine control device
JP6849028B2 (en) * 2019-08-23 2021-03-24 ダイキン工業株式会社 Air conditioning control system, air conditioner, and machine learning device
US11427210B2 (en) * 2019-09-13 2022-08-30 Toyota Research Institute, Inc. Systems and methods for predicting the trajectory of an object with the aid of a location-specific latent map
KR102726697B1 (en) * 2019-12-11 2024-11-06 현대자동차주식회사 System and Method for providing driving information based on big data
US11459962B2 * 2020-03-02 2022-10-04 SparkCognition, Inc. Electronic valve control
US20230029746A1 (en) * 2021-08-02 2023-02-02 Prezerv Technologies Mapping subsurface infrastructure
KR20230045490A (en) * 2021-09-28 2023-04-04 에스케이플래닛 주식회사 Apparatus for providing traffic information based on driving noise and method therefor
CN115144301B * 2022-06-29 2024-12-03 Xiamen University A method for automatic identification of scale alignment in static weighing calibration of glass float

Citations (6)

Publication number Priority date Publication date Assignee Title
US6098012A (en) * 1995-02-13 2000-08-01 Daimlerchrysler Corporation Neural network based transient fuel control method
CN1981123A * 2004-06-25 2007-06-13 FEV Motorentechnik GmbH Motor vehicle control device provided with a neuronal network
JP2007299366A (en) * 2006-01-31 2007-11-15 Sony Corp Learning system and method, recognition device and method, creation device and method, recognition and creation device and method, and program
CN101630144A (en) * 2009-08-18 2010-01-20 湖南大学 Self-learning inverse model control method of electronic throttle
JP2011132915A (en) * 2009-12-25 2011-07-07 Honda Motor Co Ltd Device for estimating physical quantity
JP2012112277A (en) * 2010-11-24 2012-06-14 Honda Motor Co Ltd Control device of internal combustion engine

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US5093899A (en) 1988-09-17 1992-03-03 Sony Corporation Neural network with normalized learning constant for high-speed stable learning
JP2606317B2 (en) 1988-09-20 1997-04-30 ソニー株式会社 Learning processing device
JPH0738186B2 (en) 1989-03-13 1995-04-26 シャープ株式会社 Self-expanding neural network
US5331550A (en) * 1991-03-05 1994-07-19 E. I. Du Pont De Nemours And Company Application of neural networks as an aid in medical diagnosis and general anomaly detection
JPH1182137A (en) 1998-02-09 1999-03-26 Matsushita Electric Ind Co Ltd Parameter estimation device
US6269351B1 (en) 1999-03-31 2001-07-31 Dryken Technologies, Inc. Method and system for training an artificial neural network
US7483868B2 (en) 2002-04-19 2009-01-27 Computer Associates Think, Inc. Automatic neural-net model generation and maintenance
US7917333B2 (en) * 2008-08-20 2011-03-29 Caterpillar Inc. Virtual sensor network (VSN) based control system and method
US9400955B2 (en) * 2013-12-13 2016-07-26 Amazon Technologies, Inc. Reducing dynamic range of low-rank decomposition matrices
JP5899272B2 (en) 2014-06-19 2016-04-06 ヤフー株式会社 Calculation device, calculation method, and calculation program
US20190073580A1 (en) * 2017-09-01 2019-03-07 Facebook, Inc. Sparse Neural Network Modeling Infrastructure
US10634081B2 (en) 2018-02-05 2020-04-28 Toyota Jidosha Kabushiki Kaisha Control device of internal combustion engine

Also Published As

Publication number Publication date
US20200234136A1 (en) 2020-07-23
CN110352297A (en) 2019-10-18
DE112019000020T5 (en) 2019-10-02
DE112019000020B4 (en) 2020-10-15
US10853727B2 (en) 2020-12-01

Similar Documents

Publication Publication Date Title
CN110352297B (en) Machine learning device
CN110118130B (en) Control device for internal combustion engine
US6755078B2 (en) Methods and apparatus for estimating the temperature of an exhaust gas recirculation valve coil
CN111016920B (en) Control device and control method of drive device for vehicle, vehicle electronic control unit, learned model and machine learning system
US9989029B2 (en) Method and device for determining a charge air mass flow rate
US7174250B2 (en) Method for determining an exhaust gas recirculation quantity for an internal combustion engine provided with exhaust gas recirculation
US10825267B2 (en) Control system of internal combustion engine, electronic control unit, server, and control method of internal combustion engine
CN111476345A (en) Machine learning device
US10947909B2 (en) Control device of internal combustion engine and control method of same and learning model for controlling internal combustion engine and learning method of same
CN112412649A (en) Vehicle control device, vehicle learning system, and vehicle control method
CN108571391A (en) The control device and control method of internal combustion engine
CN113392574A (en) Gasoline engine secondary charging model air inflow estimation method based on neural network model
CN110005537B (en) Control device for internal combustion engine
JP6501032B1 (en) Machine learning device
CN109684704B (en) An online calibration method of engine intake air flow based on velocity density model
WO2019151536A1 (en) Machine learning device
JP2020197165A (en) Abnormality detection system of exhaust gas recirculation system
JP2021085335A (en) Internal combustion engine control device
JP2019143477A (en) Control device of internal combustion engine
JP5488520B2 (en) Control device for internal combustion engine
Sidorow et al. Model based fault diagnosis of the intake and exhaust path of turbocharged diesel engines
JP2019148243A (en) Control device of internal combustion engine
JP5601232B2 (en) Control device for internal combustion engine
JP4429355B2 (en) Recirculation exhaust gas flow rate calculation device
JP2022012826A (en) Machine learning system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200915