
WO2025169305A1 - Accident inference device, insurance fee examination device, on-vehicle equipment, terminal device, information processing system, accident inference method, accident inference program, AI model generation method, and AI generation program - Google Patents


Info

Publication number
WO2025169305A1
Authority
WO
WIPO (PCT)
Prior art keywords
accident
information
miss
vehicle
near miss
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/JP2024/003950
Other languages
French (fr)
Japanese (ja)
Inventor
君孝 村下
和真 橋本
徹洋 加藤
雅樹 瓜生
英行 上出
章人 岩田
真一 塩津
渉 長谷川
友理 坂口
啓輔 山埜
康貴 岡田
ともえ 大築
幹 小島
竜介 関
竜也 渡邊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Denso Ten Ltd
Original Assignee
Denso Ten Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Denso Ten Ltd filed Critical Denso Ten Ltd
Priority to PCT/JP2024/003950 priority Critical patent/WO2025169305A1/en
Publication of WO2025169305A1 publication Critical patent/WO2025169305A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q40/00Finance; Insurance; Tax strategies; Processing of corporate or income taxes
    • G06Q40/08Insurance
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems

Definitions

  • the disclosed embodiments relate to an accident estimation device, an insurance premium review device, an on-board device, a terminal device, an information processing system, an accident estimation method, an accident estimation program, an AI model generation method, and an AI generation program.
  • One aspect of the embodiment has been made in consideration of the above, and aims to provide an accident estimation device, insurance premium review device, in-vehicle device, terminal device, information processing system, accident estimation method, accident estimation program, AI model generation method, and AI generation program that can contribute to improving the effectiveness of traffic safety education.
  • An accident estimation device includes a controller.
  • When a near miss occurs, that is, an incident in which something dangerous happened but fortunately did not result in an accident, the controller estimates the accident that is likely to occur in that near-miss situation based on the circumstances. The controller outputs accident information related to the estimated accident.
  • the accident estimation device, insurance premium review device, in-vehicle device, terminal device, information processing system, accident estimation method, accident estimation program, AI model generation method, and AI generation program estimate and generate information on accidents that are feared to occur in response to near misses that involve the individual. This allows for the notification of information that is relevant to the individual, thereby contributing to improving the effectiveness of traffic safety education.
  • FIG. 1 is an explanatory diagram showing the configuration of an accident estimation device and a vehicle according to an embodiment.
  • FIG. 2 is an explanatory diagram showing a method for generating an accident estimation AI model according to the embodiment.
  • FIG. 3 is an explanatory diagram of a method for creating training data according to the embodiment.
  • FIG. 4 is an explanatory diagram illustrating the configuration of the accident information generating unit according to the embodiment.
  • FIG. 5 is an explanatory diagram illustrating an example of a table stored in the video DB according to the embodiment.
  • FIG. 6 is an explanatory diagram illustrating an example of a table stored in the video DB according to the embodiment.
  • FIG. 7 is an explanatory diagram illustrating an example of a table stored in the video DB according to the embodiment.
  • FIG. 8 is an explanatory diagram illustrating an example of a table stored in the accident DB according to the embodiment.
  • FIG. 9 is an explanatory diagram showing the configuration of an accident information generation unit equipped with AI according to the embodiment.
  • FIG. 10 is an explanatory diagram illustrating an example of a training dataset for the video information generation AI model according to the embodiment.
  • FIG. 11 is an explanatory diagram showing an example of a learning method for a video information generation AI model according to an embodiment.
  • FIG. 12 is an explanatory diagram showing an example of the operation of the video information generation AI model according to the embodiment.
  • FIG. 13 is an explanatory diagram illustrating an example of a learning dataset for the damage information generation AI model according to the embodiment.
  • FIG. 14 is an explanatory diagram showing an example of a learning method for a damage information generation AI model according to an embodiment.
  • FIG. 15 is an explanatory diagram illustrating an example of the operation of the damage information generation AI model according to the embodiment.
  • FIG. 16 is a flowchart showing a process executed by the controller of the accident estimation device according to the embodiment.
  • FIG. 17 is a flowchart showing a process executed by the controller of the accident estimation device according to the embodiment.
  • FIG. 18 is an explanatory diagram of an information processing system according to an embodiment.
  • the accident estimation device includes a computer, and the computer executes an accident estimation program stored in a memory to realize the accident estimation method according to the embodiment.
  • the accident estimation device is a device that, when a near-miss accident (hereinafter referred to as a "near miss") that is likely to develop into an accident occurs in a moving vehicle, estimates the accident that is likely to occur in the circumstances of the near miss and outputs accident information related to that hypothetical accident, i.e., hypothetical accident information based on the near miss.
  • the accident estimation device estimates the accident that is likely to occur based on the circumstances of the near miss.
  • the accident estimation device can output and provide accident information about accidents that are likely to occur as a result of that near miss to drivers who have actually experienced the near miss, raising their awareness of the danger of accidents.
  • Drivers are therefore provided with accident information that is relevant to them, and the accident estimation device can contribute to improving the effectiveness of traffic safety education.
  • FIG. 1 is an explanatory diagram showing the configuration of the accident estimation device 1 and a vehicle 2 according to an embodiment. First, the configuration of the vehicle 2 according to the embodiment will be described.
  • a drive recorder 20 having the ability to communicate with external devices is installed in the vehicle 2.
  • the drive recorder 20 is connected to the accident estimation device 1 so that it can communicate information with the accident estimation device 1 via a communication network N such as the Internet.
  • the drive recorder 20 is also connected to a steering angle sensor 22 and emotion sensor 23 installed in the vehicle 2 via an in-vehicle LAN (Local Area Network).
  • the drive recorder 20 is further equipped with an acceleration sensor 21, an on-board camera 24, an on-board microphone 25, a flash memory 26, a controller 27, and a communication unit 28.
  • the controller 27 acquires vehicle steering angle information detected by the steering angle sensor 22 and driver emotion information detected by the emotion sensor 23.
  • the controller 27 also acquires vehicle acceleration detected by the acceleration sensor 21, image information captured by the onboard camera 24, and audio information collected by the onboard microphone 25.
  • the controller 27 stores the image information captured by the onboard camera 24 and the audio information collected by the onboard microphone 25 in the flash memory 26.
  • the controller 27 also reads out the image information and audio information stored in the flash memory 26, uses them for various processes, and transmits them to an external device such as the accident estimation device 1 via the communication unit 28.
  • the acceleration sensor 21 is a sensor that detects the acceleration of the vehicle.
  • it consists of an elastic body and a mass body, and detects acceleration by measuring the force applied to the mass body due to acceleration as the displacement of the elastic body. It can be configured as a frequency change type, piezoelectric type, piezo-resistive type, capacitance type, or other acceleration sensor.
  • the steering angle sensor 22 is a sensor that detects the rotation state of the steering wheel of the vehicle 2.
  • the emotion sensor 23 is a sensor that detects biometric information to estimate the driver's emotions, such as a heart rate sensor or brain wave sensor, and emotions are estimated based on the brain waves and heart rate detected by these sensors.
  • the heart rate sensor is provided on the steering wheel or other part of the vehicle 2. Note that a method of estimating emotions from the driver's facial information (facial expression) using artificial intelligence or the like can also be applied.
  • the on-board camera 24 can be used as a sensor to acquire the driver's facial information.
  • the onboard camera 24 includes an interior imaging device that captures video from within the cabin of the vehicle 2, and an exterior imaging device that captures video from around the vehicle 2.
  • the onboard microphone 25 is a sound collection device that collects audio from within the cabin of the vehicle 2.
  • the flash memory 26 is a recording device that stores the video from within the cabin captured by the onboard camera 24 and the audio from within the cabin collected by the onboard microphone 25, as well as the video from around the vehicle 2 captured by the onboard camera 24 and the audio from around the vehicle 2 collected by the onboard microphone 25.
  • the controller 27 includes a microcomputer with a CPU (Central Processing Unit), ROM (Read Only Memory), RAM (Random Access Memory), and various other circuits.
  • the controller 27 is equipped with a near-miss detection unit 29 that functions when the CPU executes a program stored in the ROM using the RAM as a working area.
  • the controller 27 may be partially or entirely composed of hardware such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array).
  • the communication unit 28 is a communication interface that communicates information with the accident estimation device 1 via the communication network N.
  • the acceleration sensor 21 detects acceleration in the longitudinal (front-rear), lateral (left-right), and vertical (up-down) directions of the vehicle 2, with the direction of travel taken as the longitudinal axis, and outputs the results to the controller 27.
  • the onboard camera 24 outputs images of the interior of the vehicle 2 and images of the surroundings of the vehicle 2 to the controller 27.
  • the onboard microphone 25 picks up audio from within the interior of the vehicle 2 and outputs the audio to the controller 27.
  • the controller 27 may also store the steering wheel rotation angle, rotation speed, and rotation acceleration detected by the steering angle sensor 22, as well as emotion information (heart rate information) detected by the emotion sensor 23, in the flash memory 26, in association with the date and time.
  • the lower threshold of acceleration used to determine whether a near miss has occurred is the upper limit (with an appropriate offset) of the acceleration that can occur while the vehicle 2 is being driven safely, and the upper threshold is the lower limit (with an appropriate offset) of the acceleration that occurs when an accident takes place; both are set based on experiments during design and development, for example.
  • the deployment status of vehicle 2's airbags may also be used to determine whether a near miss has occurred (if the airbags are deployed, it is determined that an accident has occurred but not a near miss).
  • the near-miss detection unit 29 determines that a near-miss has occurred when the rotational speed and rotational acceleration of the steering wheel exceed the corresponding rotational speed threshold and rotational acceleration threshold, respectively.
  • When a near miss occurs, the driver often turns the steering wheel abruptly, so the near-miss detection unit 29 determines that a near miss has occurred when the rotational speed and rotational acceleration of the steering wheel are greater than their thresholds.
  • the rotational speed threshold and rotational acceleration threshold are set to values that provide good judgment accuracy, for example, based on experiments during design and development.
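To make the two detection criteria above concrete, the logic can be sketched in Python as follows. All numeric thresholds and the function name `is_near_miss` are illustrative assumptions; the embodiment determines the actual values experimentally during design and development.

```python
# Sketch of the near-miss detection logic described above.
# Threshold values are illustrative placeholders, not values from the embodiment.

ACCEL_LOWER = 0.4      # g: upper bound of safe-driving acceleration, plus offset (assumed)
ACCEL_UPPER = 2.0      # g: lower bound of accident-level acceleration, minus offset (assumed)
STEER_SPEED_TH = 180.0  # deg/s: steering rotational speed threshold (assumed)
STEER_ACCEL_TH = 720.0  # deg/s^2: steering rotational acceleration threshold (assumed)

def is_near_miss(accel_g, steer_speed, steer_accel, airbag_deployed=False):
    """Return True when the sensor readings indicate a near miss.

    A near miss is flagged when acceleration falls between the safe-driving
    band and the accident band, or when the steering wheel is turned with
    both speed and acceleration above their thresholds. Airbag deployment
    indicates an actual accident rather than a near miss.
    """
    if airbag_deployed:
        return False  # an accident occurred, not a near miss
    accel_event = ACCEL_LOWER < accel_g < ACCEL_UPPER
    steer_event = steer_speed > STEER_SPEED_TH and steer_accel > STEER_ACCEL_TH
    return accel_event or steer_event
```

Either criterion alone triggers detection, mirroring the text: the acceleration band catches sudden braking that stops short of a collision, while the steering criterion catches abrupt evasive turns.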
  • drivers who have a high frequency of near misses are predicted to have a high probability of causing an accident.
  • drivers who have a high frequency of near misses that could lead to accidents causing major damage are predicted to have a high probability of causing a major accident; in reality, however, this near-miss information is not used to calculate insurance premiums.
  • the accident estimation device 1 outputs accident type information indicating the type of accident that is feared to occur as a result of a near miss, and hypothetical accident information indicating the accident event that is feared to occur as a result of a near miss.
  • When the near-miss detection unit 29 determines that a near miss has occurred, it reads from the flash memory 26 the date and time data and location data of the near miss, as well as the video and audio recorded a predetermined time before and after that date and time.
  • the near-miss detection unit 29 reads from the flash memory 26 the acceleration data detected by the acceleration sensor 21, the steering speed data detected by the steering angle sensor 22, and the driver's emotion data detected by the emotion sensor 23.
  • the near-miss detection unit 29 outputs these various data when a near-miss occurs to the communication unit 28.
  • the communication unit 28 transmits these various data when a near-miss occurs, which are input from the controller 27, to the accident estimation device 1 as near-miss information.
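The near-miss information assembled and transmitted here can be pictured as a simple record. The class and field names below are illustrative assumptions; the embodiment specifies only the kinds of data carried (date and time, location, video, audio, and the three sensor streams).

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class NearMissInfo:
    """Sketch of the near-miss record sent by the drive recorder 20
    to the accident estimation device 1. Field names are assumptions."""
    timestamp: str                 # date and time of the near miss
    location: Tuple[float, float]  # (latitude, longitude) of the near miss
    video_clip: bytes = b""        # on-board camera 24 footage around the event
    audio_clip: bytes = b""        # on-board microphone 25 audio around the event
    acceleration: List[float] = field(default_factory=list)    # acceleration sensor 21
    steering_speed: List[float] = field(default_factory=list)  # steering angle sensor 22
    emotion: List[float] = field(default_factory=list)         # emotion sensor 23
```

Packaging the sensor streams together with the audiovisual clips lets the accident estimation model 13 receive one self-describing unit per detected near miss.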
  • the controller 10 includes a microcomputer with a CPU, ROM, RAM, etc., and various circuits. It is equipped with an accident estimation model 13 that functions when the CPU executes a program stored in the ROM using the RAM as a work area, and an accident information generation unit 14.
  • the controller 10 may be configured in part or in whole using hardware such as an ASIC or FPGA.
  • the accident DB 12 is an information storage device such as data flash memory, and stores information about accidents that have occurred in the past. An example of the information stored in the accident DB 12 will be described later with reference to FIG. 4.
  • When the communication unit 11 receives near-miss information from the vehicle 2, it outputs the received near-miss information to the accident estimation model 13 of the controller 10.
  • The accident estimation model 13 is a model that, when near-miss information is input, outputs an accident scene that is likely to occur in the near-miss situation corresponding to that near-miss information.
  • The accident estimation model 13 can be realized by a matching process using a database (accident type database) made up of accident scene information (information that identifies the type of accident) that uses various items from the near-miss information as parameters; however, because accident scenes come in a wide variety of patterns (types), it is more realistic to realize it using an AI (Artificial Intelligence) model.
  • This AI model is an AI model that has been machine-learned to estimate accident scenes based on input near-miss information.
  • accident estimation model 13 using an AI model will be referred to as accident estimation AI model 13A.
  • the circumstances surrounding the accident are estimated based on the relative speed with respect to the object struck, the direction of impact (direction relative to the vehicle (e.g., the forward direction of the vehicle)), and the point of impact (location) immediately before the accident (collision, etc.).
  • the circumstances surrounding the accident also change depending on the environment around the accident site; for example, accident footage can vary greatly depending on the surrounding scenery.
  • the accident estimation AI model 13A can make more detailed estimations.
  • the data used for estimation must be of an appropriate type and with an appropriate level of accuracy, taking into account the performance of the device (CPU, etc.) and the AI learning time. For this reason, and to make the explanation easier to understand, this embodiment will explain using an example in which the data items used for estimation are limited to items with a large impact. However, even if other items are used, the accident situation can be estimated using similar processing.
  • the accident estimation AI model 13A is a model that estimates the accident situation based on near-miss information. It estimates the accident situation that would occur if appropriate evasive action were not taken in the event of a near-miss, based on the similarity between the situation immediately before the near-miss information was generated (before any action to avoid the accident was taken) and the situation immediately before the accident occurred (the amount of time before the action to avoid the accident was taken).
  • the input data for the learning data is "status data on vehicle driving, etc. immediately before the accident," and the correct answer data is "accident result (accident situation)." Therefore, the input data for the learning data is vehicle status data for an appropriate time period (set based on experiments, etc.) ending an appropriate time before the accident occurs (also set based on experiments, etc.).
  • this time period will be referred to as the estimated data time period.
  • At inference time, the input data is "operational state data before the evasive action in the near miss"; just as during learning, this is vehicle state data from the estimated data time period (an appropriate time period, set based on experiments, etc., ending an appropriate time before the near miss occurs).
  • the near-miss information includes the detection results of the acceleration sensor 21 (acceleration sensor information) during the estimated data time period when the near-miss occurred.
  • the near-miss information also includes the detection results of the steering angle sensor 22 (steering angle sensor information) during the estimated data time period.
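The "estimated data time period" defined above, a window of fixed length ending a fixed lead time before the event, can be sketched as a simple extraction over a time-stamped sensor log. The function name and parameters are illustrative assumptions; the embodiment sets both durations experimentally.

```python
def extract_estimation_window(samples, event_time, lead_time, window_length):
    """Extract the 'estimated data time period' from a time-stamped log.

    samples:       list of (timestamp_seconds, value) pairs from a sensor
    event_time:    time at which the accident or near miss occurred
    lead_time:     how long before the event the window ends (set experimentally)
    window_length: duration of the window itself (set experimentally)

    Returns the samples falling inside the closed interval
    [event_time - lead_time - window_length, event_time - lead_time].
    """
    end = event_time - lead_time
    start = end - window_length
    return [(t, v) for (t, v) in samples if start <= t <= end]
```

The same window definition is applied both to accident records when building training data and to near-miss records at estimation time, which is what makes the two comparable.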
  • FIG. 2 is an explanatory diagram showing a method for generating the accident estimation AI model 13A according to an embodiment.
  • the accident estimation AI model 13A includes a DNN (Deep Neural Network) 13B that performs predictive calculations from input values to derive output values.
  • the accident estimation AI model 13A is generated, for example, by supervised learning.
  • supervised learning a learning data set is prepared that consists of a large amount of learning data in which input values are paired with correct values.
  • predictive information, which is information similar to near-miss information immediately before an accident or near miss occurs, is used as the input value, and the accident scene of the accident (or of an accident predicted to occur from a near miss) is used as the correct value.
  • the input predictive information includes acceleration sensor information, steering angle sensor information, emotion sensor information, in-vehicle camera footage, and in-vehicle microphone audio from the estimated data time period at the time of the accident.
  • Accident scenes that serve as correct answers are associated with identification information (accident scene ID) that identifies each accident scene.
  • the correct data is accident information (accident scene) that includes predictive information similar to that of a near miss (information similar to the information immediately before the accident occurs in the event information) during an appropriate period before the accident avoidance action (sudden deceleration, sudden steering, avoidance action by the other vehicle (determined from images, etc.)) in the process of the near miss occurring.
  • the time of the accident avoidance action can be determined based on the vehicle's driving (operation) status (speed sensor information, steering angle sensor information, etc.), and the data collection period can be set at the time of the accident avoidance action, but as explained above, the estimated data time period when the near miss occurs (time of the accident avoidance action) can also be determined through experiments, etc.
  • accident information corresponding to the near miss information is extracted by comparing the similarity of data in which no action is taken to avoid the accident.
  • This learning data creation method can be done manually (by looking at near miss information and accident information and creating it based on the above-mentioned perspective) or by computer processing (by comparing the similarity of near miss information and accident information based on the above-mentioned perspective).
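The computer-processing variant of this first method can be sketched as a similarity search: each near miss is paired with the past accident whose pre-event state is closest. The Euclidean distance metric, the function names, and the state-vector contents are illustrative assumptions; the embodiment only specifies that similarity of pre-evasive-action data is compared.

```python
import math

def state_distance(a, b):
    """Euclidean distance between two pre-event state vectors
    (e.g. speed, acceleration, steering rate). Illustrative metric."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def build_training_pairs(near_miss_states, accident_records, max_distance):
    """Pair each near miss with the most similar past accident.

    near_miss_states: list of (near_miss_id, state_vector)
    accident_records: list of (accident_id, pre_accident_state_vector)

    Returns (near_miss_id, accident_id) training pairs whose pre-event
    states are closer than max_distance; near misses with no sufficiently
    similar accident are skipped rather than mislabeled.
    """
    pairs = []
    for nm_id, nm_state in near_miss_states:
        best = min(accident_records,
                   key=lambda rec: state_distance(nm_state, rec[1]),
                   default=None)
        if best is not None and state_distance(nm_state, best[1]) <= max_distance:
            pairs.append((nm_id, best[0]))
    return pairs
```

Rejecting pairs beyond `max_distance` reflects the manual variant of the method, where a human would simply decline to label a near miss that resembles no recorded accident.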
  • The second learning data creation method extracts accident precursor states (information) during the accident occurrence process based on accident information from actual accidents, and extracts near-miss information for near misses whose near-miss precursor states (information) are similar to the accident precursor states in question. The extracted near-miss information is then used as input data to create learning data with the corresponding accident information as correct answer data.
  • This learning data creation method can also be done manually (by looking at near-miss information and accident information and creating it based on the above-mentioned perspectives) or by computer processing (by comparing the similarities between near-miss information and accident information based on the above-mentioned perspectives).
  • The third learning data creation method extracts, based on near-miss information, the near-miss information before evasive action during the near-miss occurrence process. The vehicle's behavior if its pre-evasive-action behavior were maintained is then predicted (if another vehicle is present, its behavior is also predicted), and accident information is predicted and generated. Note that the data used for predictive generation of accident information (various data for predictively generating accident images and vehicle damage conditions) is generated in advance based on past accident information. For example, accident images are created with computer graphics technology using image components such as vehicle images and landscape images, together with various mechanical formulas and coefficients related to object movement.
  • an identification code is assigned to each accident and this identification code is used as the correct answer data.
  • Various information about each accident is managed and stored in a database in which this information is stored in data records that use the identification code as key data.
  • the correct answer data in the training data is the identification code for each accident.
  • In an accident estimation model 13 that uses an accident type database, the group of information that associates the input data in the above-mentioned learning data with the corresponding correct answer data becomes the accident type database.
  • The DNN 13B sequentially derives estimated values of likely accident scenes from the near-miss information (predictive information) of each piece of learning data in the sequentially input learning data set, and outputs the estimated values.
  • The controller 10 causes the accident estimation AI model 13A to perform machine learning by updating the weighting coefficients of each layer in the DNN 13B using processing such as backpropagation, so as to reduce the error between the estimated accident scene results output from the DNN 13B and the correct values of the learning data corresponding to the input values.
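The error-reduction loop described above can be sketched with a deliberately simplified stand-in: a single-layer softmax classifier mapping a predictive-information feature vector to a distribution over accident scene IDs. A real DNN 13B would have many layers trained by full backpropagation; here the update degenerates to the output-layer gradient. Class and function names are illustrative assumptions.

```python
import math
import random

def softmax(z):
    """Convert raw scores into a probability distribution over scenes."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class TinyAccidentSceneClassifier:
    """Single-layer stand-in for DNN 13B: feature vector -> distribution
    over accident scene IDs. Only the learn-by-error-reduction loop
    described in the embodiment is illustrated here."""

    def __init__(self, n_features, n_scenes, lr=0.5, seed=0):
        rng = random.Random(seed)
        self.w = [[rng.uniform(-0.1, 0.1) for _ in range(n_features)]
                  for _ in range(n_scenes)]
        self.lr = lr

    def predict(self, x):
        return softmax([sum(wi * xi for wi, xi in zip(row, x)) for row in self.w])

    def train_step(self, x, scene_id):
        """One gradient update: move the weights so as to reduce the error
        between the predicted distribution and the correct scene."""
        p = self.predict(x)
        for k, row in enumerate(self.w):
            grad = p[k] - (1.0 if k == scene_id else 0.0)
            for j in range(len(row)):
                row[j] -= self.lr * grad * x[j]

def fit(model, dataset, epochs=200):
    """Sequentially feed (input, correct scene ID) pairs, as in the text."""
    for _ in range(epochs):
        for x, scene_id in dataset:
            model.train_step(x, scene_id)
```

After training, `predict` plays the role of the accident estimation AI model 13A at inference time: given near-miss features, it outputs the likelihood of each accident scene ID.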
  • the accident estimation model 13 estimates an accident scene that is likely to occur in the near-miss situation corresponding to the near-miss information, and outputs the estimation result to the accident information generation unit 14. At this time, the accident estimation model 13 outputs the near-miss information used for the estimation to the accident information generation unit 14, along with the accident scene ID of the estimated accident scene.
  • the accident information generation unit 14 generates accident information about the accident scene estimated by the accident estimation model 13 and outputs it to the information output device 15.
  • the information output device 15 is a display capable of displaying an image of the accident information.
  • the information output device 15 also includes a speaker capable of outputting the audio of the accident information.
  • the information output device 15 includes a camera that captures images of viewers watching the accident information.
  • the information output device 15 outputs a captured image of the viewer viewing the accident information to the controller 10 of the accident estimation device 1.
  • the information output device 15 may be configured to be connected to a heart rate sensor that measures the heart rate of the viewer viewing the accident information. In this case, the information output device 15 outputs information indicating the viewer's heart rate to the controller 10.
  • the video DB 33 stores live-action videos, virtual reality (VR) videos, and computer graphics (CG) videos of various accident scenes.
  • the video DB 33 stores data such as that shown in FIG. 5, in which accident scene IDs are associated with accident videos.
  • the data for the video DB 33 is collected and created in advance by design developers and the like, and stored in the video DB 33.
  • data such as accident audio may also be stored, so that the accident audio is played back synchronously when the accident image is played back.
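The FIG. 5 style table, accident scene IDs associated with accident videos and optional synchronized audio, amounts to a keyed lookup. The scene IDs and file names below are illustrative assumptions, not entries from the embodiment.

```python
# Sketch of the FIG. 5 style table in the video DB 33: each accident scene
# ID maps to an accident video and, optionally, synchronized accident audio.
# IDs and file names are illustrative placeholders.
VIDEO_DB = {
    "scene-001": {"video": "rear_end_collision.mp4", "audio": "rear_end_collision.wav"},
    "scene-002": {"video": "intersection_crossing.mp4", "audio": None},
}

def fetch_accident_media(scene_id):
    """Look up the accident video (and audio, if stored) for a scene ID.
    Returns None when the scene ID is not registered in the video DB."""
    return VIDEO_DB.get(scene_id)
```

When audio is present, it would be played back synchronously with the video, as the text notes.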
  • the first accident video generation method stores accident video for each type of accident in the video DB 33, taking into account the surrounding environment such as the background of the accident location, but this has the drawback of resulting in a large amount of data.
  • the second accident video generation method generates accident video by combining video of the accident itself, that is, video of the accident vehicle and accident objects (people, objects, etc.) involved in the accident, with background video. In this case, the background video is held as a database of unchanging video (video that does not change over the duration of the accident, for example images of still objects such as buildings) and changing video (people, animals, and displays and traffic lights, which change due to displayed images or flashing).
  • the video generation model 34 performs image recognition on the background image of stationary objects in the near-miss information captured by the vehicle-mounted camera 24, such as information about the intersection shape, and based on the recognized information, selects the near-miss location and background image, such as a background image with a similar intersection shape, from the video DB 33.
  • Information about the intersection shape here includes, for example, information about the road configuration, such as a T-junction, the number of lanes, and information indicating whether or not there are traffic lights.
  • a stationary object can be determined, for example, by comparing the moving speed of the object in the video with the moving speed of the vehicle filming it (if the moving speeds of both are the same, it is a stationary object).
  • the video generation model 34 performs image recognition on the video from the in-vehicle camera 24 to determine the presence or absence of people believed to have been involved in the near miss, as well as party information relating to their location, and, based on the recognized party information, selects accident video (video of the parties involved excluding the background, etc.) from the video DB 33 that contains party information similar to the situation of the parties involved in the near miss.
  • In this case, the video DB 33 includes a database for selecting background images (information associating accident background image elements (intersection information, etc.) with background images for generating simulation images), a database for selecting images of the parties involved in the accident (information associating accident party elements (location information of each party, etc.) with party images for generating simulation images), and a database for selecting related vehicle images (information associating accident-related vehicle elements (location information of each related vehicle, etc.) with related vehicle images for generating simulation images).
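The component-selection step, picking a background, party, or related-vehicle clip whose stored elements best match what was recognized in the near-miss footage, can be sketched as a best-match search over one of these databases. The scoring rule (count of matching element key/value pairs), the element keys, and the file names are illustrative assumptions for the similarity matching described above.

```python
def select_component(db, query):
    """Pick the component video whose stored elements best match the
    elements recognized from the near-miss footage.

    db:    list of (elements_dict, video_name) entries, e.g. the background
           selection database keyed by intersection information.
    query: elements recognized by image processing of the on-board
           camera 24 video (road configuration, number of lanes, etc.).
    """
    def score(entry):
        elements, _ = entry
        # Count matching element key/value pairs (illustrative similarity).
        return sum(1 for k, v in query.items() if elements.get(k) == v)
    best = max(db, key=score)
    return best[1]

def compose_accident_video(background, parties, related_vehicles):
    """Stand-in for the compositing step: a real system would layer the
    selected party and related-vehicle footage over the background video."""
    return {"background": background, "parties": parties,
            "related_vehicles": related_vehicles}
```

Running the same selection against the background, party, and related-vehicle databases and then compositing the results yields the simulated accident video described in the text.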
  • the video generation model 34 may output to the information output device 15 a video generated by stitching a near-miss image with a simulated accident image (splicing the near-miss image with the simulation image just before the avoidance action is taken in the near-miss image).
  • the accident estimation device 1 can help drivers understand the horror of accidents that can occur from near-miss situations they have experienced themselves, thereby contributing to improving the effectiveness of traffic safety education.
  • the video generation model 34 may also be configured to output video to the information output device 15 by superimposing a real-life scenery image (a real-life photographed image or a real-life scenery image of the accident location included in the map data) at the time of the near-miss onto the accident object (images of the parties involved and related vehicle images (no background image)) in the accident video selected from the video DB 33.
  • the second accident video generation method requires less data than the first, but because video of the parties involved in the accident and of accident-related vehicles must still be stored in the video DB 33, the data volume remains large.
  • the third accident video generation method generates images of the accident subject objects (parties involved and related vehicles) based on physical characteristics estimated by image recognition processing of images taken at the time of the near miss. The generated images of the accident vehicles and accident objects (people, objects, etc.) are then combined with background video to generate an accident image.
  • the video DB 33 may store tables such as those shown in Figures 6 and 7, and accident images may be generated using the data in these tables.
  • the table shown in Figure 6 associates accident scene IDs with various physical parameters in an accident.
  • the various physical parameters in an accident are, for example, parameters indicating the vehicle involved, the relative speed of the vehicle at the time of the accident, the collision direction of the vehicle at the time of the accident, and the collision position of the vehicle at the time of the accident.
  • the table shown in Figure 7 associates vehicle types with vehicle images.
  • When the tables shown in Figures 6 and 7 are stored in the video DB 33, the video generation model 34 generates a simulation video from the near-miss information, using the physical parameters of an accident similar to the one suggested by the video captured by the in-vehicle camera 24.
  • the video generation model 34 estimates the movement and damage state of the target vehicle using the various physical parameters of the accident, and generates an image of the accident target vehicle using CG.
  • the video generation model 34 then superimposes this accident image on the scenery at the time of the near miss (an actually photographed image or a scenery image included in the map data) to generate a simulation image.
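The Figure 6/7 tables and their combination into a spec a CG renderer could consume can be sketched as plain lookup tables; the scene IDs, parameter names, and asset names below are assumptions for illustration:

```python
# Figure 6 (illustrative): accident scene ID -> physical parameters of the accident
SCENE_PARAMS = {
    "S001": {"vehicle_type": "sedan", "relative_speed_kmh": 40,
             "collision_direction": "front", "collision_position": "right_front"},
    "S002": {"vehicle_type": "truck", "relative_speed_kmh": 20,
             "collision_direction": "side", "collision_position": "left_door"},
}

# Figure 7 (illustrative): vehicle type -> vehicle image asset
VEHICLE_IMAGES = {
    "sedan": "sedan_base.png",
    "truck": "truck_base.png",
}

def build_simulation_spec(scene_id):
    """Combine both tables into a spec for rendering the accident target vehicle."""
    params = SCENE_PARAMS[scene_id]
    return {"image": VEHICLE_IMAGES[params["vehicle_type"]], **params}
```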
  • the video generation model 34 (video information generation AI model 34A) generates and outputs a simulated video of the accident scene that includes at least one of the following: the driver's own injury situation in the accident, images of the personal injury situation, the property damage situation, and images of the property damage situation. This allows the accident estimation device 1 to help drivers understand more realistically the horror of accidents that can occur from near-miss situations, thereby contributing to improving the effectiveness of traffic safety education.
  • When a driver views a video for the first time, the video information generation unit 31 has the driver view a simulation video with the degree of extremity set to a default value. In this case, the video information generation unit 31 generates, for example, a CG-only video rather than live-action video, and cuts out scenes of contact with people. It also generates a simulation video in which the model and color of the vehicle 2 that appears differ from those of the vehicle in the near-miss situation. In other words, the video information generation unit 31 adjusts how closely the generated video reproduces (resembles) the actual footage, thereby adjusting the intensity of the stimulus the generated video gives (the extremity of the video).
  • the video information generation unit 31 then acquires images of the viewer who has watched the simulation video from the information output device 15, and determines the viewer's tolerance for extreme content using the tolerance determination unit 35.
  • the video information generation unit 31 may also acquire the viewer's heart rate from the information output device 15 and determine tolerance based on the heart rate as well.
  • If the tolerance is below the threshold, the video information generation unit 31 reduces the level of extremity in the next simulation video to be viewed. If the tolerance is above the threshold, for example if it recognizes from an image that the viewer is paying close attention to the simulation video, it increases the level of extremity in the next simulation video to be viewed.
  • When reducing the level of extremity, the video information generation unit 31 performs image processing such as gradually lowering the resolution of the video or applying a blurring process. When increasing the level of extremity, it performs image processing such as raising the resolution of the video.
  • Other applicable methods include adjusting the ratio of live-action footage in the video (to reduce the level of extremity, decrease the live-action ratio and increase the CG ratio), or adjusting the ratio of video to still images (to reduce the level of extremity, decrease the proportion of video and increase the proportion of still images).
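The feedback loop described in the preceding bullets — adjusting the extremity level from the viewer's measured tolerance, then mapping the level to resolution, blur, and live-action ratio — might look like the following sketch. The level range, threshold, and knob formulas are assumed values, not the patent's specification:

```python
# Assumed level scale and tolerance threshold for illustration.
DEFAULT_LEVEL, MIN_LEVEL, MAX_LEVEL = 3, 1, 5
TOLERANCE_THRESHOLD = 0.5  # normalized tolerance, 0..1

def next_extremity_level(current, tolerance):
    """Lower the level if tolerance is below the threshold, raise it otherwise."""
    if tolerance < TOLERANCE_THRESHOLD:
        return max(MIN_LEVEL, current - 1)
    return min(MAX_LEVEL, current + 1)

def render_settings(level):
    """Map a level to the image-processing knobs the text describes."""
    return {
        "resolution_scale": 0.5 + 0.1 * level,  # lower levels -> lower resolution
        "blur": level <= 2,                     # apply blurring at low levels
        "live_action_ratio": 0.2 * level,       # more CG (less live action) at low levels
    }
```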
  • the video information generation AI model 34A creates simulation images (video or still images) that appear more realistic, for example by letting viewers watch extreme scenes, such as contact with people, without cutting them out, or by making the model and color of the vehicle 2 that appears the same as those of the vehicle in the near-miss situation.
  • the accident estimation device 1 can contribute to improving traffic safety education by providing simulated images of accident scenes that can be reliably viewed by various drivers with different tolerances for extreme situations, according to their tolerance level.
  • Based on the accident scene ID of the accident scene input from the accident estimation model 13 and on the accident DB 12, the damage information generation unit 32 generates and outputs damage information regarding the damages that would be caused if an actual accident occurred from the near-miss situation.
  • the accident estimation device 1 helps drivers recognize the damage situation after an accident, such as increased vehicle repair costs and insurance costs, and encourages safe driving, thereby contributing to improving the effectiveness of traffic safety education.
  • the damage information generation unit 32 may also be configured with a damage information generation AI model 32A (see Figure 9) that generates the above-mentioned damage information from near-miss images.
  • the accident information generation unit 14 uses AI to generate simulation footage of an accident that occurs from a near-miss situation and the above-mentioned damage information.
  • the accident information generation unit 14 includes a video information generation unit 31 and a damage information generation unit 32.
  • the video information generation unit 31 includes a video information generation AI model 34A.
  • the damage information generation unit 32 includes a damage information generation AI model 32A.
  • the learning dataset for the video information generation AI model 34A is a dataset in which a plurality of near-miss images serving as input data are each associated one-to-one with a plurality of accident images serving as ground truth data.
  • the video information generation AI model 34A sequentially estimates simulation videos of potential accident scenes from the near-miss information input data that is sequentially input, and outputs them as estimation results.
  • the video information generation unit 31 causes the video information generation AI model 34A to perform machine learning by updating the weighting coefficients of each layer in the DNN using processing such as backpropagation, so as to reduce the error between the estimated results of the simulation videos output from the video information generation AI model 34A and the correct values of the learning data corresponding to the input values.
  • the video information generation unit 31 can generate a video information generation AI model 34A that can estimate, from the input near-miss information, a simulation video of an accident scene that is likely to occur in the near-miss situation corresponding to the near-miss information.
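The training procedure described above — paired near-miss/accident examples, a forward pass, error against the correct value, and weight updates that reduce that error — can be illustrated schematically. A one-layer linear model stands in for the DNN purely for illustration; the data and hyperparameters are assumptions:

```python
# Schematic of supervised training with backpropagation-style updates.
def train(pairs, lr=0.01, epochs=500):
    """pairs: list of (near_miss_feature, accident_target) scalars (stand-ins)."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x + b        # forward pass (stand-in for the DNN output)
            err = pred - y          # error vs. the correct value of the learning data
            w -= lr * err * x       # gradient-based weight update
            b -= lr * err
    return w, b
```

The same scheme applies to the damage information generation AI model 32A: only the input/ground-truth pairing changes.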
  • the damage information generation AI model 32A is an AI model that has been machine-trained so that, when near-miss information is input, it estimates damage information relating to the damage that would be caused if an actual accident occurred based on the circumstances of the near miss.
  • a learning dataset such as that shown in FIG. 13 is used for the machine learning of the damage information generation AI model 32A.
  • the learning dataset for the damage information generation AI model 32A is a dataset in which a plurality of near-miss images serving as input data are each associated one-to-one with a plurality of pieces of damage information serving as correct answer data.
  • the damage information generation AI model 32A sequentially estimates damage information related to damage caused by an accident when an actual accident occurs, from the near-miss information of the input data that is sequentially input, and outputs the estimated results.
  • the damage information generation unit 32 causes the damage information generation AI model 32A to perform machine learning by updating the weighting coefficients of each layer in the DNN using processing such as backpropagation so as to reduce the error between the estimated damage information output from the damage information generation AI model 32A and the correct value of the learning data corresponding to the input value.
  • the controller 10 generates information indicating the estimated percentage of fault between the driver and the other party in the accident scene, information indicating at least one of the personal injury status and property damage status, and information indicating future changes (trends) in insurance premiums due to the occurrence of the accident.
  • damage information may be generated from images of personal injury status and property damage status, which can be generated based on the accident simulation image described below (by cutting out images of people or damaged parts of the vehicle, for example).
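A hypothetical shape for the damage information the controller outputs — fault percentage, injury and property status, and the premium trend — might be the following; all field names and values are illustrative assumptions:

```python
# Illustrative container for the damage information described in the text.
def make_damage_info(own_fault_pct, injury_status, property_status, premium_delta_pct):
    assert 0 <= own_fault_pct <= 100
    return {
        "fault_ratio": {"own": own_fault_pct, "other": 100 - own_fault_pct},
        "personal_injury": injury_status,    # e.g. "minor injury to pedestrian"
        "property_damage": property_status,  # e.g. "front bumper deformation"
        "premium_trend": f"+{premium_delta_pct}% at next renewal",
    }
```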
  • the insurance premium review device is used to create insurance premium calculation standards and calculate insurance premiums for each individual.
  • insurance premiums are conventionally calculated based on data related to accidents that have actually occurred. They are therefore based only on actual data and do not reflect potential accidents.
  • the accident estimation device 1 can estimate the damage from potential accidents based on the occurrence of near misses and the magnitude (damage) of the accidents they could lead to, and reflect this in insurance premiums.
  • in an insurance premium calculation standard, for example, near-miss occurrence status data for an appropriate period is added to the parameters.
  • the calculation standard would be created based on the total amount of damages caused by actual accidents plus the estimated damages from accidents that near misses could lead to, multiplied by an appropriate coefficient (this requires big data collected over an appropriate period), and the insurance premium for each individual would then be calculated based on that individual's data (their near-miss occurrence status data for the appropriate period).
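The premium idea sketched above — actual damages plus near-miss-derived potential damages weighted by a coefficient, then scaled per individual — could be expressed as follows. The coefficient and scaling formula are illustrative assumptions, not actuarial practice:

```python
NEAR_MISS_COEFF = 0.1  # assumed weight for potential (near-miss-derived) damages

def risk_score(actual_damages, near_miss_damages):
    """Total risk = actual damages + coefficient * summed estimated near-miss damages."""
    return actual_damages + NEAR_MISS_COEFF * sum(near_miss_damages)

def individual_premium(base_premium, driver_score, population_avg_score):
    """Scale the base premium by the driver's risk relative to the population average."""
    if population_avg_score == 0:
        return base_premium
    return base_premium * (driver_score / population_avg_score)
```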
  • the accident estimation device 1 may be mounted on an on-board device such as a car navigation device or a drive recorder. In this case, when a near-miss occurs, the on-board device estimates an accident that is likely to occur in the near-miss situation based on the near-miss situation, and notifies the vehicle occupants of accident information related to the accident.
  • the in-vehicle device notifies the occupants of accident information when the vehicle comes to a stop or when one trip has ended.
  • This allows the in-vehicle device, when a near miss occurs, to notify the occupants of accident information about the accident likely to result from it as soon as the vehicle stops or the trip ends, before much time has passed since the near miss. By providing occupants with accident information about an accident likely to arise from a near miss they have just experienced, the in-vehicle device can encourage safe driving and contribute to improving the effectiveness of traffic safety education.
  • the terminal device can help trainees understand the horror of accidents that can occur from near-miss situations they themselves have experienced, thereby contributing to improving the effectiveness of traffic safety education.
  • the accident estimation device 1, insurance premium review device 101, on-board device 102, terminal device 103, and vehicle 2 are connected via a communications network N so that information can be communicated.
  • the insurance premium review device 101, on-board device 102, and terminal device 103 are each equipped with the accident estimation device 1.
  • the information processing system 100 can provide the above-mentioned accident information at any of the locations where the accident estimation device 1 is installed, the location where the insurance premium review device 101 is installed, inside the vehicle where the onboard device 102 is installed, and the location where the terminal device 103 is installed. Therefore, by providing accident information to users in various locations, the information processing system 100 can contribute to improving the effectiveness of traffic safety education.
  • the features of the present invention are as follows. (1) An accident estimation device including a controller that obtains near-miss information about a near miss that has occurred, and outputs accident type information for that near miss by referring to an accident type database that associates near-miss information with accident types relating to accidents that are feared to occur from the near miss. (2) The accident estimation device according to (1) above, wherein the controller outputs virtual accident information for the near miss that has occurred by referring to an accident database in which the output accident type information is associated with virtual accident information indicating an accident event that may occur due to the accident.
  • the controller, instead of referring to the accident type database, outputs the accident type information for the near miss that has occurred by using an accident type model trained with learning data in which the near-miss information is the input information and the accident type information is the correct answer data.
  • The accident estimation device according to (2) or (4) above, wherein the virtual accident information includes simulated images of the virtual accident.
  • The accident estimation device according to (2) or (4) above, wherein the virtual accident information includes virtual damage information regarding damages resulting from the virtual accident.
  • The accident estimation device according to (6) above, wherein the virtual damage information includes information showing the percentage of fault between the driver's side and the other party in the accident.
  • the virtual damage information includes information indicating at least one of a personal injury situation, an image of the personal injury situation, a property damage situation, and an image of the property damage situation in the virtual accident.
  • The accident estimation device according to (6), (7), or (8) above, wherein the virtual damage information includes information showing the change in insurance premiums resulting from the occurrence of the virtual accident.
  • An accident estimation method in which a computer obtains near-miss information about a near miss that has occurred, and outputs accident type information for that near miss by referring to an accident type database that associates near-miss information with accident types relating to accidents that are feared to occur from the near miss.
  • An accident estimation program that causes a computer to execute a procedure of outputting accident type information for the near miss that has occurred, using the acquired near-miss information, by referring to an accident type database that associates near-miss information with accident types relating to accidents that are feared to occur from the near miss.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Engineering & Computer Science (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • Technology Law (AREA)
  • General Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Traffic Control Systems (AREA)

Abstract

An accident inference device according to one embodiment of the present invention is provided with a controller. When a near-miss occurs, the controller infers an accident that is likely to happen in said near-miss situation, on the basis of the situation of the near-miss. The controller outputs accident information related to the inferred virtual accident.

Description

Accident estimation device, insurance premium review device, in-vehicle device, terminal device, information processing system, accident estimation method, accident estimation program, AI model generation method, and AI generation program

 The disclosed embodiments relate to an accident estimation device, an insurance premium review device, an in-vehicle device, a terminal device, an information processing system, an accident estimation method, an accident estimation program, an AI model generation method, and an AI generation program.

 There is a danger-point information display device that contributes to traffic safety by notifying vehicle drivers of accident information, such as the location, details, and surrounding environment of past accidents, together with information on risk factors at locations where near misses that could lead to accidents have occurred (see, for example, Patent Document 1).

JP 2007-51973 A

 However, because a near miss does not result in an accident, information about near misses tends to have relatively little impact. Furthermore, information about past accidents and near misses feels like someone else's problem to anyone other than those involved. For these reasons, such information is insufficient as teaching material for improving the effectiveness of traffic safety education.

 One aspect of the embodiments has been made in view of the above, and aims to provide an accident estimation device, insurance premium review device, in-vehicle device, terminal device, information processing system, accident estimation method, accident estimation program, AI model generation method, and AI generation program that can contribute to improving the effectiveness of traffic safety education.

 An accident estimation device according to one aspect of the embodiments includes a controller. When an attempted-accident event, commonly called a near miss ("an event in which something dangerous happened but fortunately did not result in a disaster"), occurs, the controller estimates an accident that is likely to occur in the near-miss situation based on that situation. The controller outputs accident information related to the accident.

 The accident estimation device, insurance premium review device, in-vehicle device, terminal device, information processing system, accident estimation method, accident estimation program, AI model generation method, and AI generation program according to one aspect of the embodiments estimate and generate information on accidents that are feared to occur from near misses involving the user. This makes it possible to present information that feels personally relevant, thereby contributing to improving the effectiveness of traffic safety education.

FIG. 1 is an explanatory diagram showing the configuration of an accident estimation device and a vehicle according to an embodiment.
FIG. 2 is an explanatory diagram showing a method for generating an accident estimation AI model according to the embodiment.
FIG. 3 is an explanatory diagram of a method for creating training data according to the embodiment.
FIG. 4 is an explanatory diagram illustrating the configuration of the accident information generation unit according to the embodiment.
FIG. 5 is an explanatory diagram illustrating an example of a table stored in the video DB according to the embodiment.
FIG. 6 is an explanatory diagram illustrating an example of a table stored in the video DB according to the embodiment.
FIG. 7 is an explanatory diagram illustrating an example of a table stored in the video DB according to the embodiment.
FIG. 8 is an explanatory diagram illustrating an example of a table stored in the accident DB according to the embodiment.
FIG. 9 is an explanatory diagram showing the configuration of an accident information generation unit equipped with AI according to the embodiment.
FIG. 10 is an explanatory diagram illustrating an example of a training dataset for the video information generation AI model according to the embodiment.
FIG. 11 is an explanatory diagram showing an example of a learning method for the video information generation AI model according to the embodiment.
FIG. 12 is an explanatory diagram showing an example of the operation of the video information generation AI model according to the embodiment.
FIG. 13 is an explanatory diagram illustrating an example of a learning dataset for the damage information generation AI model according to the embodiment.
FIG. 14 is an explanatory diagram showing an example of a learning method for the damage information generation AI model according to the embodiment.
FIG. 15 is an explanatory diagram illustrating an example of the operation of the damage information generation AI model according to the embodiment.
FIG. 16 is a flowchart showing a process executed by the controller of the accident estimation device according to the embodiment.
FIG. 17 is a flowchart showing a process executed by the controller of the accident estimation device according to the embodiment.
FIG. 18 is an explanatory diagram of an information processing system according to an embodiment.

 Below, embodiments of the accident estimation device, insurance premium review device, in-vehicle device, terminal device, information processing system, accident estimation method, accident estimation program, AI model generation method, and AI generation program will be described in detail with reference to the attached drawings. Note that the present invention is not limited to the embodiments shown below.

[1. Overview of the accident estimation method]
 First, an overview of the accident estimation method performed by the accident estimation device according to the embodiment will be described. The accident estimation device includes a computer, and the computer realizes the accident estimation method according to the embodiment by executing an accident estimation program stored in memory.

 The accident estimation device according to the embodiment is a device that, when a near-miss accident (hereinafter, "near miss") that could develop into an accident occurs in a moving vehicle, estimates the accident that is likely to occur under the circumstances of the near miss and outputs accident information related to that hypothetical accident, that is, hypothetical accident information based on the near miss. The accident estimation device estimates the likely accident based on the circumstances of the near miss.

 In this way, the accident estimation device outputs and provides drivers who have actually experienced a near miss with accident information about the accident that could realistically have resulted from it, making them aware of the danger of accidents. Because drivers receive accident information that is personally relevant, the accident estimation device can contribute to improving the effectiveness of traffic safety education.

[2. Accident estimation device]
 An accident estimation device 1 according to an embodiment will be described with reference to FIG. 1. FIG. 1 is an explanatory diagram showing the configuration of the accident estimation device 1 and a vehicle 2 according to the embodiment. First, the configuration of the vehicle 2 will be described.

 As shown in FIG. 1, a drive recorder 20 with the ability to communicate with external devices is installed in the vehicle 2. The drive recorder 20 is connected to the accident estimation device 1 so that information can be exchanged via a communication network N such as the Internet. The drive recorder 20 is also connected, via an in-vehicle LAN (Local Area Network), to a steering angle sensor 22 and an emotion sensor 23 installed in the vehicle 2. The drive recorder 20 includes an acceleration sensor 21, an on-board camera 24, an on-board microphone 25, a flash memory 26, a controller 27, and a communication unit 28.

 The controller 27 acquires the vehicle's steering angle information detected by the steering angle sensor 22 and the driver's emotion information detected by the emotion sensor 23. The controller 27 also acquires the vehicle's acceleration detected by the acceleration sensor 21, image information captured by the on-board camera 24, and audio information collected by the on-board microphone 25.

 The controller 27 stores the image information captured by the on-board camera 24 and the audio information collected by the on-board microphone 25 in the flash memory 26. The controller 27 also reads out the stored image and audio information, uses it for various processes, and transmits it via the communication unit 28 to external devices such as the accident estimation device 1.

 The acceleration sensor 21 is a sensor that detects the acceleration of the vehicle. It consists, for example, of an elastic body and a mass body, and detects acceleration by measuring the force applied to the mass body as the displacement of the elastic body. It can be implemented as a frequency-change, piezoelectric, piezoresistive, capacitive, or similar type of acceleration sensor. The steering angle sensor 22 is a sensor that detects the rotation state of the steering wheel of the vehicle 2.

 The emotion sensor 23 is a sensor that detects biometric information for estimating the driver's emotions, such as a heart rate sensor or a brain wave sensor; emotions are estimated from the brain waves or heart rate these sensors detect. The heart rate sensor or the like is provided on, for example, the steering wheel of the vehicle 2. A method of estimating emotions from the driver's facial information (expression) using artificial intelligence or the like can also be applied; in this case, the on-board camera 24 can be used as the sensor for acquiring the driver's facial information.

 車載カメラ24は、車両2の車室内の映像を撮像する車室内撮像装置と、車両2の周囲の映像を撮像する車室外撮像装置とを含む。車載マイク25は、車両2の車室内の音声を集音する集音装置である。フラッシュメモリ26は、車載カメラ24が撮影した車室内の映像および車載マイク25が集音した車室内の音声と、車載カメラ24が撮影した車両2の周囲映像および車載マイク25が集音した車両2の周囲音声を記憶する記録装置である。 The onboard camera 24 includes an interior imaging device that captures video from within the cabin of the vehicle 2, and an exterior imaging device that captures video from around the vehicle 2. The onboard microphone 25 is a sound collection device that collects audio from within the cabin of the vehicle 2. The flash memory 26 is a recording device that stores the video from within the cabin captured by the onboard camera 24 and the audio from within the cabin collected by the onboard microphone 25, as well as the video from around the vehicle 2 captured by the onboard camera 24 and the audio from around the vehicle 2 collected by the onboard microphone 25.

 コントローラ27は、CPU(Central Processing Unit)、ROM(Read Only Memory)、RAM(Random Access Memory)などを有するマイクロコンピュータや各種の回路を含む。コントローラ27は、CPUがROMに記憶されたプログラムを、RAMを作業領域として使用して実行することにより機能するヒヤリハット検知部29を備える。 The controller 27 includes a microcomputer with a CPU (Central Processing Unit), ROM (Read Only Memory), RAM (Random Access Memory), and various other circuits. The controller 27 is equipped with a near-miss detection unit 29 that functions when the CPU executes a program stored in the ROM using the RAM as a working area.

 コントローラ27は、一部または全部がASIC(Application Specific Integrated Circuit)やFPGA(Field Programmable Gate Array)等のハードウェアで構成されてもよい。通信部28は、通信ネットワークNを介して事故推定装置1と情報通信する通信インターフェースである。 The controller 27 may be partially or entirely composed of hardware such as an ASIC (Application Specific Integrated Circuit) or FPGA (Field Programmable Gate Array). The communication unit 28 is a communication interface that communicates information with the accident estimation device 1 via the communication network N.

 次に、車両2が備える各構成要素の動作について説明する。加速度センサ21は、車両2の前進方向を前後方向とした場合に、車両2の前後左右上下方向の加速度を検出してコントローラ27に出力する。 Next, we will explain the operation of each component of the vehicle 2. The acceleration sensor 21 detects acceleration in the front-rear, left-right, up-down directions of the vehicle 2, assuming that the forward direction of the vehicle 2 is the front-rear direction, and outputs the results to the controller 27.

 舵角センサ22は、車両2が直進するときのハンドルの状態を基準位置とし、基準位置からのハンドルの回転角度、回転速度、および回転加速度を検出してコントローラ27に出力する。なお、ハンドルの回転速度、および回転加速度については、コントローラ27が回転角度に基づき演算処理(微分処理、および2回微分処理)により算出しても良い。 The steering angle sensor 22 uses the state of the steering wheel when the vehicle 2 is traveling straight as a reference position, and detects the steering wheel's rotation angle, rotation speed, and rotation acceleration from the reference position, and outputs these to the controller 27. Note that the controller 27 may calculate the steering wheel's rotation speed and rotation acceleration based on the rotation angle through arithmetic processing (differential processing and second-order differential processing).
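 As a rough illustration (not part of the claimed configuration), this differentiation can be approximated by finite differences over sampled rotation angles; the function name, sampling period, and angle values below are assumptions for the sketch:

```python
def steering_derivatives(angles, dt):
    """Approximate the steering rotation speed and rotation acceleration by
    first- and second-order finite differences. `angles` is a list of
    rotation angles [deg] sampled every `dt` seconds (illustrative values)."""
    speeds = [(a1 - a0) / dt for a0, a1 in zip(angles, angles[1:])]
    accels = [(s1 - s0) / dt for s0, s1 in zip(speeds, speeds[1:])]
    return speeds, accels

speeds, accels = steering_derivatives([0.0, 5.0, 15.0, 30.0], dt=0.5)
# speeds: [10.0, 20.0, 30.0] deg/s, accels: [20.0, 20.0] deg/s^2
```

 In a real implementation the raw angle signal would typically be low-pass filtered before differentiation, since differencing amplifies sensor noise.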

 The emotion sensor 23 detects the driver's heartbeat from the pulse in the palms gripping the steering wheel. When a near miss occurs, the driver's emotions change abruptly, and biosignals correlated with emotion, such as heart rate, change abruptly as well; the drive recorder 20 therefore uses the heart rate to estimate the driver's emotional state at the time of the near miss. The emotion sensor 23 detects the driver's heartbeat and outputs it to the controller 27. A sensor that detects a biosignal other than heart rate that is correlated with emotion, such as brain waves, can also be used.

 The onboard camera 24 outputs the captured video of the cabin of the vehicle 2 and of its surroundings to the controller 27. The onboard microphone 25 picks up audio within the cabin of the vehicle 2 and outputs it to the controller 27.

 The controller 27 stores the video input from the onboard camera 24 and the audio input from the onboard microphone 25 in the flash memory 26, associating the video with the date and time it was captured and the audio with the date and time it was collected.

 The controller 27 may also store, in association with date and time, the steering wheel rotation angle, rotation speed, and rotation acceleration detected by the steering angle sensor 22, as well as the emotion information (heart rate information) detected by the emotion sensor 23, in the flash memory 26.

 The near-miss detection unit 29 detects near misses that occur while the vehicle 2 is traveling. It detects a near miss based on at least one of: the acceleration of the vehicle 2 input from the acceleration sensor 21; the steering wheel rotation angle, rotation speed, and rotation acceleration input from the steering angle sensor 22; and the driver's heart rate.

 The near-miss detection unit 29 determines that a near miss has occurred when the acceleration of the vehicle 2 exceeds an acceleration threshold. That is, when a near miss occurs, sudden braking or abrupt steering is often performed, and some impact is often applied even if no accident results, so the occurrence of a near miss is detected from the acceleration readings.

 The lower acceleration threshold for determining a near miss is the upper limit of the acceleration that can occur while the vehicle 2 is being driven safely (with an appropriate offset added), and the upper threshold is the lower limit of the acceleration at an accident (with an appropriate offset added); both are set, for example, on the basis of experiments during design and development. The deployment state of the airbags of the vehicle 2 may also be used in the determination: when the airbags deploy, the event is judged to be an accident rather than a near miss.
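 A minimal sketch of this band check, using hypothetical threshold values and an assumed airbag flag (the actual thresholds would be set from design-time experiments):

```python
# Hypothetical thresholds [m/s^2]; real values are determined experimentally.
ACCEL_LOWER = 5.0   # upper bound of safe-driving acceleration, plus offset
ACCEL_UPPER = 40.0  # lower bound of accident-level acceleration, minus offset

def classify_acceleration_event(accel_magnitude, airbag_deployed):
    """Classify a peak acceleration reading as 'normal', 'near_miss', or
    'accident', following the band logic described above. Airbag deployment
    overrides the bands and marks the event as an accident."""
    if airbag_deployed or accel_magnitude >= ACCEL_UPPER:
        return "accident"
    if accel_magnitude > ACCEL_LOWER:
        return "near_miss"
    return "normal"

classify_acceleration_event(12.0, False)  # → "near_miss"
```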

 The near-miss detection unit 29 also determines that a near miss has occurred when the rotation speed and rotation acceleration of the steering wheel exceed their corresponding rotation speed threshold and rotation acceleration threshold. That is, when a near miss occurs, the driver often turns the steering wheel abruptly, so a near miss is judged to have occurred when the rotation speed and rotation acceleration exceed their thresholds. The rotation speed threshold and rotation acceleration threshold are set to values that yield good determination accuracy, for example on the basis of experiments during design and development.

 Specifically, the rotation speed threshold and rotation acceleration threshold are the upper limits (with an appropriate offset added) of the rotation speed and rotation acceleration of the steering wheel as operated by the driver while the vehicle 2 is being driven safely, set, for example, on the basis of experiments during design and development.

 The near-miss detection unit 29 determines that a near miss has occurred when the rise in the driver's heart rate within a determination period exceeds a heart rate threshold. The determination period is, for example, 10 seconds, though it is not limited to 10 seconds. The heart rate threshold is the upper limit (with an appropriate offset added) of the rise in the driver's heart rate within the determination period while the vehicle 2 is being driven safely. The determination period and heart rate threshold are set to appropriate values that yield good determination accuracy, for example on the basis of experiments during design and development.

 The near-miss determination may also combine the determinations described above based on the acceleration state of the vehicle 2, the steering wheel rotation state, and the driver's heart rate (emotional) state, deciding on the basis of a logical combination of the individual results.
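 Such a combined determination can be sketched as a logical OR of the three individual checks; the signal names and default thresholds here are illustrative assumptions, not values from the embodiment:

```python
def detect_near_miss(accel, steer_speed, steer_accel, hr_rise,
                     accel_th=5.0, speed_th=180.0, s_accel_th=720.0,
                     hr_th=20.0):
    """Return True if any of the three checks fires: acceleration above its
    threshold, abrupt steering (rotation speed AND rotation acceleration
    above their thresholds), or heart-rate rise within the determination
    period above its threshold. All thresholds are hypothetical defaults."""
    accel_hit = accel > accel_th
    steering_hit = steer_speed > speed_th and steer_accel > s_accel_th
    heart_hit = hr_rise > hr_th
    return accel_hit or steering_hit or heart_hit
```

 Other logical combinations (e.g. requiring two of the three checks to agree, to suppress false positives) fit the same skeleton.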

 If the near-miss detection unit 29 can detect the occurrence of near misses in this way, information on where each near miss occurred and on its circumstances can be provided, for example, to organizations that conduct traffic safety education. Such an organization can then carry out traffic safety awareness activities, for example by creating a near-miss map that shows the locations and circumstances of near misses and providing it to drivers and others.

 However, merely seeing a near miss does not leave a strong impression on a driver, partly because no accident actually occurred. For example, when a near miss occurs, the driver may not know what kind of accident it could have become, the extent of the damage in that case, or how responsibility for the accident would have been apportioned; because the situation that would have arisen had the near miss led to an accident is unknown, it lacks a sense of reality.

 Meanwhile, insurance that compensates for accident damage comes in two forms: pay-as-you-drive (PAYD), in which the premium is determined by distance driven, and pay-how-you-drive (PHYD), in which the premium is linked to driving behavior. PHYD uses sensor information from the in-vehicle device to assess whether the driver is driving safely, based on the number and frequency of events such as sudden braking, sudden acceleration, and abrupt steering. Compared with PAYD, PHYD is considered able to capture driving accident risk more accurately and reflect it in premiums.

 Insurance companies thus devise various means of setting appropriate premiums. In reality, however, the information used to set premiums consists mainly of data on accidents that actually occurred; information on near misses that did not result in accidents is not sufficiently used.

 For example, a driver with a high frequency of near misses can be expected to have a high probability of causing an accident, and a driver with a high frequency of near misses of the kind that lead to accidents with major damage can be expected to have a high probability of causing a major accident; in practice, however, such near-miss information is not exploited in premium calculation.

 Therefore, when the near-miss detection unit 29 according to the embodiment determines that a near miss has occurred, it transmits near-miss information about that near miss to the accident estimation device 1. Based on the near-miss information received from the near-miss detection unit 29 of the vehicle 2, the accident estimation device 1 estimates the accident scene that was likely to occur and outputs accident information about that scene.

 For example, the accident estimation device 1 outputs accident type information indicating the type of accident that the near miss could have led to, and hypothetical accident information indicating the accident event that the near miss could have led to.

 By checking the estimated accident information, the driver can thus learn what kind of accident the near miss could have become, and the severity and impact had it done so, including the extent of the damage and the apportionment of responsibility for the accident.

 Insurance companies, for their part, can estimate the latent probability of an accident and its likely scale (scale of damage) by checking the estimated accident information, which can be expected to enable more appropriate premium calculation.

 Specifically, when the near-miss detection unit 29 determines that a near miss has occurred, it reads from the flash memory 26 the date-and-time data and position data of the near miss, together with the video and audio recorded for a predetermined time before and after the near miss.

 The near-miss detection unit 29 further reads from the flash memory 26 the acceleration data detected by the acceleration sensor 21, the steering speed data detected by the steering angle sensor 22, and the driver's emotion data detected by the emotion sensor 23.

 The predetermined time here is the length of the various data needed to capture the characteristics of the near-miss situation. It is, for example, 5 seconds, but is not limited to 5 seconds and may be any appropriate time determined, for example, from experiments during the design and development of the system.

 The near-miss detection unit 29 outputs these various data for the near miss to the communication unit 28. The communication unit 28 transmits the data input from the controller 27 to the accident estimation device 1 as near-miss information.

 Next, the configuration of the accident estimation device 1 will be described. The accident estimation device 1 includes a controller 10, a communication unit 11, and an accident database 12 (hereinafter, "accident DB 12"). The communication unit 11 is a communication interface that exchanges information with the vehicle 2 via the communication network N.

 The controller 10 includes a microcomputer having a CPU, ROM, and RAM, and various circuits. It includes an accident estimation model 13 and an accident information generation unit 14, which function when the CPU executes a program stored in the ROM using the RAM as a working area.

 Part or all of the controller 10 may be implemented in hardware such as an ASIC or FPGA. The accident DB 12 is an information storage device such as data flash and stores information on accident cases that occurred in the past. An example of the information stored in the accident DB 12 will be described later with reference to FIG. 4.

 Next, the operation of each component of the accident estimation device 1 will be described. On receiving near-miss information from the vehicle 2, the communication unit 11 outputs it to the accident estimation model 13 of the controller 10.

 The accident estimation model 13 is an estimation model that, given near-miss information, outputs the accident scene likely to occur in the near-miss situation corresponding to that information. It could also be realized, for example, by a matching process against a database (accident type database) of accident scene information (information identifying the type of accident) parameterized by the various items of the near-miss information; however, since accident scenes take an extremely wide variety of patterns (types), it is more practical to realize it with an AI (Artificial Intelligence) model.

 This AI model is machine-learned so as to estimate an accident scene from input near-miss information. For clarity of explanation, the accident estimation model 13 realized with an AI model is hereinafter referred to as the accident estimation AI model 13A.

 The circumstances of an accident (level of damage, etc.) are estimated from conditions immediately before the accident (collision, etc.), such as the relative speed with respect to the struck object, the collision direction (the direction relative to the vehicle, e.g. to its forward direction), and the collision point (location). The circumstances of an accident also vary with the environment around the accident site; accident footage, for example, changes greatly with the surrounding scenery.

 Increasing the number of parameter types used for estimation therefore enables detailed estimation that takes the data of each item into account. By increasing the types and detail of the information input to the accident estimation AI model 13A, and by training it on a large amount of detailed data, the accident estimation AI model 13A can produce detailed estimates.

 In practice, however, the data used for estimation must be kept to a reasonable set of items and precision in view of device performance (CPU, etc.) and AI training time. For this reason, and for clarity of explanation, this embodiment is described using an example in which the data items used for estimation are narrowed down to those with large influence. Even if other items are used, the accident situation can be estimated by similar processing.

 The accident estimation AI model 13A estimates the accident situation from near-miss information. Based on the similarity between the situation immediately before a near miss (before the accident avoidance action) and the situation immediately before an accident (the time needed for an avoidance action before impact), it estimates the accident situation that would have arisen had no appropriate avoidance action been taken in the near miss.

 Accordingly, in the training data (accident data) used to train the accident estimation AI model 13A, the input data are "state data on the vehicle's travel and the like immediately before the accident" and the ground-truth data are "the accident result (accident situation)". The input data of the training data are therefore the vehicle state data over an appropriate time window (set appropriately on the basis of experiments) whose end point is an appropriate time (likewise set on the basis of experiments) before the moment the accident occurred.

 Hereinafter, this window is referred to as the estimation data window. When the accident estimation model 13 is used (at inference time), the input data are "operation state data before the avoidance action in the near miss"; in practice, as during training, the input to the model is the vehicle state data over the estimation data window (an appropriate time window ending an appropriate time before the near miss occurred, both set on the basis of experiments).
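 The extraction of the estimation data window can be sketched as a slice of a time-stamped sensor log; the window parameters below are placeholders for the experimentally determined values:

```python
def extract_window(samples, event_time, lead, length):
    """Return the samples in the window that ends `lead` seconds before
    `event_time` (the accident or near miss) and spans `length` seconds.
    `samples` is a list of (timestamp, value) pairs; `lead` and `length`
    stand in for the experimentally determined values."""
    end = event_time - lead
    start = end - length
    return [(t, v) for t, v in samples if start <= t <= end]

log = [(t, t * 0.1) for t in range(0, 20)]  # dummy 1 Hz sensor log
window = extract_window(log, event_time=15, lead=2, length=5)
# keeps the samples with t in [8, 13]
```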

 In this embodiment, the near-miss information includes the detection results of the acceleration sensor 21 over the estimation data window at the time of the near miss (acceleration sensor information). It also includes the detection results of the steering angle sensor 22 over the estimation data window (steering angle sensor information).

 The near-miss information further includes the detection results of the emotion sensor 23 over the estimation data window at the time of the near miss (emotion sensor information), the images captured by the onboard camera 24 over the estimation data window (onboard camera video), and the audio picked up by the onboard microphone 25 over the estimation data window (onboard microphone audio).

 A method for generating (training) the accident estimation AI model 13A will now be described with reference to FIG. 2. FIG. 2 is an explanatory diagram showing the generation method of the accident estimation AI model 13A according to the embodiment. As shown in FIG. 2, the accident estimation AI model 13A includes a DNN (Deep Neural Network) 13B that performs predictive computation on input values to derive output values.

 The accident estimation AI model 13A is generated, for example, by supervised learning, in which a training data set consisting of many pairs of input values and ground-truth values is prepared. In the training data for the accident estimation AI model 13A, the input values are precursor information, i.e. information of the same kind as the near-miss information, captured immediately before an accident or near miss, and the ground-truth values are the accident scene of that accident (or of the accident expected to arise from the near miss).

 As described above, the precursor information used as input values includes the acceleration sensor information, steering angle sensor information, emotion sensor information, onboard camera video, and onboard microphone audio for the estimation data window at the time of the accident. Each accident scene used as a ground-truth value is associated with identification information (an accident scene ID) that identifies that scene.
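 A minimal sketch of this supervised setup: each training example pairs a precursor feature vector with an accident scene ID as its ground-truth label. A trivial nearest-centroid classifier stands in here for DNN 13B, and the two-element feature vectors and scene IDs are purely illustrative assumptions:

```python
from collections import defaultdict

def train_centroids(dataset):
    """dataset: list of (feature_vector, scene_id) pairs. Returns the mean
    feature vector (centroid) per accident scene ID."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for x, sid in dataset:
        sums[sid] = x if sums[sid] is None else [a + b for a, b in zip(sums[sid], x)]
        counts[sid] += 1
    return {sid: [v / counts[sid] for v in s] for sid, s in sums.items()}

def predict_scene(centroids, x):
    """Return the scene ID whose centroid is nearest to feature vector x."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda sid: dist(centroids[sid]))

# (precursor features, accident scene ID) pairs — illustrative only
data = [([9.0, 0.2], "rear_end"), ([3.0, 5.0], "side_swipe"),
        ([8.5, 0.4], "rear_end")]
model = train_centroids(data)
# predict_scene(model, [9.2, 0.3]) → "rear_end"
```

 The real embodiment trains DNN 13B on these (input, label) pairs by gradient descent; the pairing structure of the data set is the same.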

 Next, three examples of methods for creating this supervised training data will be described.

 First method of creating training data: a near-miss precursor state in the course of a near miss is extracted on the basis of the near-miss information, and accident information for accidents having an accident precursor state similar to that near-miss precursor state is extracted. Training data are then created with the precursor information corresponding to the near-miss precursor state as input data and the corresponding accident information as ground-truth data.

 Specifically, as shown in FIG. 3, the ground-truth data are accident information (accident scenes) containing precursor information similar to the near-miss precursor information over an appropriate period before the accident avoidance action in the course of the near miss (sudden deceleration, abrupt steering, or an avoidance action by the other vehicle, judged from images and the like), i.e. information similar to the information immediately before the accident in the accident information. The moment of the avoidance action may be determined from the vehicle's driving (operation) state (speed sensor information, steering angle sensor information, etc.), with the data collection period set at that moment; alternatively, as described above, the estimation data window at the time of the near miss (the moment of the avoidance action) may be determined by experiments or the like.

 In other words, since an accident would presumably have occurred had no accident avoidance action (operation) been taken in the near miss, the accident information corresponding to the near-miss information is extracted by comparing the similarity of the data in the absence of avoidance action. This training data creation can be done manually (by examining the near-miss information and accident information and applying the viewpoint above) or by computer processing (based on a similarity comparison of the near-miss information and accident information from the same viewpoint).
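 The computer-processing variant of this comparison can be sketched as a cosine-similarity match between a near-miss precursor vector and the precursor vectors of past accidents; the feature vectors, accident IDs, and threshold below are illustrative assumptions:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_accidents(near_miss_vec, accident_db, threshold=0.95):
    """Return the IDs of accidents whose precursor vectors are similar
    enough to the near-miss precursor vector to pair as training data."""
    return [aid for aid, vec in accident_db.items()
            if cosine_similarity(near_miss_vec, vec) >= threshold]

# Hypothetical precursor vectors of two past accidents
db = {"A001": [1.0, 0.0, 0.2], "A002": [0.0, 1.0, 0.9]}
# match_accidents([0.9, 0.1, 0.2], db) → ["A001"]
```

 In practice the precursor "vector" would encode the multi-modal sensor data (acceleration, steering, video features, etc.), and the similarity measure and threshold would be tuned experimentally.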

 第2の学習データ作成方法:第2の学習データ作成方法は、実際に発生した事故の事故情報に基づき事故発生過程における事故予兆状態(情報)を抽出し、当該事故予兆状態と類似のヒヤリハット予兆状態(情報)をもつヒヤリハットのヒヤリハット情報を抽出する。そして、抽出したヒヤリハット情報を入力データとし、対応する事故情報を正解データとする学習データを作成するものである。 Second learning data creation method: The second learning data creation method extracts accident precursor states (information) during the accident occurrence process based on accident information from actual accidents, and extracts near-miss information for near-misses that have near-miss precursor states (information) similar to the accident precursor states in question. The extracted near-miss information is then used as input data to create learning data with the corresponding accident information as correct answer data.

 具体的には、図3に示すように、事故情報を正解データとし、当該事故情報に対応する事故における事故発生時点から適当な時間前を終点とする事故予兆期間(事故発生前の非回避状態期間(この期間で回避動作があれば事故発生を防げた期間で、例えば事件等により定める))の事故予兆情報と類似の情報を含むヒヤリハットに対するヒヤリハット情報を入力データとする。つまり、事故において事故回避動作(操作)を行なえばヒヤリハットで済むと考えられるため、非回避回避状態前のデータの類似性比較により、事故情報に対応するヒヤリハット情報を抽出するものである。 Specifically, as shown in Figure 3, accident information is used as the correct answer data, and the input data is near-miss information for near-misses that includes information similar to accident precursor information from the accident precursor period (a non-avoidance state period before the accident (a period during which the accident could have been prevented if evasive action had been taken, determined, for example, by an incident)) that ends an appropriate time before the accident occurred for the accident corresponding to the accident information. In other words, since it is considered that an accident can be limited to a near-miss if accident avoidance action (operation) is taken in the event of an accident, near-miss information corresponding to the accident information is extracted by comparing the similarity of data from before the non-avoidance state.

 なお、この学習データ作成方法も、人の手による作成(ヒヤリハット情報および事故情報を見て、上述の観点に基づき作成)、コンピュータ処理による作成(上述の観点によるヒヤリハット情報および事故情報の類似度比較に基づき作成)が可能である。 This learning data creation method can also be done manually (by looking at near-miss information and accident information and creating it based on the above-mentioned perspectives) or by computer processing (by comparing the similarities between near-miss information and accident information based on the above-mentioned perspectives).

 第3の学習データ作成方法:ヒヤリハット情報に基づきヒヤリハット発生過程における回避動作前のヒヤリハット情報を抽出する。そして、回避動作前の動作が維持された場合の車両の動作を予測し(相手側車両が存在する場合は相手側車両の動作も予測)、事故情報を予測生成する。なお、事故情報の予測生成用データ(事故画像や車両の破損状況を予測生成するための各種データ)については、過去の事故情報に基づき生成しておくことになる。例えば、車両画像や風景画像の画像部品を用いて、また物体の動きに関する各種力学的算式や係数を用いて、コンピュータグラフィック技術により事故画像を作成する。 Third learning data creation method: Based on the near-miss information, the near-miss information from before the avoidance action in the near-miss occurrence process is extracted. Then, the vehicle's behavior is predicted for the case where the pre-avoidance behavior is maintained (if another party's vehicle is present, that vehicle's behavior is also predicted), and accident information is generated by prediction. Note that the data for predictive generation of accident information (various data for predictively generating accident images and vehicle damage conditions) is generated in advance based on past accident information. For example, accident images are created with computer graphics technology using image components from vehicle and scenery images, together with various mechanical formulas and coefficients relating to object motion.

 なお、本実施形態では、各事故について識別コードを付与して当該識別コードを正解データとしている。そして、各事故に対する各種情報については、これら各種情報が識別コードをキーデータとするデータレコードに記憶されたデータベースにより管理・記憶されている。この場合、学習データにおける正解データは、各事故に対する識別コードとなる。 In this embodiment, an identification code is assigned to each accident and this identification code is used as the correct answer data. Various information about each accident is managed and stored in a database in which this information is stored in data records that use the identification code as key data. In this case, the correct answer data in the learning data is the identification code for each accident.

 また、事故種別データベースを用いた事故推定モデル13の場合、例えば上述の学習データにおける入力データと対応する正解データを関連付けた情報群が事故種別データベースとなる。 Furthermore, in the case of an accident estimation model 13 that uses an accident type database, for example, a group of information that associates input data in the above-mentioned learning data with corresponding correct answer data becomes the accident type database.

 例えば、ある事故シーンIDには、「対向車の無理な右折による衝突事故」という事故シーン(事故シーンにおける各種データ)が対応付けられる。他の事故シーンとしては、例えば、「無理な右折による直進対向車との衝突事故」、「前方車両の急ブレーキによる前方車両との追突事故」、「前方不注意による前方車両との追突事故」、「無理な追い越しによる接触事故」など、様々なシーンが用意される。 For example, a certain accident scene ID is associated with an accident scene (various data in the accident scene) called "a collision caused by an oncoming vehicle making an unreasonable right turn." Other accident scenes include, for example, "a collision with an oncoming vehicle going straight due to an unreasonable right turn," "a rear-end collision with a vehicle in front due to the vehicle in front suddenly braking," "a rear-end collision with a vehicle in front due to inattention to the road ahead," and "a collision caused by unreasonable overtaking."
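The association between accident scene IDs and accident scenes can be sketched as a simple keyed table. The IDs, descriptions, and structure below are purely illustrative placeholders for the accident type database.

```python
# Hypothetical accident type database: scene IDs keyed to scene descriptions.
ACCIDENT_SCENES = {
    "S001": "Collision caused by an oncoming vehicle making a forced right turn",
    "S002": "Collision with a straight-ahead oncoming vehicle due to a forced right turn",
    "S003": "Rear-end collision caused by the vehicle ahead braking suddenly",
    "S004": "Rear-end collision with the vehicle ahead due to inattention",
    "S005": "Contact accident caused by forced overtaking",
}

def lookup_scene(scene_id):
    # Returns the scene description for an ID, or None if unregistered.
    return ACCIDENT_SCENES.get(scene_id)
```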

 DNN13Bは、順次入力される学習データセットにおける各学習データのヒヤリハット情報(予兆情報)から、発生しそうな事故シーンの推定値を順次導出し、推定結果として出力する。コントローラ10は、DNN13Bから出力される事故シーンの推定結果と、入力値に対応する学習データの正解値との誤差が小さくなるように、誤差逆伝搬法等の処理を用いてDNN13Bにおける各階層の重み付け係数等を更新することによって、事故推定AIモデル13Aに機械学習を行わせる。 DNN13B sequentially derives estimated values of likely accident scenes from the near-miss information (predictive information) of each piece of learning data in the sequentially input learning data set, and outputs the estimated values. Controller 10 causes accident estimation AI model 13A to perform machine learning by updating the weighting coefficients of each layer in DNN13B using processing such as backpropagation, so as to reduce the error between the estimated accident scene results output from DNN13B and the correct values of the learning data corresponding to the input values.

 つまり、コントローラ10は、AI生成プログラムを実行することによって、事故推定AIモデル13Aが入力されるヒヤリハットの状況に関する情報から推定する事故シーンが、入力されるヒヤリハットの状況で発生した事故シーン(正解データ)となるように、事故推定AIモデル13Aの機械学習を行う。 In other words, by executing the AI generation program, the controller 10 performs machine learning on the accident estimation AI model 13A so that the accident scene estimated by the accident estimation AI model 13A from the information about the near-miss situation inputted becomes an accident scene (correct data) that occurred in the input near-miss situation.
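The training loop described above (deriving a scene estimate, comparing it with the correct answer, and updating weights to reduce the error) can be illustrated with a minimal stand-in for DNN 13B: a single-layer softmax classifier trained by gradient descent. The feature vectors and scene labels are hypothetical; a real implementation would use a deep network and far richer near-miss information.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def train_scene_classifier(samples, n_features, n_scenes, lr=0.5, epochs=1000):
    """Trains weights so that the estimated accident scene approaches the
    correct-answer scene of each learning sample (cross-entropy gradient)."""
    # One weight row per accident scene; +1 column for a bias term.
    weights = [[0.0] * (n_features + 1) for _ in range(n_scenes)]
    for _ in range(epochs):
        for features, answer in samples:
            x = features + [1.0]  # append bias input
            probs = softmax([sum(w * v for w, v in zip(row, x)) for row in weights])
            for scene in range(n_scenes):
                # Gradient of cross-entropy: predicted prob minus 1 for the answer.
                grad = probs[scene] - (1.0 if scene == answer else 0.0)
                for i in range(n_features + 1):
                    weights[scene][i] -= lr * grad * x[i]
    return weights

def estimate_scene(weights, features):
    x = features + [1.0]
    scores = [sum(w * v for w, v in zip(row, x)) for row in weights]
    return scores.index(max(scores))
```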

 これにより、コントローラ10は、入力されるヒヤリハット情報から、そのヒヤリハット情報に対応するヒヤリハットの状況において発生しそうな事故シーンを推定できる事故推定モデル13を生成することができる。 This allows the controller 10 to generate an accident estimation model 13 that can estimate, from the input near-miss information, the accident scene that is likely to occur in the near-miss situation corresponding to the near-miss information.

 図1に戻り、コントローラ10は、こうして生成した事故推定AIモデル13A等によって構成された事故推定モデル13に、通信部11を介して入力された車両2で発生したヒヤリハット情報を出力する。そしてコントローラ10は、事故推定モデル13の推定処理によって、ヒヤリハットの状況において発生しそうな事故シーンを推定する。 Returning to Figure 1, the controller 10 feeds the near-miss information generated in the vehicle 2 and input via the communication unit 11 into the accident estimation model 13, which is configured from the accident estimation AI model 13A and the like generated in this way. The controller 10 then estimates, through the estimation processing of the accident estimation model 13, an accident scene that is likely to occur in the near-miss situation.

 つまり、事故推定モデル13は、ヒヤリハット情報が入力されると、ヒヤリハット情報に対応するヒヤリハットの状況において発生しそうな事故シーンを推定し、推定結果を事故情報生成部14に出力する。このとき、事故推定モデル13は、推定した事故シーンの事故シーンIDとともに、事故の推定に使用したヒヤリハット情報を事故情報生成部14に出力する。 In other words, when near-miss information is input, the accident inference model 13 infers an accident scene that is likely to occur in the near-miss situation corresponding to the near-miss information, and outputs the inference result to the accident information generation unit 14. At this time, the accident inference model 13 outputs the near-miss information used to infer the accident to the accident information generation unit 14, along with the accident scene ID of the inferred accident scene.

 事故情報生成部14は、事故推定モデル13によって推定された事故シーンに関する事故情報を生成して情報出力装置15に出力する。情報出力装置15は、事故情報の画像を表示可能なディスプレイである。また、情報出力装置15は、事故情報の音声を出力可能なスピーカーを備える。さらに、情報出力装置15は、事故情報を視聴する視聴者を撮像するカメラを備える。 The accident information generation unit 14 generates accident information about the accident scene estimated by the accident estimation model 13 and outputs it to the information output device 15. The information output device 15 is a display capable of displaying an image of the accident information. The information output device 15 also includes a speaker capable of outputting the audio of the accident information. Furthermore, the information output device 15 includes a camera that captures images of viewers watching the accident information.

 情報出力装置15は、事故情報を視聴する視聴者の撮像画像を事故推定装置1のコントローラ10に出力する。なお、情報出力装置15は、事故情報を視聴する視聴者の心拍数を計測する心拍センサに接続されるように構成されてもよい。この場合、情報出力装置15は、視聴者の心拍数を示す情報をコントローラ10に出力する。 The information output device 15 outputs a captured image of the viewer viewing the accident information to the controller 10 of the accident estimation device 1. The information output device 15 may be configured to be connected to a heart rate sensor that measures the heart rate of the viewer viewing the accident information. In this case, the information output device 15 outputs information indicating the viewer's heart rate to the controller 10.

 次に、図4~図8を参照して、事故情報生成部14の構成および動作について説明する。図4に示すように、事故情報生成部14は、映像情報生成部31と、損害情報生成部32とを備える。映像情報生成部31は、映像データベース(DB)33と、映像生成モデル34と、許容度判定部35とを備える。 Next, the configuration and operation of the accident information generation unit 14 will be described with reference to Figures 4 to 8. As shown in Figure 4, the accident information generation unit 14 includes a video information generation unit 31 and a damage information generation unit 32. The video information generation unit 31 includes a video database (DB) 33, a video generation model 34, and a tolerance determination unit 35.

 次に事故映像の生成方法について説明する。 Next, a method for generating an accident video will be described.
 第1の事故映像の生成方法:映像DB33には、様々な事故シーンの実写映像、VR(Virtual Reality)映像、およびCG(Computer Graphics)映像が記憶されている。例えば、映像DB33には、事故シーンIDと、事故映像とが対応付けられた図5に示すようなデータが記憶される。なお、映像DB33のデータは、設計開発者等によって、事前に収集、作成され、映像DB33に記憶される。また、事故映像に加え、事故音声等のデータも記憶して、事故画像再生の際に事故音声も同期して再生するようにしてもよい。 First accident video generation method: The video DB 33 stores live-action videos, VR (Virtual Reality) videos, and CG (Computer Graphics) videos of various accident scenes. For example, the video DB 33 stores data such as that shown in FIG. 5, in which accident scene IDs are associated with accident videos. The data in the video DB 33 is collected and created in advance by designers, developers, and the like, and stored in the video DB 33. In addition to the accident videos, data such as accident audio may also be stored so that the accident audio is played back in synchronization when the accident images are played back.

 映像生成モデル34は、事故推定モデル13から入力される事故シーンIDを用いて映像DB33のデータを検索して、ヒヤリハット情報に対応する事故映像を抽出する。映像生成モデル34は、この抽出した事故映像を、車両2が遭遇したヒヤリハットの状況において発生しそうな事故シーンのシミュレーション動画(画像)とする。なお、車両2が遭遇したヒヤリハットの動画と、この事故シーンのシミュレーション動画を繋ぎ合わせて、事故シーンのシミュレーション動画としてもよい。この場合、現実の画像(動画)が含まれる画像となるので、現実味が高まり、効果的な画像となる。 The video generation model 34 searches the data in the video DB 33 using the accident scene ID input from the accident estimation model 13, and extracts the accident video corresponding to the near-miss information. The video generation model 34 uses this extracted accident video as a simulation video (image) of an accident scene that is likely to occur in the near-miss situation encountered by vehicle 2. Note that the video of the near-miss encountered by vehicle 2 may be joined with the simulation video of this accident scene to create a simulation video of the accident scene. In this case, the image contains real images (video), which increases the sense of realism and makes the image more effective.
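The retrieval by scene ID and the joining of near-miss footage with the retrieved accident footage can be sketched as follows. The frame lists, scene IDs, and the avoidance-action index are all hypothetical placeholders for real video data.

```python
# Hypothetical video DB: accident scene IDs mapped to stored accident
# footage, here represented as lists of frame labels.
VIDEO_DB = {
    "S001": ["impact_f1", "impact_f2", "impact_f3"],
}

def build_simulation_video(scene_id, near_miss_frames, avoidance_index):
    """Joins the near-miss footage up to just before the avoidance action
    with the accident footage retrieved by accident scene ID."""
    accident_frames = VIDEO_DB[scene_id]
    return near_miss_frames[:avoidance_index] + accident_frames
```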

 第2の事故映像の生成方法:第1の事故映像の生成方法は、事故発生地点の背景等の周囲環境も考慮した事故種別毎の事故映像を映像DB33に記憶するようにしているため、データ容量が多くなる課題がある。第2の事故映像の生成方法では、事故映像を事故自体の映像、つまり事故対象物体である事故車両、事故物体(人、物品等)の映像と、背景映像、ここでは更に不変映像(事故の時間長レベルでの不変映像で、例えば建物等の静止物の画像等)と変化映像(人、動物、ディスプレイや信号機(表示画像や点滅による変化))のデータベースを設け、これらの映像を合成することにより事故映像を生成する。 Second accident video generation method: Because the first accident video generation method stores in the video DB 33 an accident video for each accident type that also reflects the surrounding environment, such as the background of the accident location, it has the problem of requiring a large data volume. The second accident video generation method instead provides databases of video of the accident itself, that is, video of the accident vehicles and accident objects (people, articles, etc.) involved, and of background video, here further divided into unchanging video (video that does not change over the time scale of an accident, for example images of stationary objects such as buildings) and changing video (people, animals, displays, and traffic lights, which change through displayed images or blinking); the accident video is then generated by compositing these videos.

 例えば、映像生成モデル34は、ヒヤリハット情報における車載カメラ24の映像内の静止物の背景画像、例えば交差点形状に関する情報を画像認識し、認識した情報を元にヒヤリハット地点と背景画像、例えば交差点形状が類似した背景映像を映像DB33から選択する。ここでの交差点形状に関する情報は、例えば、T字路と言った道路形態の情報、車線数の情報、および信号の有無を示す情報などである。なお、静止物の判定は、例えば動画中における物体の移動速度と撮影車両の移動速度との比較により行うことができる(両者の移動速度が同じ場合は静止物)。 For example, the video generation model 34 performs image recognition on the background image of stationary objects in the video from the in-vehicle camera 24 included in the near-miss information, for example information about the intersection shape, and based on the recognized information selects from the video DB 33 a background video whose background image, for example its intersection shape, is similar to that of the near-miss location. The information about the intersection shape here includes, for example, road-configuration information such as a T-junction, the number of lanes, and information indicating the presence or absence of traffic lights. Note that whether an object is stationary can be determined, for example, by comparing the object's moving speed in the video with the moving speed of the camera vehicle (if the two speeds are the same, the object is stationary).
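The stationary-object test described in the note can be sketched as a simple speed comparison. The tolerance value is an assumption, since real apparent speeds will never match exactly.

```python
def is_stationary(object_speed_kmh, ego_speed_kmh, tolerance_kmh=2.0):
    """An object whose apparent moving speed in the video matches the
    camera vehicle's own speed (within a tolerance) is judged stationary."""
    return abs(object_speed_kmh - ego_speed_kmh) <= tolerance_kmh
```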

 また、映像生成モデル34は、車載カメラ24の映像から、ヒヤリハットに関与したと想定される人物の有無、および、その位置に関する当事者情報を画像認識し、認識した当事者情報を元にヒヤリハットにおける当事者の状況と類似した当事者情報の事故映像(背景等を含まない当事者の映像)を映像DB33から選択する。 In addition, the video generation model 34 performs image recognition on the video from the in-vehicle camera 24 to determine the presence or absence of people believed to have been involved in the near miss, as well as party information relating to their location, and, based on the recognized party information, selects accident video (video of the parties involved excluding the background, etc.) from the video DB 33 that contains party information similar to the situation of the parties involved in the near miss.

 また、映像生成モデル34は、車載カメラ24の映像から、ヒヤリハットに関与したと想定される車両の有無、および、その位置に関する情報を画像認識し、認識した関与車両情報を元にヒヤリハットの状況における関与車両と類似した情報の事故映像(背景等を含まない関与車両の映像)を映像DB33から選択する。そして、映像生成モデル34は、選択した背景映像、当事者映像、および関与車両映像を合成した映像を情報出力装置15に出力して、例えば、ヒヤリハットに遭遇した当事者のドライバに視聴させる。 The video generation model 34 also performs image recognition on the video from the in-vehicle camera 24 to determine whether or not there are any vehicles believed to have been involved in the near-miss, as well as information regarding their locations. Based on the recognized involved vehicle information, the video generation model 34 selects accident video (video of the involved vehicle not including the background, etc.) from the video DB 33 that has information similar to that of the involved vehicle in the near-miss situation. The video generation model 34 then outputs a video that combines the selected background video, video of the parties involved, and video of the involved vehicle to the information output device 15, for viewing by, for example, the driver of the party involved in the near-miss.

 なお、この第2の事故映像の生成方法の場合、映像DB33には、背景画像選択用のデータベース(事故背景画像要素(交差点情報等)とシミュレーション画像生成用の背景画像が関連付けられた情報のデータベース)、当事者画像選択用のデータベース(事故当事者要素(各当事者の位置情報等)とシミュレーション画像生成用の当事者画像が関連付けられた情報のデータベース)、および関連車両画像選択用のデータベース(事故関連車両要素(各関連車両の位置情報等)とシミュレーション画像生成用の関連車両画像が関連付けられた情報のデータベース)が含まれることになる。 In the case of this second accident video generation method, the video DB 33 will include a database for selecting background images (a database of information associating accident background image elements (intersection information, etc.) with background images for generating simulation images), a database for selecting images of parties involved in the accident (a database of information associating accident party elements (location information of each party, etc.) with images of parties involved for generating simulation images), and a database for selecting related vehicle images (a database of information associating accident-related vehicle elements (location information of each related vehicle, etc.) with related vehicle images for generating simulation images).

 なお、映像生成モデル34は、動画の場合、ヒヤリハットの画像に事故のシミュレーション画像をつなぎ合わせる(ヒヤリハットの画像における回避動作直前のタイミングで、シミュレーション画像につなぎ合わせる)ことによって生成した映像を情報出力装置15に出力してもよい。 In the case of video, the video generation model 34 may output to the information output device 15 a video generated by stitching a near-miss image with a simulated accident image (splicing the near-miss image with the simulation image just before the avoidance action is taken in the near-miss image).

 これにより、事故推定装置1は、ドライバ自身が体験したヒヤリハットの状況から発生しうる事故の恐ろしさをドライバに理解させることによって、交通安全教育効果の向上に寄与できる。 In this way, the accident estimation device 1 can help drivers understand the horror of accidents that can occur from near-miss situations they have experienced themselves, thereby contributing to improving the effectiveness of traffic safety education.

 また、映像生成モデル34は、映像DB33から選択した事故映像における事故対象物(当事者画像及び関連車両画像(背景画像無))に対し、ヒヤリハット発生時における実写風景画像(実撮影画像あるいは地図データに含まれる事故地点の実写風景画像)を重畳して映像を情報出力装置15に出力するように構成されてもよい。これにより、シミュレーション動画は、風景などが実写画像になるので、視聴者に与えるインパクトを強めることができる。 The video generation model 34 may also be configured to output video to the information output device 15 by superimposing a real-life scenery image (a real-life photographed image or a real-life scenery image of the accident location included in the map data) at the time of the near-miss onto the accident object (images of the parties involved and related vehicle images (no background image)) in the accident video selected from the video DB 33. This allows the scenery and other aspects of the simulation video to appear as real-life images, thereby increasing the impact on the viewer.

 第3の事故映像の生成方法:第2の事故映像の生成方法は、第1の事故映像の生成方法に比べてデータ容量を少なくできるものの、事故当事者及び事故関連車両の映像を映像DB33に記憶するようにしているため、データ容量が多くなると言う課題がある。第3の事故映像の生成方法では、事故対象物体(当事者及び関連車両)の画像をヒヤリハット時における撮影画像の画像認識処理により推定される物理特性等に基づき生成するものである。そして、生成した事故車両、事故物体(人、物品等)の映像と、背景映像とを合成して事故画像を生成する。 Third accident video generation method: The second accident video generation method requires less data than the first accident video generation method, but because video of the parties involved in the accident and accident-related vehicles is stored in the video DB 33, there is the issue of a large data volume. The third accident video generation method generates images of the accident subject objects (parties involved and related vehicles) based on physical characteristics estimated by image recognition processing of images taken at the time of the near miss. The generated images of the accident vehicles and accident objects (people, objects, etc.) are then combined with background video to generate an accident image.

 具体的には、ヒヤリハット時における撮影画像に基づき、ヒヤリハットの当事者及び関連車両の位置、移動速度、重量を推測する。尚、位置、移動速度については、画像認識による各物体の検出と、その検出位置および位置変化に基づき推定する。また重量については、画像認識による各物体の種別とサイズを検出し、当該検出種別、サイズに基づき各物体の重量を推定する。なお、例えば、物体の種別とサイズに対して重量が関連付けられたデータベースを用いて、各物体の重量を推定すれば良い。 Specifically, the positions, moving speeds, and weights of the parties involved in the near miss and the related vehicles are estimated based on the images captured at the time of the near miss. The position and moving speed are estimated from the detection of each object by image recognition and from the detected position and its change. The weight is estimated by detecting the type and size of each object by image recognition and estimating the weight from the detected type and size. For example, the weight of each object may be estimated using a database in which weights are associated with object types and sizes.
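The estimation of speed from position change and of weight from a type/size database can be sketched as follows; the table entries, size classes, and units are illustrative assumptions.

```python
# Hypothetical table associating (object type, size class) with a typical
# weight in kilograms, used to estimate the weight of recognized objects.
WEIGHT_TABLE = {
    ("passenger_car", "small"): 1000,
    ("passenger_car", "large"): 1800,
    ("truck", "large"): 8000,
    ("pedestrian", "adult"): 65,
}

def estimate_weight(obj_type, size_class):
    # Returns the typical weight for the detected type/size, or None.
    return WEIGHT_TABLE.get((obj_type, size_class))

def estimate_speed(pos_t0, pos_t1, dt):
    # Speed from the change of the detected position between two frames.
    return (pos_t1 - pos_t0) / dt
```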

 そして、推定された各物体の位置、移動速度、重量に基づき物理法則(主に力学における法則)を用いて、各物体の衝突状況や衝突後の破損状況を推定し、コンピュータグラフィクス技術を用いてヒヤリハットから発生が懸念される事故画像を生成する。 Then, based on the estimated position, movement speed, and weight of each object, the laws of physics (mainly the laws of mechanics) are used to estimate the collision situation and post-collision damage of each object, and computer graphics technology is used to generate images of potential accidents that could occur from near misses.
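As one example of applying the laws of mechanics, a one-dimensional perfectly inelastic collision can be computed from the estimated weights and speeds: conservation of momentum gives the common post-impact velocity, and the dissipated kinetic energy can serve as a rough proxy for damage severity. This is a deliberately simplified sketch, not the actual estimation procedure.

```python
def perfectly_inelastic_collision(m1, v1, m2, v2):
    """1-D perfectly inelastic collision (the vehicles lock together).
    Returns (post-impact velocity, kinetic energy dissipated)."""
    v_after = (m1 * v1 + m2 * v2) / (m1 + m2)       # momentum conservation
    ke_before = 0.5 * m1 * v1 ** 2 + 0.5 * m2 * v2 ** 2
    ke_after = 0.5 * (m1 + m2) * v_after ** 2
    return v_after, ke_before - ke_after
```

For example, two 1000 kg vehicles colliding head-on at 10 m/s each come to rest, dissipating all of their kinetic energy into deformation.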

 また、映像DB33に、例えば、図6および図7に示すテーブルを記憶しておき、これらテーブルのデータを用いて事故画像を生成してもよい。図6に示すテーブルは、事故シーンIDと事故における各種物理的パラメータとが対応付けられたテーブルである。事故における各種物理的パラメータは、例えば、当事者車両、事故当時車両の相対速度、事故当時車両の衝突方向、および事故当時車両の衝突位置を示すパラメータである。図7に示すテーブルは、車種と車両画像とが対応付けられたテーブルである。 Furthermore, tables such as those shown in Figures 6 and 7 may be stored in the video DB 33, and accident images may be generated using the data in these tables. The table shown in Figure 6 associates accident scene IDs with physical parameters of the accident. The physical parameters of an accident are, for example, parameters indicating the vehicles involved, their relative speed at the time of the accident, their collision direction, and their collision position. The table shown in Figure 7 associates vehicle models with vehicle images.

 図6および図7に示すテーブルが映像DB33に記憶される場合、映像生成モデル34は、ヒヤリハット情報から車載カメラ24の映像と類似した事故における各種物理的パラメータを用いてシミュレーション映像を生成する。 When the tables shown in Figures 6 and 7 are stored in the video DB 33, the video generation model 34 generates a simulation video from near-miss information using various physical parameters of an accident similar to the video captured by the in-vehicle camera 24.

 具体的には、映像生成モデル34は、事故における各種物理的パラメータを用いて対象車両の動き、破損状態を推定し、CGにより事故対象車両画像を生成する。そして、映像生成モデル34は、ヒヤリハットにおける風景(実撮影画像あるいは地図データに含まれる風景画像)にこの事故画像を重畳してシミュレーション映像を生成する。 Specifically, the image generation model 34 estimates the movement and damage state of the target vehicle using various physical parameters in the accident, and generates an image of the accident target vehicle using CG. The image generation model 34 then superimposes this accident image on the scenery at the near miss (actually photographed image or scenery image included in map data) to generate a simulation image.

 この場合、映像生成モデル34は、対象車両の車種の画像データを図7に示すテーブルから読み出し、CGに用いるように構成されてもよい。車両の色は、車種に含めてもよく、車両画像に着色してもよい。 In this case, the image generation model 34 may be configured to read image data for the target vehicle's model from the table shown in Figure 7 and use it for the CG. The vehicle's color may be included in the vehicle-model entry, or the vehicle image may be colored accordingly.

 また、映像生成モデル34は、ヒヤリハットの画像から上記したシミュレーション映像を生成する映像情報生成AIモデル34A(図9参照)によって構成されてもよい。映像情報生成AIモデル34Aについては、図9~図15を参照して後述する。 The image generation model 34 may also be configured with an image information generation AI model 34A (see Figure 9) that generates the above-mentioned simulation image from a near-miss image. The image information generation AI model 34A will be described later with reference to Figures 9 to 15.

 また、映像生成モデル34(映像情報生成AIモデル34A)は、事故における自身傷害状況、人身傷害の画像、物品損害状況、および物品損害の画像のうち少なくとも一つの情報を含めた事故シーンのシミュレーション動画を生成して出力する。これにより、事故推定装置1は、ヒヤリハットの状況から発生しうる事故の恐ろしさを、より現実的にドライバに理解させることによって、交通安全教育効果の向上に寄与できる。 In addition, the video generation model 34 (video information generation AI model 34A) generates and outputs a simulation video of the accident scene that includes at least one of the following: the driver's own injury status in the accident, images of personal injury, the property damage status, and images of property damage. This allows the accident estimation device 1 to contribute to improving the effectiveness of traffic safety education by helping drivers understand more realistically the horror of accidents that can arise from near-miss situations.

 ただし、ドライバに視聴させる事故シーンのシミュレーション動画内容によっては、交通安全教育効果を十分に向上できない場合がある。例えば、シミュレーション動画内容が過激な場合、動画の過激さに対する許容度が低いドライバは、シミュレーション動画から目を背けてしまい、シミュレーション動画の重要性を理解できない場合がある。 However, depending on the content of the accident scene simulation video that drivers are shown, it may not be possible to fully improve the effectiveness of traffic safety education. For example, if the content of the simulation video is too extreme, drivers with a low tolerance for extreme content may turn away from the simulation video and fail to understand its importance.

 そこで、映像生成モデル34は、事故シーンのシミュレーション動画を視聴した視聴者の反応に応じて、同一の視聴者へ次に視聴させるシミュレーション動画の内容を加工する。他、映像情報生成部31の後段に、視聴者の反応(後述の許容度)に応じた画像加工を行う画像加工部を別途設ける構成でシミュレーション動画の内容を加工することも可能である。 The video generation model 34 therefore processes the content of the simulation video to be viewed next by the same viewer, depending on the viewer's reaction to the accident scene simulation video. Alternatively, the content of the simulation video can be processed by providing a separate image processing unit downstream of the video information generation unit 31 that processes images according to the viewer's reaction (tolerance, described below).

 映像情報生成部31は、初めて動画を視聴するドライバの場合、過激度を初期値に設定したシミュレーション動画を視聴させる。この場合、映像情報生成部31は、例えば、実写映像ではないCGのみの映像とし、人との接触シーンをカットしたシミュレーション動画を生成して視聴させる。また、映像情報生成部31は、登場する車両2の車種や色をヒヤリハットの状況に登場した車両とは異なる車種および色に加工したシミュレーション動画を生成して視聴させる。つまり、映像情報生成部31は、生成画像の実画像に対する再現度(類似度)を調整することにより、視聴ユーザに対する生成画像の刺激の強さ(画像の過激さ)を調整する。 For a driver viewing a video for the first time, the video information generation unit 31 presents a simulation video whose extremeness is set to an initial value. In this case, the video information generation unit 31 generates and presents, for example, a CG-only video rather than live-action footage, with scenes of contact with people cut out. The video information generation unit 31 also generates and presents a simulation video in which the model and color of the vehicle 2 that appears are changed from those of the vehicle that appeared in the near-miss situation. In other words, the video information generation unit 31 adjusts the strength of the stimulus the generated image gives the viewing user (the extremeness of the image) by adjusting how faithfully the generated image reproduces (resembles) the real image.

 そして、映像情報生成部31は、シミュレーション動画を視聴した視聴者の画像を情報出力装置15から取得し、許容度判定部35により視聴者の過激度に対する許容度を判定する。映像情報生成部31は、情報出力装置15から視聴者の心拍数を取得して、心拍数からも許容度を判定してもよい。 The video information generation unit 31 then acquires images of the viewer who has watched the simulation video from the information output device 15, and determines the viewer's tolerance for extreme content using the tolerance determination unit 35. The video information generation unit 31 may also acquire the viewer's heart rate from the information output device 15 and determine tolerance based on the heart rate as well.

 映像情報生成部31は、許容度が閾値未満の場合、例えば視聴者がシミュレーション動画を注視できていないことを画像認識した場合、次に視聴させるシミュレーション動画の過激度を下げる。映像情報生成AIモデル34Aは、許容度が閾値以上の場合、例えば視聴者がシミュレーション動画を注視できていることを画像認識した場合、次に視聴させるシミュレーション動画の過激度を上げる。 If the tolerance is below the threshold, for example if it recognizes from the captured image that the viewer cannot keep watching the simulation video, the video information generation unit 31 lowers the extremeness of the next simulation video to be viewed. If the tolerance is at or above the threshold, for example if it recognizes from the image that the viewer is able to keep watching the simulation video, the video information generation AI model 34A raises the extremeness of the next simulation video to be viewed.

 映像情報生成部31は、過激度を下げる場合、例えば、動画の解像度を徐々に低下させる、または、ぼかし処理を施すなどの画像処理を行う。また、映像情報生成部31は、過激度を上げる場合、例えば、動画の解像度を上げるなどの画像処理を行う。他、画像における実画像の比率を調整する(過激度を下げる場合、実画像の比率を下げ、CGの比率を上げる)、動画・静止画の比率を調整する(過激度を下げる場合、動画の比率を下げ、静止画の比率を上げる)等の方法も適用できる。 When lowering the extremeness, the video information generation unit 31 performs image processing such as gradually lowering the resolution of the video or applying blurring. When raising the extremeness, it performs image processing such as raising the resolution of the video. Other applicable methods include adjusting the proportion of real images in the image (to lower the extremeness, lower the proportion of real images and raise the proportion of CG) and adjusting the proportion of video to still images (to lower the extremeness, lower the proportion of video and raise the proportion of still images).

 さらに、過激度を上げる場合、映像情報生成AIモデル34Aは、過激なシーン、例えば人との接触シーンをカットせずに視聴させる、登場する車両2の車種および色をヒヤリハットの状況に登場した車両と同一にするなど、より現実味が増すようなシミュレーション画像(動画、あるいは静止画)にする。 Furthermore, when raising the extremeness, the video information generation AI model 34A makes the simulation image (video or still image) more realistic, for example by showing extreme scenes, such as contact with a person, without cutting them, or by making the model and color of the vehicle 2 that appears the same as those of the vehicle that appeared in the near-miss situation.

 これにより、事故推定装置1は、過激度に対する耐性が異なる様々なドライバに、そのドライバの耐性に応じて確実に視聴されるような事故シーンのシミュレーション画像を提供することで、交通安全教育の向上に寄与することができる。 As a result, the accident estimation device 1 can contribute to improving traffic safety education by providing simulated images of accident scenes that can be reliably viewed by various drivers with different tolerances for extreme situations, according to their tolerance level.
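The feedback loop on extremeness described above can be sketched as a simple threshold rule. The numeric levels, threshold, and clamping range are illustrative assumptions; the actual judgment would come from image recognition of the viewer (and optionally heart rate).

```python
def next_extremeness(current, tolerance, threshold, step=1,
                     min_level=0, max_level=5):
    """Raises or lowers the extremeness level for the next simulation
    video depending on the viewer's judged tolerance, clamped to a range."""
    if tolerance >= threshold:
        level = current + step   # viewer kept watching: raise extremeness
    else:
        level = current - step   # viewer looked away: lower extremeness
    return max(min_level, min(max_level, level))
```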

 次に、損害情報生成部32の動作について説明する。損害情報生成部32は、事故推定モデル13から入力される事故シーンの事故シーンIDと事故DB12とに基づいて、ヒヤリハットの状況から実際に事故が発生した場合に、事故によって生じる損害に関する損害情報を生成して出力する。 Next, the operation of the damage information generation unit 32 will be described. Based on the accident scene ID of the accident scene input from the accident estimation model 13 and the accident DB 12, the damage information generation unit 32 generates and outputs damage information regarding damages caused by an accident when an actual accident occurs from a near miss situation.

 これにより、事故推定装置1は、ヒヤリハットの状況から実際に事故が発生した場合に、ドライバが被る損害をヒヤリハット発生当事者のドライバに認識させ、安全運転を促すことにより、交通安全教育効果の向上に寄与できる。 As a result, the accident estimation device 1 can help drivers involved in near-misses recognize the damage they would suffer if an actual accident were to occur from a near-miss situation, encouraging them to drive safely and thereby contributing to improving the effectiveness of traffic safety education.

 図8に示すように、事故DB12には、事故シーンIDと損害(対人対物賠償・修理・保険)情報とが対応付けられたテーブルが記憶される。損害(対人対物賠償・修理・保険)情報には、その事故が発生した場合の責任比率、ドライバが被る損害の金額、事故に保険を適用した後の保険等級および保険料の変化などの情報が含まれている。つまり、これらの損害(修理・保険)情報は保険料の算出に用いることができる情報で、保険料算定のための保険料算定情報と言える。 As shown in Figure 8, the accident DB 12 stores a table that associates accident scene IDs with damage (personal injury and property damage compensation, repairs, and insurance) information. Damage (personal injury and property damage compensation, repairs, and insurance) information includes information such as the liability ratio in the event of the accident, the amount of damage suffered by the driver, and changes in insurance grade and insurance premiums after insurance is applied to the accident. In other words, this damage (repair and insurance) information can be used to calculate insurance premiums, and can be considered insurance premium calculation information for calculating insurance premiums.

 損害情報生成部32は、例えば、事故推定モデル13から事故シーンIDが入力されると、その事故シーンIDに対応付けられている損害情報を事故DB12から検索して読み出し、読み出した損害情報を事故情報として出力する。 For example, when an accident scene ID is input from the accident estimation model 13, the damage information generation unit 32 searches and reads out the damage information associated with that accident scene ID from the accident DB 12, and outputs the read damage information as accident information.
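The lookup of damage information by accident scene ID (cf. the table of Figure 8) can be sketched as follows; all field names and values are hypothetical.

```python
# Hypothetical accident DB rows: damage (compensation/repair/insurance)
# information keyed by accident scene ID, as in the Figure 8 table.
ACCIDENT_DB = {
    "S001": {
        "liability_ratio": 0.3,        # driver's share of fault
        "damage_amount_yen": 850000,   # repair + compensation estimate
        "grade_change": -3,            # insurance grade change after a claim
        "premium_change_yen": 42000,   # annual premium increase
    },
}

def lookup_damage_info(scene_id):
    # Returns the damage information for the scene ID, or None if absent.
    return ACCIDENT_DB.get(scene_id)
```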

 事故推定装置1は、かかる損害情報をドライバに提供することにより、ドライバに事故後の損害状況、例えば車両の修理費用や保険に要する自己負担が増大することを認識させて安全運転を促すことで、交通安全教育効果の向上に寄与できる。 By providing such damage information to drivers, the accident estimation device 1 helps drivers recognize the damage situation after an accident, such as increased vehicle repair costs and insurance costs, and encourages safe driving, thereby contributing to improving the effectiveness of traffic safety education.

 また、図8に示す事故DB12に記憶される情報は、過去に実際に発生した事故の事例に基づいて生成される。つまり、損害情報生成部32は、過去に発生した事例に基づいて事故情報を生成する。これにより、損害情報生成部32は、ヒヤリハットの状況から発生する事故について、より現実味のある事故情報を生成してドライバに提供することができる。なお、図8に示すテーブルは、事故シーンIDに関連付けて損害情報が記憶されたものであるが、事故シーンを特徴づけるデータ、例えば当事者および関連車両の物理的特性データに関連付けて損害情報が記憶されたものでも良い。 Furthermore, the information stored in the accident DB 12 shown in Figure 8 is generated based on cases of accidents that have actually occurred in the past. In other words, the damage information generation unit 32 generates accident information based on cases that have occurred in the past. This allows the damage information generation unit 32 to generate more realistic accident information for accidents that arise from near-miss situations and provide it to the driver. Note that while the table shown in Figure 8 stores damage information in association with accident scene IDs, it is also possible to store damage information in association with data that characterizes the accident scene, for example, physical characteristic data of the parties involved and related vehicles.

 また損害情報生成部32は、ヒヤリハットの画像から上記した損害情報を生成する損害情報生成AIモデル32A(図9参照)によって構成されてもよい。次に、図9~15を参照して、ヒヤリハットの状況から発生する事故のシミュレーション映像と、上記した損害情報とをAIによって生成する事故情報生成部14について説明する。 The damage information generation unit 32 may also be configured with a damage information generation AI model 32A (see Figure 9) that generates the above-mentioned damage information from near-miss images. Next, with reference to Figures 9 to 15, we will explain the accident information generation unit 14, which uses AI to generate simulation footage of an accident that occurs from a near-miss situation and the above-mentioned damage information.

 図9に示すように、事故情報生成部14は、映像情報生成部31と、損害情報生成部32とを備える。映像情報生成部31は、映像情報生成AIモデル34Aを備える。損害情報生成部32は、損害情報生成AIモデル32Aを備える。 As shown in FIG. 9, the accident information generation unit 14 includes a video information generation unit 31 and a damage information generation unit 32. The video information generation unit 31 includes a video information generation AI model 34A. The damage information generation unit 32 includes a damage information generation AI model 32A.

 映像情報生成AIモデル34Aは、ヒヤリハット情報が入力されると、ヒヤリハットの状況から発生する事故シーンを推定するように機械学習されたAIモデルである。映像情報生成AIモデル34Aの機械学習には、例えば、図10に示すような学習データセットが使用される。 The video information generation AI model 34A is an AI model that has been trained to estimate the accident scene that will occur from the circumstances of the near-miss when near-miss information is input. For example, a training dataset such as that shown in Figure 10 is used for the machine learning of the video information generation AI model 34A.

 図10に示すように、映像情報生成AIモデル34Aの学習データセットは、入力データとなる複数のヒヤリハット画像と、正解データとなる複数の事故画像とがそれぞれ1対1に対応付けられたデータセットである。 As shown in Figure 10, the learning dataset for the video information generation AI model 34A is a dataset in which a plurality of near-miss images serving as input data are each associated one-to-one with a plurality of accident images serving as ground truth data.

 図11に示すように、映像情報生成AIモデル34Aは、順次入力される入力データのヒヤリハット情報から、発生しそうな事故シーンのシミュレーション映像を順次推定し、推定結果として出力する。映像情報生成部31は、映像情報生成AIモデル34Aから出力されるシミュレーション映像の推定結果と、入力値に対応する学習データの正解値との誤差が小さくなるように、誤差逆伝搬法等の処理を用いてDNNにおける各階層の重み付け係数等を更新することによって、映像情報生成AIモデル34Aに機械学習を行わせる。 As shown in Figure 11, the video information generation AI model 34A sequentially estimates simulation videos of accident scenes likely to occur from the sequentially input near-miss information, and outputs them as estimation results. The video information generation unit 31 causes the video information generation AI model 34A to perform machine learning by using processing such as backpropagation to update the weighting coefficients of each layer in the DNN so that the error between the simulation-video estimation results output from the video information generation AI model 34A and the ground-truth values of the learning data corresponding to the input values becomes smaller.
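The update rule described above (reduce the error between the model's estimate and the ground-truth value of the learning data) can be illustrated in heavily simplified form, with a single trainable weight standing in for the DNN. This is a sketch of the training principle only, not of the actual model 34A:

```python
# Minimal sketch of error-minimizing training: one weight stands in
# for the DNN's layers, and the gradient step plays the role of
# backpropagation. Data and learning rate are illustrative.
def train(pairs, lr=0.1, epochs=100):
    """pairs: (input, ground_truth) tuples, analogous to the
    near-miss image / accident image pairs of the learning dataset."""
    w = 0.0  # trainable weight, initialized to zero
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x               # model estimate
            grad = 2 * (pred - y) * x  # gradient of squared error w.r.t. w
            w -= lr * grad             # update so the error shrinks
    return w

# The pairs follow y = 2x, so training drives w toward 2.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
```

The same loop structure applies unchanged to the damage information generation AI model 32A described later; only the ground-truth data differs.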

 これにより、映像情報生成部31は、入力されるヒヤリハット情報から、そのヒヤリハット情報に対応するヒヤリハットの状況において発生しそうな事故シーンのシミュレーション映像を推定できる映像情報生成AIモデル34Aを生成することができる。 As a result, the video information generation unit 31 can generate a video information generation AI model 34A that can estimate, from the input near-miss information, a simulation video of an accident scene that is likely to occur in the near-miss situation corresponding to the near-miss information.

 機械学習済の映像情報生成AIモデル34Aは、図12に示すように、実際に撮影されたヒヤリハット画像が入力されると、ヒヤリハットの状況から発生する事故のシミュレーション映像を推定して出力する。 As shown in Figure 12, when an actual near-miss image is input, the machine-learned video information generation AI model 34A estimates and outputs a simulation video of an accident that occurs from the near-miss situation.

 また、映像情報生成AIモデル34Aは、例えば、シミュレーション画像生成用データに視聴者の反応(前述の許容度)に応じた画像加工パラメータを加えて、当該パラメータに応じた画像処理を行うように構成されてもよい。この場合、例えば、学習データの入力データに視聴者の反応データを加え、正解データを視聴者の反応にも対応した画像データとすればよい。 Furthermore, the video information generation AI model 34A may be configured to add image processing parameters corresponding to the viewer's reaction (the aforementioned tolerance) to the data for generating a simulation image, and perform image processing according to those parameters. In this case, for example, viewer reaction data may be added to the input data of the learning data, and the correct answer data may be image data that also corresponds to the viewer's reaction.

 これにより、映像情報生成AIモデル34Aは、ヒヤリハット情報と、視聴者画像が入力される場合に、事故シーンのシミュレーション動画を視聴した視聴者の反応に応じて、同一の視聴者へ次に視聴させるシミュレーション動画の内容を加工できるようになる他、映像情報生成AIモデル34Aの後段に、視聴者の反応(前述の許容度)に応じた画像加工を行う画像加工部を別途設ける構成で実現することも可能である。 As a result, when near-miss information and viewer images are input, the video information generation AI model 34A can process the content of the simulation video to be viewed next by the same viewer based on the viewer's reaction to the simulation video of the accident scene. It is also possible to implement this configuration by providing a separate image processing unit downstream of the video information generation AI model 34A that processes images based on the viewer's reaction (the aforementioned tolerance).

 また、損害情報生成AIモデル32Aは、ヒヤリハット情報が入力されると、ヒヤリハットの状況から実際に事故が発生した場合に、事故によって生じる損害に関する損害情報を推定するように機械学習されたAIモデルである。損害情報生成AIモデル32Aの機械学習には、例えば、図13に示すような学習データセットが使用される。 Furthermore, the damage information generation AI model 32A is an AI model that has been machine-trained to estimate damage information relating to damage caused by an accident when near-miss information is input and an actual accident occurs based on the circumstances of the near-miss. For example, a learning dataset such as that shown in FIG. 13 is used for the machine learning of the damage information generation AI model 32A.

 図13に示すように、損害情報生成AIモデル32Aの学習データセットは、入力データとなる複数のヒヤリハット画像と、正解データとなる複数の損害情報とがそれぞれ1対1に対応付けられたデータセットである。 As shown in Figure 13, the learning dataset for the damage information generation AI model 32A is a dataset in which a plurality of near-miss images serving as input data are each associated one-to-one with a plurality of pieces of damage information serving as correct answer data.

 図14に示すように、損害情報生成AIモデル32Aは、順次入力される入力データのヒヤリハット情報から、実際に事故が発生した場合に、事故によって生じる損害に関する損害情報を順次推定し、推定結果として出力する。損害情報生成部32は、損害情報生成AIモデル32Aから出力される損害情報の推定結果と、入力値に対応する学習データの正解値との誤差が小さくなるように、誤差逆伝搬法等の処理を用いてDNNにおける各階層の重み付け係数等を更新することによって、損害情報生成AIモデル32Aに機械学習を行わせる。 As shown in Figure 14, the damage information generation AI model 32A sequentially estimates, from the sequentially input near-miss information, damage information relating to the damage that would result if an accident actually occurred, and outputs it as estimation results. The damage information generation unit 32 causes the damage information generation AI model 32A to perform machine learning by using processing such as backpropagation to update the weighting coefficients of each layer in the DNN so that the error between the damage-information estimation results output from the damage information generation AI model 32A and the correct values of the learning data corresponding to the input values becomes smaller.

 これにより、損害情報生成部32は、入力されるヒヤリハット情報から、実際に事故が発生した場合に、事故によって生じる損害に関する損害情報を推定できる損害情報生成AIモデル32Aを生成することができる。 As a result, the damage information generation unit 32 can generate a damage information generation AI model 32A that can estimate damage information related to damage caused by an accident when an actual accident occurs, from the input near-miss information.

 機械学習済の損害情報生成AIモデル32Aは、図15に示すように、実際に撮影されたヒヤリハット画像が入力されると、ヒヤリハットの状況から実際に事故が発生した場合に、事故によって生じる損害に関する損害情報を推定して出力する。 As shown in Figure 15, when an actual near-miss image is input, the machine-learned damage information generation AI model 32A estimates and outputs damage information regarding the damage that will result from an accident if an actual accident occurs based on the near-miss situation.

[3.事故推定装置のコントローラが実行する処理]
 次に、図16および図17を参照して、事故推定装置1のコントローラ10が実行する処理について説明する。図16および図17は、実施形態に係る事故推定装置1のコントローラ10が実行する処理の一例を示すフローチャートである。この処理は、事故推定装置1が起動(電源スイッチのオン)後に、適当な間隔で繰り返し実行される。
[3. Processing Executed by the Controller of the Accident Estimation Device]
Next, a process executed by the controller 10 of the accident estimation device 1 will be described with reference to Fig. 16 and Fig. 17. Fig. 16 and Fig. 17 are flowcharts showing an example of a process executed by the controller 10 of the accident estimation device 1 according to the embodiment. This process is repeatedly executed at appropriate intervals after the accident estimation device 1 is started up (the power switch is turned on).

 図16に示すように、コントローラ10は、車両2からヒヤリハット情報を受信したか否かを判定する(ステップS101)。コントローラ10は、ヒヤリハット情報を受信しないと判定した場合(ステップS101,No)、ヒヤリハット情報を受信するまで、ステップS101の判定処理を繰り返す。 As shown in FIG. 16, the controller 10 determines whether or not near-miss information has been received from the vehicle 2 (step S101). If the controller 10 determines that near-miss information has not been received (step S101, No), it repeats the determination process of step S101 until near-miss information is received.

 そして、コントローラ10は、ヒヤリハット情報を受信したと判定した場合(ステップS101,Yes)、事故推定モデル13により、ヒヤリハットの状況において発生しそうな事故シーンを推定する(ステップS102)。 If the controller 10 determines that near-miss information has been received (step S101, Yes), it uses the accident estimation model 13 to estimate an accident scene that is likely to occur in the near-miss situation (step S102).

 続いて、コントローラ10は、事故情報生成処理を実行する(ステップS103)。コントローラ10は、この事故情報生成処理により、推定された事故シーンに対する損害情報とシミュレーション動画を生成する。 Next, the controller 10 executes an accident information generation process (step S103). Through this accident information generation process, the controller 10 generates damage information and a simulation video for the estimated accident scene.

 その後、コントローラ10は、事故情報の出力タイミングか否かを判定する(ステップS104)。事故推定装置1が車両2に設置される場合、事故情報の出力タイミングは、例えば、車両2が停車したタイミング、または、車両2が1トリップの走行を終了したタイミング(例えば、目的地到着後の停車時)である。この場合、事故推定装置1は、車両2から停車したタイミング、および、車両2が1トリップの走行を終了したタイミングであることを、車両からの車速データ(停車判定閾値より長時間の車速0状態により判断)や、ナビゲーション装置からの目的地到着信号(目的地データと現在地の一致により判断)データ等に基づき判断すればよい。 Then, the controller 10 determines whether it is time to output accident information (step S104). If the accident estimation device 1 is installed in the vehicle 2, the timing to output the accident information is, for example, when the vehicle 2 stops, or when the vehicle 2 completes one trip of travel (for example, when it stops after arriving at the destination). In this case, the accident estimation device 1 can determine whether the vehicle 2 has stopped or completed one trip of travel based on vehicle speed data from the vehicle (determined by a vehicle speed of 0 for a longer period than the stop determination threshold), a destination arrival signal from the navigation device (determined by a match between the destination data and the current location), etc.
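The stop and trip-end checks described above can be sketched as follows. The threshold value, sampling interval, and position comparison are illustrative assumptions, not values taken from the embodiment:

```python
STOP_THRESHOLD_SEC = 3  # hypothetical stop-determination threshold

def is_stopped(speed_samples, sample_interval_sec=1):
    """True if the latest speed samples show 0 km/h for longer than
    the stop-determination threshold."""
    zero_run = 0
    for v in reversed(speed_samples):
        if v == 0:
            zero_run += sample_interval_sec
        else:
            break
    return zero_run > STOP_THRESHOLD_SEC

def is_output_timing(speed_samples, current_pos, destination):
    """Output accident information when the vehicle has stopped, or
    when the navigation destination matches the current position
    (end of one trip)."""
    return is_stopped(speed_samples) or current_pos == destination
```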

 また、事故推定装置1が車両2以外の場所に設置される場合、事故情報の出力タイミングは、事故推定装置1のユーザが事故情報の出力を要求する操作を行ったタイミングである。この場合、事故推定装置1は、ユーザの操作を受け付ける操作装置から、事故情報の出力を要求する操作が行われたことを示す情報を取得するように構成される。 Furthermore, if the accident estimation device 1 is installed in a location other than the vehicle 2, the timing for outputting the accident information is the timing when the user of the accident estimation device 1 performs an operation to request the output of the accident information. In this case, the accident estimation device 1 is configured to acquire, from an operation device that accepts user operations, information indicating that an operation to request the output of the accident information has been performed.

 コントローラ10は、事故情報の出力タイミングでないと判定した場合(ステップS104,No)、事故情報の出力タイミングになるまでステップS104の判定処理を繰り返す。そして、コントローラ10は、事故情報の出力タイミングであると判定した場合(ステップS104,Yes)、損害情報および事故のシミュレーション動画を含む事故情報を出力する(ステップS105)。 If the controller 10 determines that it is not time to output the accident information (step S104, No), it repeats the determination process of step S104 until it is time to output the accident information. Then, if the controller 10 determines that it is time to output the accident information (step S104, Yes), it outputs the accident information including damage information and a simulation video of the accident (step S105).

 その後、コントローラ10は、シミュレーション動画の視聴者の映像が入力されたか否かを判定する(ステップS106)。コントローラ10は、視聴者の映像が入力されないと判定した場合(ステップS106,No)、視聴者の映像が入力されるまで、ステップS106の判定処理を繰り返す。 Then, the controller 10 determines whether or not the viewer's video of the simulation video has been input (step S106). If the controller 10 determines that the viewer's video has not been input (step S106, No), it repeats the determination process of step S106 until the viewer's video is input.

 そして、コントローラ10は、視聴者の映像が入力されたと判定した場合(ステップS106,Yes)、入力された映像からシミュレーション動画の過激度に対する視聴者の許容度を判定する(ステップS107)。 If the controller 10 determines that the viewer's video has been input (step S106, Yes), it determines the viewer's tolerance for the extremeness of the simulation video from the input video (step S107).

 なお、心拍情報を用いて視聴者の許容度を判断する場合は、ステップS106では視聴者の心拍の入力に対する判断となり、またステップS107は視聴者の心拍に基づく許容度の判定となる。 Note that when heart rate information is used to determine the viewer's tolerance, step S106 becomes a determination of whether the viewer's heart rate data has been input, and step S107 becomes a determination of the viewer's tolerance based on the heart rate.

 そして、コントローラ10は、判定した許容度に応じて、次回出力する事故のシミュレーション動画の過激度を変更し(ステップS108)、処理を終了する。なお、コントローラ10は、ステップS107において判定した許容度が、前回のシミュレーション動画の過激度に対する許容度と同じであれば、過激度の変更は行わない。 Then, the controller 10 changes the extremeness level of the accident simulation video to be output next in accordance with the determined tolerance (step S108), and ends the process. Note that if the tolerance determined in step S107 is the same as the tolerance for the extremeness of the previous simulation video, the controller 10 does not change the extremeness level.
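Steps S107 and S108 can be sketched as a simple mapping from the judged tolerance to the extremeness level of the next simulation video. The three levels and the up/down rule are assumptions for illustration only:

```python
LEVELS = ["mild", "moderate", "graphic"]  # hypothetical extremeness levels

def next_extremeness(current, tolerance):
    """Raise the level when the viewer tolerates more, lower it when
    the reaction was too strong, and leave it unchanged when the
    tolerance is the same as last time (step S108)."""
    i = LEVELS.index(current)
    if tolerance == "high" and i < len(LEVELS) - 1:
        return LEVELS[i + 1]
    if tolerance == "low" and i > 0:
        return LEVELS[i - 1]
    return current
```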

 次に、ステップS103の事故情報生成処理の詳細について図17を用いて説明する。事故情報生成処理を開始すると、コントローラ10は、ヒヤリハット情報に基づき推定された事故シーンに基づいて、損害情報を生成する(ステップS201)。 Next, the details of the accident information generation process in step S103 will be explained using Figure 17. When the accident information generation process starts, the controller 10 generates damage information based on the accident scene estimated based on the near-miss information (step S201).

 コントローラ10は、推定された事故シーンにおける当方と先方との過失割合を示す情報、人身傷害状況、および物品損害状況の少なくとも一つを示す情報、事故の発生に起因する将来の保険料の変化(推移)を示す情報を生成する。また、後述する事故シミュレーション画像に基づき生成できる(人や、車両の破損部分の画像を切り取る等して作成)人身傷害状況の画像や物品損害状況の画像と言った画像を損害情報として作成してもよい。 The controller 10 generates information indicating the fault ratio between the driver's side and the other party in the estimated accident scene, information indicating at least one of the personal injury status and the property damage status, and information indicating future changes (trends) in insurance premiums resulting from the occurrence of the accident. In addition, images such as an image of the personal injury status or an image of the property damage status, which can be generated from the accident simulation image described later (for example, by cropping out images of people or damaged parts of the vehicle), may also be created as damage information.

 さらに、コントローラ10は、前回視聴させたシミュレーション動画の過激度に対する視聴者の許容度を読み込み、映像生成モデル34により当該許容度に応じた過激度の事故のシミュレーション画像(動画、あるいは静止画)を生成し(ステップS202)、処理を終える。つまり、ステップS103の事故情報生成処理を終える。その後、図16に示すステップS104が実行されることになる。 Furthermore, the controller 10 reads the viewer's tolerance for the level of extremeness of the simulation video previously viewed, and generates a simulation image (video or still image) of an accident of an extreme level according to the tolerance level using the video generation model 34 (step S202), and then ends the process. In other words, the accident information generation process of step S103 ends. Step S104 shown in Figure 16 is then executed.

[4.事故推定装置の適用例]
 実施形態に係る事故推定装置1は、様々な装置に適用が可能である。以下、事故推定装置の適用例について説明する。
[4. Application examples of accident estimation device]
The accident estimation device 1 according to the embodiment can be applied to various devices. Application examples of the accident estimation device will be described below.

[4-1.保険料検討装置への適用例]
 保険料検討装置は、保険料算出基準の作成および各個人の保険料算出に利用される。現状では、保険料は、発生した事故に関するデータに基づき算出されている。このため保険料は、顕在化したデータだけに基づき算出された値で、潜在的な事故の情報がかけたものとなっている。一方、ヒヤリハットは潜在的な事故であるため、事故推定装置1は、保険料検討装置へ適用される場合、ヒヤリハットの発生状況とそれに繋がる事故の大きさ(被害)から、潜在的事故の損害を推定して、保険料に反映できる。
[4-1. Example of application to insurance premium review device]
The insurance premium review device is used to create insurance premium calculation standards and to calculate each individual's premium. At present, insurance premiums are calculated from data on accidents that have actually occurred; that is, they are values based only on materialized data and lack information on potential accidents. A near miss, on the other hand, is a potential accident, so when the accident estimation device 1 is applied to an insurance premium review device, it can estimate the damage of potential accidents from the circumstances in which near misses occur and the magnitude (damage) of the accidents they could lead to, and reflect this in insurance premiums.

 つまり、例えば、実際の事故による損害額と、ヒヤリハットから繋がる事故による損害額(適当な係数(ビッグデータが必要(適当な期間でデータ収集))をかける)との総額に基づき保険料算出基準(例えば、パラメータに適当な期間のヒヤリハット発生状況データが加わる)を作成し、また各個人のデータ(適当な期間のヒヤリハット発生状況データが保険料算出に用いられる)に基づき当該個人の保険料を算出することになる。 In other words, for example, an insurance premium calculation standard (for example, near-miss occurrence status data for an appropriate period is added to the parameters) would be created based on the total amount of damages caused by actual accidents and the amount of damages caused by accidents resulting from near-misses (multiplied by an appropriate coefficient (big data is required (data collected over an appropriate period))), and the insurance premium for each individual would be calculated based on each individual's data (near-miss occurrence status data for an appropriate period is used to calculate the insurance premium).
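The idea above can be sketched as a total-loss figure in which near-miss-derived potential losses are weighted by a coefficient. The function name, coefficient, and amounts are hypothetical; as the text notes, calibrating such a coefficient would require big data collected over an appropriate period:

```python
def premium_base(actual_losses, near_miss_losses, coefficient=0.1):
    """Loss total used as a premium calculation basis (illustrative).
    actual_losses: damages from accidents that actually occurred.
    near_miss_losses: estimated damages of the accidents each observed
    near miss could have led to, weighted by an empirically derived
    coefficient."""
    return actual_losses + coefficient * sum(near_miss_losses)

# Example: 500,000 in actual losses plus 10% of 500,000 in potential losses.
base = premium_base(500_000, [300_000, 200_000], coefficient=0.1)
```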

[4-2.車載機への適用例]
 事故推定装置1は、例えば、カーナビゲーション装置またはドライブレコーダなどの車載機に搭載されてもよい。この場合、車載機は、ヒヤリハットが発生した場合に、ヒヤリハットの状況において発生しそうな事故をヒヤリハットの状況に基づいて推定し、事故に関する事故情報を車両の乗員に通知する。
[4-2. Application example to in-vehicle equipment]
The accident estimation device 1 may be mounted on an on-board device such as a car navigation device or a drive recorder. In this case, when a near-miss occurs, the on-board device estimates an accident that is likely to occur in the near-miss situation based on the near-miss situation, and notifies the vehicle occupants of accident information related to the accident.

 例えば、車載機は、車両が停車したとき、または、1トリップの走行が終了したときに、事故情報を乗員に通知する。これにより、車載機は、ヒヤリハットが発生した場合に、ヒヤリハットの発生からあまり時間を空けずに、車両が停車したとき、または、1トリップの走行が終了したときに、当該ヒヤリハットから発生しそうな事故に対する事故情報を乗員に通知できる。したがって、車載機は、乗員にとって体験して間もないヒヤリハットから発生しそうな事故に関する事故情報を、当事者の乗員に提供することにより、安全運転を促すことで交通安全教育効果の向上に寄与できる。 For example, the in-vehicle device notifies the occupants of accident information when the vehicle comes to a stop or when one trip has ended. This allows the in-vehicle device to notify the occupants of accident information about an accident that is likely to occur as a result of a near miss when the vehicle comes to a stop or when one trip has ended, in the event of a near miss occurring, without much time having passed since the near miss. Therefore, by providing the occupants involved with accident information about an accident that is likely to occur as a result of a near miss that they recently experienced, the in-vehicle device can encourage safe driving and contribute to improving the effectiveness of traffic safety education.

[4-3.端末装置への適用例]
 事故推定装置1は、例えば、自動車学校または運転免許試験場などの交通安全教育を行う機関に設置される端末装置に搭載されてもよい。この場合、端末装置は、ヒヤリハットの状況において発生しそうな事故をヒヤリハットの状況に基づいて推定し、事故に応じた交通安全教育の教材情報を出力する。
[4-3. Example of application to terminal device]
The accident estimation device 1 may be mounted on a terminal device installed in an institution that provides traffic safety education, such as a driving school or a driver's license testing center. In this case, the terminal device estimates an accident that is likely to occur in a near-miss situation based on the near-miss situation, and outputs traffic safety education teaching material information corresponding to the accident.

 これにより、実際の事故だけでなく、ヒヤリハットから連なる事故と言ったいろいろな状況において発生しうる幅広い事故関連情報を、刺激の強い発生事故の形態で情報が提供される。従って、受講者への影響が強い情報が提供されることとなり、交通安全教育効果の向上に寄与できる。 This provides a wide range of accident-related information covering accidents that can arise in various situations, not only actual accidents but also accidents that follow from near misses, in the vivid form of an accident actually occurring. Information with a strong impact on trainees is therefore provided, which contributes to improving the effectiveness of traffic safety education.

 例えば、端末装置は、安全運転教育の受講者に対して、その受講者が実際に体験したヒヤリハットから発生しそうな事故を推定し、ヒヤリハットの発生から事故の発生までのシミュレーション動画を交通安全教育の教材情報として提供する。 For example, the terminal device can estimate the type of accident that is likely to occur based on the near miss that the participant has actually experienced, and provide a simulation video showing the process from the near miss to the actual accident as educational material for traffic safety education.

 これにより、端末装置は、受講者自身が体験したヒヤリハットの状況から発生しうる事故の恐ろしさをドライバに理解させることによって、交通安全教育効果の向上に寄与できる。 In this way, the terminal device can help drivers understand the horror of accidents that can occur from near-miss situations experienced by the trainees themselves, thereby contributing to improving the effectiveness of traffic safety education.

[5.情報処理システム]
 次に、実施形態に係る情報処理システムについて説明する。図18は、実施形態に係る情報処理システム100の説明図である。図18に示すように、情報処理システム100は、事故推定装置1と、保険料検討装置101と、車載機102と、端末装置103と、車両2とを含む。
[5. Information Processing System]
Next, an information processing system according to an embodiment will be described. Fig. 18 is an explanatory diagram of an information processing system 100 according to an embodiment. As shown in Fig. 18, the information processing system 100 includes an accident estimation device 1, an insurance premium review device 101, an on-board device 102, a terminal device 103, and a vehicle 2.

 事故推定装置1、保険料検討装置101、車載機102、端末装置103、および車両2は、通信ネットワークNによって情報通信可能に接続される。保険料検討装置101、車載機102、および端末装置103は、それぞれ事故推定装置1が搭載されている。 The accident estimation device 1, insurance premium review device 101, on-board device 102, terminal device 103, and vehicle 2 are connected via a communications network N so that information can be communicated. The insurance premium review device 101, on-board device 102, and terminal device 103 are each equipped with the accident estimation device 1.

 これにより、情報処理システム100は、事故推定装置1が設置される場所、保険料検討装置101が設置される場所、車載機102が設置された車内、および端末装置103が設置される場所のいずれにおいても、前述した事故情報を提供できる。したがって、情報処理システム100は、様々な場所においてユーザに事故情報の提供することにより、交通安全教育効果の向上に寄与できる。 As a result, the information processing system 100 can provide the above-mentioned accident information at any of the locations where the accident estimation device 1 is installed, the location where the insurance premium review device 101 is installed, inside the vehicle where the onboard device 102 is installed, and the location where the terminal device 103 is installed. Therefore, by providing accident information to users in various locations, the information processing system 100 can contribute to improving the effectiveness of traffic safety education.

[6.付記]
 付記として、本発明の特徴を以下の通り示す。
(1)
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 取得した前記ヒヤリハット情報で、前記ヒヤリハット情報と当該ヒヤリハットから発生が懸念される事故に関する事故種別とを関連付けた事故種別データベースを参照することにより発生した前記ヒヤリハットに対する事故種別情報を出力する、
 コントローラを備える事故推定装置。
(2)
 前記コントローラは、
 出力された事故種別情報で、前記事故種別情報と事故により発生する事故事象を示す仮想事故情報が関連付けられた事故データベースを参照することにより、発生した前記ヒヤリハットに対する前記仮想事故情報を出力する、
 前記(1)に記載の事故推定装置。
(3)
 前記コントローラは、
 前記事故種別データベースに替え、前記ヒヤリハット情報を入力情報とし、前記事故種別情報を正解データとする学習データで学習された事故種別モデルを用いて、発生した前記ヒヤリハットに対する前記事故種別情報を出力する、
 前記(1)に記載の事故推定装置。
(4)
 前記コントローラは、
 前記事故種別データベースに替え、前記ヒヤリハット情報を入力情報とし、前記事故種別情報を正解データとする学習データで学習された事故種別モデルを用いて、発生した前記ヒヤリハットに対する前記事故種別情報を出力する、
 前記(2)に記載の事故推定装置。
(5)
 前記仮想事故情報は、
 仮想事故のシミュレーション画像を含む、
 前記(2)または(4)に記載の事故推定装置。
(6)
 前記仮想事故情報は、
 仮想事故によって生じる損害に関する仮想損害情報を含む、
 前記(2)または(4)に記載の事故推定装置。
(7)
 前記仮想損害情報は、
 前記事故における当方と先方との過失割合を示す情報を含む、
 前記(6)に記載の事故推定装置。
(8)
 前記仮想損害情報は、
 仮想事故における人身傷害状況、人身傷害状況の画像、物品損害状況、および物品損害状況の画像のうちの少なくとも一つを示す情報を含む、
 前記(6)または(7)に記載の事故推定装置。
(9)
 前記仮想損害情報は、
 仮想事故の発生に起因する保険料の変化を示す情報を含む、
 前記(6)、(7)または(8)に記載の事故推定装置。
(10)
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 ヒヤリハットの発生状況に関するヒヤリハット情報を入力情報とし、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を正解情報とする学習データで学習された仮想事故モデルに、取得した前記ヒヤリハット情報を入力することによって前記仮想事故情報を出力する、
 事故推定装置。
(11)
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 前記仮想事故情報に基づき保険料の算定に関する保険料算定情報を算出し、
 算出した前記保険料算定情報を出力する、
 保険料検討装置。
(12)
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 前記仮想事故情報を車両の乗員に通知する、
 車載機。
(13)
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 仮想事故情報を含む交通安全教育の教材情報を出力する、
 端末装置。
(14)
 前記仮想事故情報は、
 ヒヤリハットから発生が懸念される事故の状況を示すシミュレーション動画である、
 前記(13)に記載の端末装置。
(15)
 事故推定装置と、車両に搭載された車載装置とを有する情報処理システムであって、
 事故推定装置は、
 前記車載から、発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 算出した前記仮想事故情報を前記車載装置に送信し、
 前記車載装置は、
 車載センサからヒヤリハット発生時に当該ヒヤリハットに関するヒヤリハット情報を収集し、
 収集した前記ヒヤリハット情報を前記事故推定装置に送信し、
 前記事故推定装置から送信された前記仮想事故情報を受信し、
 受信した前記仮想事故情報を車両の乗員に通知する、
 情報処理システム。
(16)
 事故推定装置と、車両に搭載された車載装置と、保険料検討装置とを有する情報処理システムであって、
 前記車載装置は、
 車載センサからヒヤリハット発生時に当該ヒヤリハットに関するヒヤリハット情報を収集し、
 収集した前記ヒヤリハット情報を前記事故推定装置に送信し、
 事故推定装置は、
 前記車載から、発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 算出した前記仮想事故情報を前記保険料検討装置に送信し、
 前記保険料検討装置は、
 前記事故推定装置から送信された前記仮想事故情報を受信し、
 受信した前記仮想事故情報に基づき保険料の算定に関する保険料算定情報を算出し、
 算出した前記保険料算定情報を出力する、
 情報処理システム。
(17)
 コンピュータが、
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 取得した前記ヒヤリハット情報で、前記ヒヤリハット情報と当該ヒヤリハットから発生が懸念される事故に関する事故種別とを関連付けた事故種別データベースを参照することにより発生した前記ヒヤリハットに対する事故種別情報を出力する、
 事故推定方法。
(18)
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 取得した前記ヒヤリハット情報で、前記ヒヤリハット情報と当該ヒヤリハットから発生が懸念される事故に関する事故種別とを関連付けた事故種別データベースを参照することにより発生した前記ヒヤリハットに対する事故種別情報を出力する手順をコンピュータに
実行させる、
 事故推定プログラム。
(19)
 コンピュータが、
 ヒヤリハットの発生状況に関するヒヤリハット情報を入力情報とし、当該ヒヤリハットから発生が懸念される事故の事故種別を正解情報とする学習情報を用いて、AI(Artificial Intelligence)モデルに学習させる、
 AIモデル生成方法。
(20)
 ヒヤリハットの発生状況に関するヒヤリハット情報を入力情報とし、当該ヒヤリハットから発生が懸念される事故の事故種別を正解情報とする学習情報を用いて、AI(Artificial Intelligence)モデルに学習させる手順をコンピュータに実行させる、
 AI生成プログラム。
[6. Notes]
As an appendix, the features of the present invention are as follows.
(1)
Obtain near miss information about the near miss that occurred,
outputting accident type information for the near miss that has occurred by referring to an accident type database that associates the near miss information with an accident type relating to an accident that is feared to occur from the near miss;
An accident estimation device including a controller.
(2)
The controller
By referring to an accident database in which the output accident type information is associated with virtual accident information indicating an accident event that may occur due to the accident, the virtual accident information for the near miss that has occurred is output.
The accident estimation device according to (1) above.
(3)
The controller
Instead of the accident type database, the near-miss information is used as input information, and an accident type model trained with learning data in which the accident type information is correct answer data is used to output the accident type information for the near-miss that has occurred.
The accident estimation device according to (1) above.
(4)
The controller
Instead of the accident type database, the near-miss information is used as input information, and an accident type model trained with learning data in which the accident type information is correct answer data is used to output the accident type information for the near-miss that has occurred.
The accident estimation device according to (2) above.
(5)
The virtual accident information
Includes simulated images of virtual accidents,
The accident estimation device according to (2) or (4).
(6)
The virtual accident information
Contains hypothetical damage information regarding damages resulting from hypothetical accidents;
The accident estimation device according to (2) or (4).
(7)
The virtual damage information
Including information showing the percentage of fault between us and the other party in the accident,
The accident estimation device according to (6) above.
(8)
The virtual damage information
information indicating at least one of a personal injury situation, an image of the personal injury situation, a property damage situation, and an image of the property damage situation in the virtual accident;
The accident estimation device according to (6) or (7) above.
(9)
The virtual damage information
Contains information showing the change in premiums resulting from the occurrence of hypothetical accidents;
The accident estimation device according to (6), (7) or (8).
(10)
Obtain near miss information about the near miss that occurred,
The acquired near-miss information is input into a virtual accident model trained with learning data in which near-miss information on the occurrence status of the near-miss is used as input information and virtual accident information on accidents that are feared to occur as a result of the near-miss is used as correct answer information, thereby outputting the virtual accident information.
Accident estimation device.
(11)
Obtain near miss information about the near miss that occurred,
Based on the near miss information, hypothetical accident information regarding an accident that is feared to occur from the near miss is calculated;
Calculating insurance premium calculation information related to the calculation of insurance premiums based on the hypothetical accident information;
outputting the calculated insurance premium calculation information;
Insurance premium review device.
(12)
Obtain near miss information about the near miss that occurred,
Based on the near miss information, hypothetical accident information regarding an accident that is feared to occur from the near miss is calculated;
notifying a vehicle occupant of the virtual accident information;
Onboard machine.
(13)
Obtain near miss information about the near miss that occurred,
Based on the near miss information, hypothetical accident information regarding an accident that is feared to occur from the near miss is calculated;
Outputting information on teaching materials for traffic safety education including virtual accident information.
Terminal device.
(14)
The virtual accident information
This is a simulation video showing the situation of an accident that may occur due to a near miss.
The terminal device according to (13) above.
(15)
An information processing system having an accident estimation device and an on-board device mounted on a vehicle,
The accident estimation device
Acquire, from the in-vehicle device, near-miss information regarding the near miss that has occurred,
Based on the near miss information, hypothetical accident information regarding an accident that is feared to occur from the near miss is calculated;
transmitting the calculated hypothetical accident information to the in-vehicle device;
The in-vehicle device
When a near miss occurs, near miss information is collected from the vehicle's sensors,
The collected near-miss information is transmitted to the accident estimation device,
receiving the hypothetical accident information transmitted from the accident estimation device;
notifying a vehicle occupant of the received virtual accident information;
Information processing system.
(16)
An information processing system having an accident estimation device, an in-vehicle device mounted on a vehicle, and an insurance premium review device,
The in-vehicle device
When a near miss occurs, near miss information is collected from the vehicle's sensors,
The collected near-miss information is transmitted to the accident estimation device,
The accident estimation device
Acquire, from the in-vehicle device, near-miss information regarding the near miss that has occurred,
Based on the near miss information, hypothetical accident information regarding an accident that is feared to occur from the near miss is calculated;
The calculated hypothetical accident information is transmitted to the insurance premium examination device,
The insurance premium review device
receiving the hypothetical accident information transmitted from the accident estimation device;
Calculating insurance premium calculation information related to the calculation of insurance premiums based on the received hypothetical accident information;
outputting the calculated insurance premium calculation information;
Information processing system.
(17)
The computer
Obtain near miss information about the near miss that occurred,
outputting accident type information for the near miss that has occurred by referring to an accident type database that associates the near miss information with an accident type relating to an accident that is feared to occur from the near miss;
Accident estimation method.
(18)
Obtain near miss information about the near miss that occurred,
a computer is caused to execute a procedure of outputting accident type information for the near miss that has occurred by referring to an accident type database that associates the near miss information with an accident type relating to an accident that is feared to occur from the near miss, using the acquired near miss information;
Accident estimation program.
(19)
A computer:
training an AI (Artificial Intelligence) model using learning information in which near-miss information relating to the occurrence status of a near miss is used as input information and the accident type of an accident that could result from the near miss is used as correct-answer information.
An AI model generation method.
(20)
Causing a computer to execute a procedure of training an AI (Artificial Intelligence) model using learning information in which near-miss information relating to the occurrence status of a near miss is used as input information and the accident type of an accident that could result from the near miss is used as correct-answer information.
An AI generation program.
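The training step of items (19) and (20) can be sketched with a deliberately simple memory-based classifier standing in for the AI model. The numeric near-miss features and accident type labels below are invented examples, not disclosed data.

```python
# Illustrative sketch of items (19)/(20). A minimal nearest-neighbour
# classifier stands in for the AI model: "training" memorizes the learning
# information, and prediction returns the label of the closest stored example.
# Features and labels are invented for this example.
import math

# Input information: [deceleration (m/s^2), steering change (deg), speed (km/h)]
LEARNING_INFO = [
    ([8.5, 2.0, 40.0], "rear-end collision"),      # hard braking, moderate speed
    ([1.0, 35.0, 90.0], "lane-change collision"),  # abrupt steering on highway
    ([7.0, 3.0, 20.0], "pedestrian collision"),    # hard braking at low speed
]


def train(learning_info):
    """For a memory-based model, training is just storing the examples."""
    return list(learning_info)


def predict(model, near_miss_features):
    """Return the correct-answer label of the closest stored near miss."""
    _, label = min(model, key=lambda ex: math.dist(ex[0], near_miss_features))
    return label


model = train(LEARNING_INFO)
print(predict(model, [9.0, 1.5, 35.0]))  # → rear-end collision
```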

 さらなる効果や変形例は、当業者によって容易に導き出すことができる。このため、本発明のより広範な態様は、以上のように表しかつ記述した特定の詳細および代表的な実施形態に限定されるものではない。したがって、添付の特許請求の範囲およびその均等物によって定義される総括的な発明の概念の精神または範囲から逸脱することなく、様々な変更が可能である。 Further advantages and modifications may readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described above. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and equivalents thereof.

 1 事故推定装置
 2 車両
 10 コントローラ
 11 通信部
 12 事故DB
 13 事故推定モデル
 13A 事故推定AIモデル
 14 事故情報生成部
 15 情報出力装置
 20 ドライブレコーダ
 21 加速度センサ
 22 舵角センサ
 23 感情センサ
 24 車載カメラ
 25 車載マイク
 26 フラッシュメモリ
 27 コントローラ
 28 通信部
 29 ヒヤリハット検知部
 31 映像情報生成部
 32 損害情報生成部
 32A 損害情報生成AIモデル
 33 映像DB
 34 映像生成モデル
 34A 映像情報生成AIモデル
 100 情報処理システム
 101 保険料検討装置
 102 車載機
 103 端末装置
 N 通信ネットワーク
REFERENCE SIGNS LIST
1 Accident estimation device
2 Vehicle
10 Controller
11 Communication unit
12 Accident DB
13 Accident estimation model
13A Accident estimation AI model
14 Accident information generation unit
15 Information output device
20 Drive recorder
21 Acceleration sensor
22 Steering angle sensor
23 Emotion sensor
24 In-vehicle camera
25 In-vehicle microphone
26 Flash memory
27 Controller
28 Communication unit
29 Near-miss detection unit
31 Video information generation unit
32 Damage information generation unit
32A Damage information generation AI model
33 Video DB
34 Video generation model
34A Video information generation AI model
100 Information processing system
101 Insurance premium review device
102 In-vehicle device
103 Terminal device
N Communication network

Claims (20)

 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 取得した前記ヒヤリハット情報で、前記ヒヤリハット情報と当該ヒヤリハットから発生が懸念される事故に関する事故種別とを関連付けた事故種別データベースを参照することにより発生した前記ヒヤリハットに対する事故種別情報を出力する、
 コントローラを備える事故推定装置。
An accident estimation device comprising a controller configured to:
acquire near-miss information relating to a near miss that has occurred; and
output accident type information for the near miss that has occurred by referring, with the acquired near-miss information, to an accident type database that associates near-miss information with accident types relating to accidents that could result from the near miss.
 前記コントローラは、
 出力された事故種別情報で、前記事故種別情報と事故により発生する事故事象を示す仮想事故情報が関連付けられた事故データベースを参照することにより、発生した前記ヒヤリハットに対する前記仮想事故情報を出力する、
 請求項1に記載の事故推定装置。
The accident estimation device according to claim 1, wherein
the controller outputs virtual accident information for the near miss that has occurred by referring, with the output accident type information, to an accident database that associates the accident type information with virtual accident information indicating accident events caused by an accident.
 前記コントローラは、
 前記事故種別データベースに替え、前記ヒヤリハット情報を入力情報とし、前記事故種別情報を正解データとする学習データで学習された事故種別モデルを用いて、発生した前記ヒヤリハットに対する前記事故種別情報を出力する、
 請求項1に記載の事故推定装置。
The accident estimation device according to claim 1, wherein
the controller outputs the accident type information for the near miss that has occurred by using, in place of the accident type database, an accident type model trained with learning data in which the near-miss information is used as input information and the accident type information is used as correct-answer data.
 前記コントローラは、
 前記事故種別データベースに替え、前記ヒヤリハット情報を入力情報とし、前記事故種別情報を正解データとする学習データで学習された事故種別モデルを用いて、発生した前記ヒヤリハットに対する前記事故種別情報を出力する、
 請求項2に記載の事故推定装置。
The accident estimation device according to claim 2, wherein
the controller outputs the accident type information for the near miss that has occurred by using, in place of the accident type database, an accident type model trained with learning data in which the near-miss information is used as input information and the accident type information is used as correct-answer data.
 前記仮想事故情報は、
 仮想事故のシミュレーション画像を含む、
 請求項2または4に記載の事故推定装置。
The accident estimation device according to claim 2 or 4, wherein
the virtual accident information includes a simulation image of the virtual accident.
 前記仮想事故情報は、
 仮想事故によって生じる損害に関する仮想損害情報を含む、
 請求項2または4に記載の事故推定装置。
The accident estimation device according to claim 2 or 4, wherein
the virtual accident information includes virtual damage information relating to damage caused by the virtual accident.
 前記仮想損害情報は、
 前記事故における当方と先方との過失割合を示す情報を含む、
 請求項6に記載の事故推定装置。
The accident estimation device according to claim 6, wherein
the virtual damage information includes information indicating the ratio of fault between the own party and the other party in the accident.
 前記仮想損害情報は、
 仮想事故における人身傷害状況、人身傷害状況の画像、物品損害状況、および物品損害状況の画像のうちの少なくとも一つを示す情報を含む、
 請求項6に記載の事故推定装置。
The accident estimation device according to claim 6, wherein
the virtual damage information includes information indicating at least one of a personal injury situation, an image of the personal injury situation, a property damage situation, and an image of the property damage situation in the virtual accident.
 前記仮想損害情報は、
 仮想事故の発生に起因する保険料の変化を示す情報を含む、
 請求項6に記載の事故推定装置。
The accident estimation device according to claim 6, wherein
the virtual damage information includes information indicating a change in insurance premiums resulting from the occurrence of the virtual accident.
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 ヒヤリハットの発生状況に関するヒヤリハット情報を入力情報とし、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を正解情報とする学習データで学習された仮想事故モデルに、取得した前記ヒヤリハット情報を入力することによって前記仮想事故情報を出力する、
 事故推定装置。
An accident estimation device configured to:
acquire near-miss information relating to a near miss that has occurred; and
output virtual accident information by inputting the acquired near-miss information into a virtual accident model trained with learning data in which near-miss information relating to the occurrence status of a near miss is used as input information and virtual accident information relating to an accident that could result from the near miss is used as correct-answer information.
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 前記仮想事故情報に基づき保険料の算定に関する保険料算定情報を算出し、
 算出した前記保険料算定情報を出力する、
 保険料検討装置。
An insurance premium review device configured to:
acquire near-miss information relating to a near miss that has occurred;
calculate, based on the near-miss information, virtual accident information relating to an accident that could result from the near miss;
calculate, based on the virtual accident information, premium calculation information relating to the calculation of insurance premiums; and
output the calculated premium calculation information.
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 前記仮想事故情報を車両の乗員に通知する、
 車載機。
An in-vehicle device configured to:
acquire near-miss information relating to a near miss that has occurred;
calculate, based on the near-miss information, virtual accident information relating to an accident that could result from the near miss; and
notify a vehicle occupant of the virtual accident information.
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 仮想事故情報を含む交通安全教育の教材情報を出力する、
 端末装置。
A terminal device configured to:
acquire near-miss information relating to a near miss that has occurred;
calculate, based on the near-miss information, virtual accident information relating to an accident that could result from the near miss; and
output traffic safety education material information including the virtual accident information.
 前記仮想事故情報は、
 ヒヤリハットから発生が懸念される事故の状況を示すシミュレーション動画である、
 請求項13に記載の端末装置。
The terminal device according to claim 13, wherein
the virtual accident information is a simulation video showing the situation of an accident that could result from the near miss.
 事故推定装置と、車両に搭載された車載装置とを有する情報処理システムであって、
 事故推定装置は、
 前記車載装置から、発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 算出した前記仮想事故情報を前記車載装置に送信し、
 前記車載装置は、
 車載センサからヒヤリハット発生時に当該ヒヤリハットに関するヒヤリハット情報を収集し、
 収集した前記ヒヤリハット情報を前記事故推定装置に送信し、
 前記事故推定装置から送信された前記仮想事故情報を受信し、
 受信した前記仮想事故情報を車両の乗員に通知する、
 情報処理システム。
An information processing system comprising an accident estimation device and an in-vehicle device mounted on a vehicle, wherein
the accident estimation device:
acquires, from the in-vehicle device, near-miss information relating to a near miss that has occurred;
calculates, based on the near-miss information, virtual accident information relating to an accident that could result from the near miss; and
transmits the calculated virtual accident information to the in-vehicle device, and
the in-vehicle device:
collects, from in-vehicle sensors, the near-miss information relating to the near miss when the near miss occurs;
transmits the collected near-miss information to the accident estimation device;
receives the virtual accident information transmitted from the accident estimation device; and
notifies a vehicle occupant of the received virtual accident information.
 事故推定装置と、車両に搭載された車載装置と、保険料検討装置とを有する情報処理システムであって、
 前記車載装置は、
 車載センサからヒヤリハット発生時に当該ヒヤリハットに関するヒヤリハット情報を収集し、
 収集した前記ヒヤリハット情報を前記事故推定装置に送信し、
 事故推定装置は、
 前記車載装置から、発生したヒヤリハットに関するヒヤリハット情報を取得し、
 前記ヒヤリハット情報に基づき、当該ヒヤリハットから発生が懸念される事故に関する仮想事故情報を算出し、
 算出した前記仮想事故情報を前記保険料検討装置に送信し、
 前記保険料検討装置は、
 前記事故推定装置から送信された前記仮想事故情報を受信し、
 受信した前記仮想事故情報に基づき保険料の算定に関する保険料算定情報を算出し、
 算出した前記保険料算定情報を出力する、
 情報処理システム。
An information processing system comprising an accident estimation device, an in-vehicle device mounted on a vehicle, and an insurance premium review device, wherein
the in-vehicle device:
collects, from in-vehicle sensors, near-miss information relating to a near miss when the near miss occurs; and
transmits the collected near-miss information to the accident estimation device,
the accident estimation device:
acquires, from the in-vehicle device, the near-miss information relating to the near miss that has occurred;
calculates, based on the near-miss information, virtual accident information relating to an accident that could result from the near miss; and
transmits the calculated virtual accident information to the insurance premium review device, and
the insurance premium review device:
receives the virtual accident information transmitted from the accident estimation device;
calculates, based on the received virtual accident information, premium calculation information relating to the calculation of insurance premiums; and
outputs the calculated premium calculation information.
 コンピュータが、
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 取得した前記ヒヤリハット情報で、前記ヒヤリハット情報と当該ヒヤリハットから発生が懸念される事故に関する事故種別とを関連付けた事故種別データベースを参照することにより発生した前記ヒヤリハットに対する事故種別情報を出力する、
 事故推定方法。
An accident estimation method in which a computer:
acquires near-miss information relating to a near miss that has occurred; and
outputs accident type information for the near miss that has occurred by referring, with the acquired near-miss information, to an accident type database that associates near-miss information with accident types relating to accidents that could result from the near miss.
 発生したヒヤリハットに関するヒヤリハット情報を取得し、
 取得した前記ヒヤリハット情報で、前記ヒヤリハット情報と当該ヒヤリハットから発生が懸念される事故に関する事故種別とを関連付けた事故種別データベースを参照することにより発生した前記ヒヤリハットに対する事故種別情報を出力する手順をコンピュータに
実行させる、
 事故推定プログラム。
An accident estimation program that causes a computer to execute procedures of:
acquiring near-miss information relating to a near miss that has occurred; and
outputting accident type information for the near miss that has occurred by referring, with the acquired near-miss information, to an accident type database that associates near-miss information with accident types relating to accidents that could result from the near miss.
 コンピュータが、
 ヒヤリハットの発生状況に関するヒヤリハット情報を入力情報とし、当該ヒヤリハットから発生が懸念される事故の事故種別を正解情報とする学習情報を用いて、AI(Artificial Intelligence)モデルに学習させる、
 AIモデル生成方法。
An AI model generation method in which a computer trains an AI (Artificial Intelligence) model using learning information in which near-miss information relating to the occurrence status of a near miss is used as input information and the accident type of an accident that could result from the near miss is used as correct-answer information.
 ヒヤリハットの発生状況に関するヒヤリハット情報を入力情報とし、当該ヒヤリハットから発生が懸念される事故の事故種別を正解情報とする学習情報を用いて、AI(Artificial Intelligence)モデルに学習させる手順をコンピュータに実行させる、
 AI生成プログラム。
An AI generation program that causes a computer to execute a procedure of training an AI (Artificial Intelligence) model using learning information in which near-miss information relating to the occurrence status of a near miss is used as input information and the accident type of an accident that could result from the near miss is used as correct-answer information.
PCT/JP2024/003950 2024-02-06 2024-02-06 Accident inference device, insurance fee examination device, on-vehicle equipment, terminal device, information processing system, accident inference method, accident inference program, ai model generation method, and ai generation program Pending WO2025169305A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2024/003950 WO2025169305A1 (en) 2024-02-06 2024-02-06 Accident inference device, insurance fee examination device, on-vehicle equipment, terminal device, information processing system, accident inference method, accident inference program, ai model generation method, and ai generation program

Publications (1)

Publication Number Publication Date
WO2025169305A1 (en) 2025-08-14

Family

ID=96699316

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2024/003950 Pending WO2025169305A1 (en) 2024-02-06 2024-02-06 Accident inference device, insurance fee examination device, on-vehicle equipment, terminal device, information processing system, accident inference method, accident inference program, ai model generation method, and ai generation program

Country Status (1)

Country Link
WO (1) WO2025169305A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017038166A1 (en) * 2015-08-28 2017-03-09 ソニー株式会社 Information processing device, information processing method, and program
WO2017145693A1 (en) * 2016-02-22 2017-08-31 パナソニックIpマネジメント株式会社 Safe driving assistance device and safe driving assistance method
JP2018169828A (en) * 2017-03-30 2018-11-01 パイオニア株式会社 Information processing device, information processing system, information processing method, and program
JP2020057079A (en) * 2018-09-28 2020-04-09 株式会社Subaru Notification device for vehicles
JP2020087008A (en) * 2018-11-27 2020-06-04 日本精機株式会社 Vehicle display device
WO2022025244A1 (en) * 2020-07-31 2022-02-03 矢崎総業株式会社 Vehicle accident prediction system, vehicle accident prediction method, vehicle accident prediction program, and trained model generation system

Similar Documents

Publication Publication Date Title
US20220286811A1 (en) Method for smartphone-based accident detection
US10902742B2 (en) Method and device for evaluating driving behavior
Georg et al. Teleoperated driving, a key technology for automated driving? comparison of actual test drives with a head mounted display and conventional monitors
CN112180605A (en) Auxiliary driving system based on augmented reality
WO2014141526A1 (en) Vehicle dangerous situation reproduction apparatus and method for using same
JP4814816B2 (en) Accident occurrence prediction simulation apparatus, method and program, safety system evaluation apparatus and accident alarm apparatus
Mortazavi et al. Effect of drowsiness on driving performance variables of commercial vehicle drivers
WO2019198179A1 (en) Passenger state determination device, alarm output control device, and passenger state determination method
JP2019195376A (en) Data processing apparatus, monitoring system, awakening system, data processing method, and data processing program
JP2021092962A (en) On-vehicle machine, processing device, and program
Sakai et al. Effects on user perception of a'modified'speed experience through in-vehicle virtual reality
CN117456796A (en) A device for simulating abnormal vehicle driving conditions for training drivers
WO2025169305A1 (en) Accident inference device, insurance fee examination device, on-vehicle equipment, terminal device, information processing system, accident inference method, accident inference program, ai model generation method, and ai generation program
JP5090891B2 (en) Safe driving teaching system
CN119809881A (en) Trainee safety operation monitoring device for driving skills training
Natarajan et al. Ai-naav: An ai enabled neurocognition aware autonomous vehicle
CN116758780A (en) A vehicle look-around early warning method, device and system
CN116403195A (en) A Fatigue Driving Detection Method, System and Device Based on Bus-by-Wire Control Chassis
Bachmann et al. Responsible integration of autonomous vehicles in an autocentric society
CN113299147A (en) Training system and training method based on traffic accident deep investigation
WO2021241261A1 (en) Information processing device, information processing method, program, and learning method
CN113777951A (en) Automatic driving simulation system and method for collision avoidance decision of weak road user
Pinilla et al. Intelligent driving diagnosis based on a fuzzy logic approach in a real environment implementation
JP7790333B2 (en) Driving diagnosis device
Vivian Towards Vision Zero using Virtual Reality

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24923986

Country of ref document: EP

Kind code of ref document: A1