
CN113743290B - Method and device for transmitting information to emergency call center for vehicle - Google Patents


Info

Publication number
CN113743290B
CN113743290B
Authority
CN
China
Prior art keywords
detection
bleeding
occupant
cabin
image information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111016361.1A
Other languages
Chinese (zh)
Other versions
CN113743290A
Inventor
邵昌旭
许亮
李轲
王飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202111016361.1A priority Critical patent/CN113743290B/en
Publication of CN113743290A publication Critical patent/CN113743290A/en
Priority to PCT/CN2022/078010 priority patent/WO2023029407A1/en
Priority to JP2024513513A priority patent/JP2024538860A/en
Priority to KR1020247009195A priority patent/KR20240046910A/en
Application granted
Publication of CN113743290B publication Critical patent/CN113743290B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G08B25/10: Alarm systems in which the location of the alarm condition is signalled to a central station, using wireless transmission systems
    • G08B21/02: Alarms for ensuring the safety of persons
    • A61B5/021: Measuring pressure in heart or blood vessels
    • A61B5/024: Measuring pulse rate or heart rate
    • A61B5/0816: Measuring devices for examining respiratory frequency
    • A61B5/1116: Determining posture transitions
    • A61B5/1128: Measuring movement of the entire body or parts thereof using image analysis
    • A61B5/4504: Bones
    • A61B5/7275: Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements
    • A61B5/747: Arrangements for interactive communication between patient and care services in case of emergency, i.e. alerting emergency services
    • G06T7/11: Region-based segmentation
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G06T7/90: Determination of colour characteristics
    • G06V10/422: Global feature extraction for representing the structure of the pattern or shape of an object
    • G06V10/56: Extraction of image or video features relating to colour
    • G06V20/59: Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/161: Human faces: detection; localisation; normalisation
    • H04W4/40: Services specially adapted for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W4/90: Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems [ETWS]
    • G06T2207/30104: Vascular flow; blood flow; perfusion


Abstract

The present disclosure relates to a method and an apparatus for a vehicle to transmit information to an emergency call center. The method includes: acquiring image information of an occupant in the cabin in response to an emergency call being triggered; detecting a bleeding condition of the occupant in the cabin based on the image information; and, in response to detecting a bleeding condition, transmitting the bleeding condition to the emergency call center.

Description

Method and device for transmitting information to emergency call center for vehicle
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a method and apparatus for transmitting information to an emergency call center for a vehicle, an electronic device, and a storage medium.
Background
During road transportation, automobiles may be involved in traffic accidents. If rescuers can obtain accident information and respond promptly, the parties to the accident can be rescued in time, reducing both property loss and casualties.
To enable rescuers to learn of accident information promptly, an on-board emergency call (Emergency Call, eCall) system can be integrated into an automobile; the eCall system is a typical application of the Internet of Vehicles. Based on technologies such as automotive sensing, mobile communication, and satellite positioning, the system contacts a public rescue center immediately after an accident occurs and automatically sends the vehicle's position and vehicle information to the rescue center, which dispatches rescue for the people involved after confirming the accident.
However, with a conventional emergency call function it is difficult to judge the injuries of occupants after an accident; the emergency call alone cannot confirm the injury situation of the people in the vehicle.
Disclosure of Invention
The present disclosure provides a technical solution for sending information.
According to an aspect of the present disclosure, there is provided a method for a vehicle to transmit information to an emergency call center, including:
acquiring image information of an occupant in the cabin in response to an emergency call being triggered;
detecting a bleeding condition of the occupant in the cabin based on the image information;
and in response to detecting a bleeding condition, sending the bleeding condition to the emergency call center.
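The three claimed steps (acquire on trigger, detect, report) can be sketched as a small pipeline. All function names, the frame format (a 2-D list of RGB tuples), and the simple red-pixel test are illustrative assumptions for this sketch, not part of the patent disclosure.

```python
# Hypothetical sketch of the claimed method flow: on an eCall trigger,
# grab a cabin image, run bleeding detection, and forward any finding.

def acquire_cabin_image(camera):
    """Stand-in for the OMS/DMS camera read; returns a 2-D pixel grid."""
    return camera()

def detect_bleeding(frame):
    """Toy detector: any sufficiently red pixel counts as possible blood."""
    red_pixels = sum(
        1 for row in frame for (r, g, b) in row if r > 150 and g < 80 and b < 80
    )
    return {"bleeding": red_pixels > 0, "red_pixels": red_pixels}

def on_emergency_call(camera, send_to_center):
    """Entry point invoked when the emergency call is triggered."""
    frame = acquire_cabin_image(camera)          # step 1: image information
    result = detect_bleeding(frame)              # step 2: bleeding condition
    if result["bleeding"]:
        send_to_center(result)                   # step 3: report only on a finding
    return result

# Usage with a synthetic 2x2 frame containing one blood-red pixel.
frame = [[(200, 30, 30), (90, 90, 90)], [(80, 80, 80), (70, 70, 70)]]
sent = []
report = on_emergency_call(lambda: frame, sent.append)
print(report["bleeding"], len(sent))  # True 1
```

Note the conditional send: nothing is transmitted when no bleeding is detected, matching the "in response to detecting" wording of the claim.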
In one possible implementation manner, the detecting, based on the image information, a bleeding condition of an occupant in the cabin includes:
performing face detection and/or human body detection on the image information to determine the occupant in the cabin;
and detecting blood on the face and/or body surface of the occupant to determine the bleeding condition of the occupant in the cabin.
In one possible implementation manner, the detecting, based on the image information, a bleeding condition of an occupant in the cabin includes:
detecting, based on the image information, whether the occupant is bleeding according to color information of blood and shape information of blood flow.
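The color-plus-shape test above can be sketched as a two-stage check: gate pixels on a blood-like color, then require the resulting region to have an elongated, streak-like shape. The RGB thresholds and the elongation cutoff are illustrative assumptions.

```python
# Hypothetical color + shape blood-flow test on a 2-D list of RGB tuples.

def is_blood_color(r, g, b):
    # Dark, saturated red: strong R channel dominating G and B.
    return r > 120 and r > 2 * g and r > 2 * b

def blood_mask(frame):
    """All (row, col) coordinates whose pixel passes the color gate."""
    return [
        (y, x)
        for y, row in enumerate(frame)
        for x, (r, g, b) in enumerate(row)
        if is_blood_color(r, g, b)
    ]

def looks_like_flow(mask, min_pixels=3, min_elongation=2.0):
    """Blood running down a surface tends to form a tall, narrow streak."""
    if len(mask) < min_pixels:
        return False
    ys = [y for y, _ in mask]
    xs = [x for _, x in mask]
    height = max(ys) - min(ys) + 1
    width = max(xs) - min(xs) + 1
    return height / width >= min_elongation

# A 4x3 frame with a vertical red streak in the middle column.
R, K = (180, 20, 20), (90, 90, 90)
frame = [[K, R, K], [K, R, K], [K, R, K], [K, K, K]]
mask = blood_mask(frame)
print(looks_like_flow(mask))  # True
```

Combining both cues reduces false alarms from red clothing or upholstery, which may match the color but not the streak shape.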
In one possible implementation manner, the detecting, based on the image information, a bleeding condition of an occupant in the cabin includes:
detecting a body surface area of the occupant in the cabin based on the image information;
dividing the body surface area of the occupant into a plurality of detection regions;
detecting blood information in each detection region to obtain a region detection result for each detection region;
and determining the bleeding condition of the occupant based on the region detection results of the detection regions.
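The region-based scheme above can be sketched as: split the occupant's body-surface bounding box into a grid of detection regions, scan each region for blood pixels, and combine the per-region results. The 2x2 grid and the pixel test are illustrative assumptions.

```python
# Hypothetical grid division of a body-surface box into detection regions.

def grid_regions(width, height, cols, rows):
    """Split a (width x height) box into cols*rows rectangular regions."""
    rw, rh = width // cols, height // rows
    return [
        (c * rw, r * rh, rw, rh)       # (x, y, w, h) of each region
        for r in range(rows)
        for c in range(cols)
    ]

def region_blood_count(frame, region, is_blood):
    """Count blood-colored pixels inside one region."""
    x, y, w, h = region
    return sum(
        1
        for yy in range(y, y + h)
        for xx in range(x, x + w)
        if is_blood(frame[yy][xx])
    )

def detect_per_region(frame, cols=2, rows=2):
    h, w = len(frame), len(frame[0])
    is_blood = lambda p: p[0] > 120 and p[1] < 60 and p[2] < 60
    results = [
        region_blood_count(frame, reg, is_blood)
        for reg in grid_regions(w, h, cols, rows)
    ]
    return results, any(n > 0 for n in results)

# Synthetic 4x4 body-surface crop with blood in the upper-left region.
R, K = (180, 20, 20), (90, 90, 90)
frame = [[R, R, K, K], [R, K, K, K], [K, K, K, K], [K, K, K, K]]
counts, bleeding = detect_per_region(frame)
print(counts, bleeding)  # [3, 0, 0, 0] True
```

Per-region results let the later steps localise the bleeding and weigh how many regions are affected, rather than producing one global yes/no.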
In one possible implementation, the detecting a body surface area of the occupant in the cabin based on the image information includes:
detecting a face surface area of the occupant in the cabin based on the image information;
and the dividing the body surface area of the occupant into a plurality of detection regions includes:
dividing the face surface area of the occupant into a plurality of detection regions.
In one possible implementation, the detecting blood information in each detection region to obtain a region detection result for each detection region includes:
determining a first confidence that a bleeding condition exists in each detection region based on the shape and area of blood flow in that region;
determining whether connected blood flow exists between adjacent detection regions;
and in response to determining that blood flow crosses the border between a first detection region and an adjacent second detection region, raising the confidence of the first detection region and the second detection region to a second confidence.
The determining the bleeding condition of the occupant based on the region detection results of the detection regions includes:
determining that the occupant is bleeding if the first confidence or the second confidence exceeds a confidence threshold;
and determining a severity of bleeding based on the area of blood flow in each detection region, the severity of bleeding being positively correlated with the number of detection regions containing blood flow and with the area of blood flow in each such region.
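The two-level confidence rule above can be sketched as follows: each region gets a first confidence from the blood area within it, and when blood flow crosses the border between two adjacent regions, both are raised to a higher second confidence. The specific numbers, the scoring formula, and the region names are illustrative assumptions.

```python
# Hypothetical per-region confidence scoring with a boost for connected flow.

CONF_THRESHOLD = 0.6

def first_confidence(blood_area, region_area):
    """More covered area means more confidence, capped at 0.5 for one region."""
    return min(0.5, blood_area / region_area)

def score_regions(regions, adjacency, border_blood):
    """regions: {name: (blood_area, region_area)};
    adjacency: iterable of (a, b) neighbour pairs;
    border_blood: set of pairs whose shared border carries blood flow."""
    conf = {
        name: first_confidence(blood, area)
        for name, (blood, area) in regions.items()
    }
    for a, b in adjacency:
        if (a, b) in border_blood or (b, a) in border_blood:
            # Connected flow spanning two regions: raise both to a
            # second, higher confidence.
            boosted = max(conf[a], conf[b]) + 0.3
            conf[a] = conf[b] = min(1.0, boosted)
    bleeding = any(c > CONF_THRESHOLD for c in conf.values())
    severity = sum(blood for blood, _ in regions.values())  # area-based proxy
    return conf, bleeding, severity

regions = {"forehead": (40, 100), "cheek": (35, 100), "chin": (0, 100)}
adjacency = [("forehead", "cheek"), ("cheek", "chin")]
conf, bleeding, severity = score_regions(regions, adjacency, {("forehead", "cheek")})
print(bleeding, severity)  # True 75
```

Here neither region alone clears the threshold (0.4 and 0.35), but the connected streak across their border lifts both to 0.7, so the occupant is judged to be bleeding.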
In one possible implementation manner, the detecting, based on the image information, a bleeding condition of an occupant in the cabin includes:
in response to detecting, based on the image information, that the occupant in the cabin is bleeding, determining the bleeding body part and the direction of blood flow;
and taking, based on the bleeding body part and the direction of blood flow, the body part at which the starting end of the blood flow is located as the bleeding site.
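The source-localisation rule above can be sketched as: given the pixels of a blood streak and a mapping from pixels to body parts, report the part at the streak's starting end. Since blood flows downward under gravity, the start is taken as the topmost point in image coordinates; the part boundaries here are illustrative assumptions.

```python
# Hypothetical bleeding-site localisation from the start of a blood streak.

def bleeding_source(streak_points, part_of):
    """streak_points: (y, x) pixels of the detected blood streak;
    part_of: maps a point to a body-part label."""
    start = min(streak_points)          # smallest y = topmost = flow start
    return part_of(start)

def face_part(point):
    """Toy vertical partition of a face crop into three parts."""
    y, _ = point
    if y < 20:
        return "forehead"
    if y < 40:
        return "cheek"
    return "chin"

streak = [(12, 30), (25, 31), (38, 31), (45, 32)]  # runs forehead -> chin
print(bleeding_source(streak, face_part))  # forehead
```

Without this step, a streak spanning several parts would be ambiguous; following the flow back to its start picks out the wound rather than the wetted area below it.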
In one possible implementation, the method further includes:
determining the body posture of the occupant in the cabin from the image information;
and determining that the occupant in the cabin has an abnormal body posture in a case where the body posture is a preset abnormal posture and the duration of the abnormal posture exceeds a set duration.
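The duration rule above keeps a momentary pose from being flagged: a preset abnormal posture (say, the occupant slumped over) only counts once it has persisted longer than the set duration. The posture labels and the 5-second threshold below are illustrative assumptions.

```python
# Hypothetical duration-gated abnormal posture check over timestamped samples.

ABNORMAL_POSTURES = {"slumped", "lying_across_seat"}
SET_DURATION = 5.0  # seconds

def is_abnormal(samples, now):
    """samples: chronological (timestamp, posture) pairs up to `now`."""
    start = None
    for t, posture in samples:
        if posture in ABNORMAL_POSTURES:
            start = t if start is None else start   # remember when it began
        else:
            start = None                            # posture recovered; reset
    return start is not None and now - start > SET_DURATION

samples = [(0.0, "upright"), (1.0, "slumped"), (2.0, "slumped"), (7.5, "slumped")]
print(is_abnormal(samples, now=7.5))  # True
```

A brief slump at 1.0 s would not trigger on its own; only the unbroken 6.5 s run past the threshold does.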
In one possible implementation, the method further includes:
determining that the occupant in the cabin has a fracture in a case where the body posture of the occupant in the cabin is determined to be a preset fracture posture.
In one possible implementation, the method further includes:
determining vital sign indicators of the occupant based on the image information, the vital sign indicators including at least one of:
respiratory rate, blood pressure, heart rate;
and sending the vital sign indicators to the emergency call center.
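Vital signs can plausibly be estimated from video alone, for example heart rate from the tiny periodic brightness change of facial skin (remote photoplethysmography). The toy sketch below counts peaks in a per-frame mean-brightness trace; the synthetic signal and the peak-counting method are illustrative assumptions, not the algorithm disclosed by the patent.

```python
# Hypothetical rPPG-style heart-rate estimate from a brightness trace.
import math

def heart_rate_bpm(brightness, fps):
    """Count local maxima of the trace and convert to beats per minute."""
    peaks = sum(
        1
        for i in range(1, len(brightness) - 1)
        if brightness[i - 1] < brightness[i] > brightness[i + 1]
    )
    duration_s = len(brightness) / fps
    return 60.0 * peaks / duration_s

# Synthetic 10 s trace at 30 fps with a 1.2 Hz (72 bpm) pulse component.
fps, seconds, f = 30, 10, 1.2
trace = [100 + math.sin(2 * math.pi * f * i / fps) for i in range(fps * seconds)]
print(round(heart_rate_bpm(trace, fps)))  # 72
```

A real implementation would band-pass filter the signal and average over a skin region; this sketch only shows how a frame-rate-timed trace maps to beats per minute.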
In one possible implementation, the method further includes:
determining an injury severity level of the occupant in the cabin based on at least one of the determined bleeding condition, abnormal body posture, and vital sign indicators of the occupant;
and sending the injury severity level to the emergency call center.
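The severity grading above fuses the bleeding finding, the abnormal-posture flag, and the vital signs into one level for the call center. The three-level scale, the scoring weights, and the cutoffs below are illustrative assumptions.

```python
# Hypothetical fusion of the three cues into an injury severity level.

def injury_severity(bleeding_area, abnormal_posture, heart_rate):
    score = 0
    if bleeding_area > 50:                     # large visible blood area
        score += 2
    elif bleeding_area > 0:
        score += 1
    if abnormal_posture:                       # sustained abnormal posture
        score += 1
    if heart_rate < 50 or heart_rate > 120:    # outside a normal resting band
        score += 2
    return "severe" if score >= 3 else "moderate" if score >= 1 else "minor"

print(injury_severity(bleeding_area=80, abnormal_posture=True, heart_rate=72))
# severe
```

Reporting one level rather than raw detections lets a dispatcher triage many simultaneous eCalls at a glance, which is the scheduling benefit the disclosure describes.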
According to an aspect of the present disclosure, there is provided an apparatus for transmitting information to an emergency call center for a vehicle, including:
an image information acquisition unit, configured to acquire image information of an occupant in the cabin in response to an emergency call being triggered;
a bleeding condition detection unit, configured to detect a bleeding condition of the occupant in the cabin based on the image information;
and a bleeding condition sending unit, configured to send the bleeding condition to the emergency call center in response to detecting the bleeding condition.
In one possible implementation, the bleeding condition detection unit includes:
an occupant detection subunit, configured to perform face detection and/or human body detection on the image information to determine the occupant in the cabin;
and a first bleeding condition determination subunit, configured to detect blood on the face and/or body surface of the occupant to determine the bleeding condition of the occupant in the cabin.
In one possible implementation, the bleeding condition detection unit is configured to detect, based on the image information, whether the occupant is bleeding according to color information of blood and shape information of blood flow.
In one possible implementation, the bleeding condition detection unit includes:
a body surface region detection subunit, configured to detect a body surface area of the occupant in the cabin based on the image information;
a detection region dividing subunit, configured to divide the body surface area of the occupant into a plurality of detection regions;
a region detection result determining subunit, configured to detect blood information in each detection region to obtain a region detection result for each detection region;
and a second bleeding condition determination subunit, configured to determine the bleeding condition of the occupant based on the region detection results of the detection regions.
In one possible implementation, the body surface region detection subunit is configured to detect a face surface area of the occupant in the cabin based on the image information;
and the detection region dividing subunit is configured to divide the face surface area of the occupant into a plurality of detection regions.
In one possible implementation, the region detection result determining subunit is configured to: determine a first confidence that a bleeding condition exists in each detection region based on the shape and area of blood flow in that region; and, in response to determining that blood flow crosses the border between a first detection region and an adjacent second detection region, raise the confidence of the first detection region and the second detection region to a second confidence.
The second bleeding condition determination subunit is configured to determine that the occupant is bleeding if the first confidence or the second confidence exceeds a confidence threshold, and to determine a severity of bleeding based on the area of blood flow in each detection region, the severity of bleeding being positively correlated with the number of detection regions containing blood flow and with the area of blood flow in each such region.
In one possible implementation, the bleeding condition detection unit includes:
a bleeding part detection subunit, configured to determine the bleeding body part and the direction of blood flow in response to detecting, based on the image information, that the occupant in the cabin is bleeding;
and a bleeding part determination subunit, configured to take, based on the bleeding body part and the direction of blood flow, the body part at which the starting end of the blood flow is located as the bleeding site.
In one possible implementation, the apparatus further includes:
a body posture determining unit, configured to determine the body posture of the occupant in the cabin based on the image information;
and an abnormal body posture determining unit, configured to determine that the occupant in the cabin has an abnormal body posture in a case where the body posture is a preset abnormal posture and the duration of the abnormal posture exceeds a set duration.
In one possible implementation, the apparatus further includes:
a fracture condition detection unit, configured to determine that the occupant in the cabin has a fracture in a case where the body posture of the occupant is determined to be a preset fracture posture.
In one possible implementation, the apparatus further includes:
a vital sign indicator determining unit, configured to determine vital sign indicators of the occupant based on the image information, the vital sign indicators including at least one of:
respiratory rate, blood pressure, heart rate;
and a vital sign indicator sending unit, configured to send the vital sign indicators to the emergency call center.
In one possible implementation, the apparatus further includes:
an injury severity level determination unit, configured to determine an injury severity level of the occupant in the cabin based on at least one of the determined bleeding condition, abnormal body posture, and vital sign indicators of the occupant;
and an injury severity level sending unit, configured to send the injury severity level to the emergency call center.
According to an aspect of the present disclosure, there is provided an electronic device including a processor and a memory for storing processor-executable instructions, wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, image information of the occupant in the cabin is acquired in response to an emergency call being triggered, the bleeding condition of the occupant in the cabin is detected based on the image information, and the bleeding condition is then sent to the emergency call center in response to the bleeding condition being detected. In this way, the injuries of the occupants in an accident can be determined, so that the emergency call center can reasonably dispatch rescue resources according to those injuries, and the call-center operator's inquiry process can be shortened or omitted in scenarios of severe bleeding, enabling rescue to begin within seconds.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
Fig. 1 shows a flowchart of an information transmission method according to an embodiment of the present disclosure;
fig. 2 shows a block diagram of an information transmitting apparatus according to an embodiment of the present disclosure;
FIG. 3 illustrates a block diagram of an electronic device, according to an embodiment of the present disclosure;
fig. 4 shows a block diagram of an electronic device, according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is merely an association relationship describing the associated object, and means that three relationships may exist, for example, a and/or B may mean that a exists alone, while a and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
As described in the background, the emergency call service can shorten rescue time and reduce the mortality of people injured in vehicle accidents. However, in the related art it is difficult to determine the degree of damage after an accident, let alone confirm the casualties among the people in the vehicle; and when the rescue center receives many emergency calls, it cannot reasonably dispatch its rescue resources.
In the embodiments of the present disclosure, image information of the occupant in the cabin is acquired in response to an emergency call being triggered, the bleeding condition of the occupant in the cabin is detected based on the image information, and the bleeding condition is then sent to the emergency call center in response to the bleeding condition being detected. In this way, the injuries of the occupants in an accident can be determined, so that the emergency call center can reasonably dispatch rescue resources according to those injuries, and the call-center operator's inquiry process can be shortened or omitted in scenarios of severe bleeding.
In one possible implementation, the execution body of the method may be an intelligent driving control device mounted on a vehicle. In another possible implementation, the method may be performed by a terminal device, a server, or another processing device. The terminal device may be a vehicle-mounted device, User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, or a wearable device. The vehicle-mounted device may be a vehicle, a domain controller in a vehicle cabin, or a device host for executing the information sending method in an ADAS (Advanced Driving Assistance System), an OMS (Occupant Monitoring System), a DMS (Driver Monitoring System), or the like. In some possible implementations, the information sending method may be implemented by a processor calling computer-readable instructions stored in a memory.
For convenience of description, in one or more embodiments of the present disclosure, the execution body of the method for a vehicle to transmit information to an emergency call center may be a vehicle-mounted device in the vehicle; hereinafter, embodiments of the method are described with the vehicle-mounted device as the execution body. It will be appreciated that the vehicle-mounted device is merely an exemplary execution body and should not be construed as limiting the method. Fig. 1 illustrates a flowchart of a method for a vehicle to transmit information to an emergency call center according to an embodiment of the present disclosure. As illustrated in Fig. 1, the method includes:
In step S11, in response to the emergency call being triggered, acquiring image information of an occupant in the cabin.
The image information is image information of an occupant in a vehicle cabin. The vehicle may be at least one of a private car, a shared car, a ride-hailing vehicle, a taxi, a truck, or another type of vehicle; the present disclosure does not limit the specific type of vehicle.
The image information may be image information of the area where an occupant in the cabin is located, and may be acquired by a vehicle-mounted image acquisition device provided inside or outside the cabin of the vehicle. The device may be a vehicle-mounted camera, or an image acquisition apparatus provided with a camera, and may be used to collect image information inside or outside the vehicle.
For example, the camera may include a camera in a DMS and/or a camera in an OMS, etc., which may be used to collect image information of the interior of the vehicle, and a camera in an ADAS, which may be used to collect image information of the exterior of the vehicle. Of course, the vehicle-mounted image capturing device may also be a camera in other systems, or may also be a separately configured camera, and the embodiment of the disclosure does not limit a specific vehicle-mounted image capturing device.
The carrier of the image information may be a two-dimensional image or video, for example a visible light image/video or an infrared image/video, or may be a three-dimensional image formed from a point cloud scanned by a radar. This may depend on the actual application scenario and is not limited by the present disclosure.
The image information can be obtained through a communication connection with the vehicle-mounted image acquisition device. In one example, the vehicle-mounted image acquisition device may transmit the collected image information in real time to a vehicle-mounted controller or a remote server via a bus or a wireless communication channel, which receives the real-time image information.
In step S12, detecting bleeding of the passenger in the cabin based on the image information;
Whether blood is present on the occupant may be detected based on image processing techniques, so as to determine whether the occupant is bleeding. In one example, the bleeding condition of the occupant may be detected through a neural network, or blood in the image may be detected through a target detection technique such as threshold segmentation. For specific implementations of detecting the bleeding condition of the occupant in the cabin, reference may be made to the possible implementations provided in the present disclosure, which are not repeated here.
In step S13, in response to detecting a bleeding situation, the bleeding situation is sent to an emergency call center.
The bleeding condition sent to the emergency call center may take various forms: for example, simply whether or not a bleeding condition exists for the cabin occupant; or, when a bleeding condition exists, more specific information may be sent, such as the specific site of bleeding and the severity of bleeding.
Thus, by sending the bleeding situation to the emergency call center, the emergency call center can determine whether the bleeding situation exists at the emergency call initiator, and provide targeted rescue measures, such as carrying corresponding hemostatic articles, transfusion articles, dispatching doctors handling the bleeding situation, and the like, when the bleeding situation is determined to exist.
In the embodiment of the present disclosure, image information of the occupant in the cabin is acquired in response to the emergency call being triggered, the bleeding condition of the occupant in the cabin is detected based on the image information, and the bleeding condition is then sent to the emergency call center in response to the bleeding condition being detected. The injury condition of the occupant in an accident can thereby be determined, so that the emergency call center can reasonably schedule rescue forces according to the injury condition and, in scenarios with serious bleeding, shorten or omit the inquiry process of call center personnel, racing against the clock in the rescue.
In one possible implementation, detecting the bleeding condition of the occupant in the cabin based on the image information includes: performing face detection and/or human body detection on the image information to determine the occupant in the cabin; and performing blood detection on the face and/or body surface of the occupant to determine the bleeding condition of the occupant in the cabin.
After the image information in the cabin is obtained, human body detection and/or human face detection can be performed on the cabin based on the image information, so that a human body detection result and/or a human face detection result in the cabin are obtained, and a passenger detection result in the cabin can be obtained based on the human body detection result and/or the human face detection result in the cabin. For example, the human body detection result and/or the human face detection result in the cabin may be used as the occupant detection result in the cabin. For another example, the detection result of the passenger in the cabin may be obtained after the detection result of the human body and/or the detection result of the human face in the cabin are processed.
In the embodiment of the disclosure, human body detection and/or human face detection is performed in the cabin to obtain a human body detection result and/or a human face detection result in the cabin, wherein the detection result comprises position information of a human body and/or a human face. For example, in the case where one occupant is detected, the occupant detection result may include position information of the occupant, and in the case where a plurality of occupants are detected, the occupant detection result may include position information of each of the detected occupants.
The position information of the occupant may be represented using position information of a bounding box of the occupant. Then, in the image framed by the bounding box, blood detection is performed on the occupant. Alternatively, the position information of the occupant may be represented by the position information of the boundary contour of the occupant, and then blood detection is performed on the occupant in an image surrounded by the boundary contour.
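As a minimal sketch of this step, the image can be restricted to the occupant's bounding box before blood detection, so that later stages only process the framed region. The `(x, y, w, h)` box format, the nested-list image representation, and the function name are assumptions for illustration, not taken from the patent.

```python
def crop_to_bounding_box(image, box):
    """Return the sub-image framed by the bounding box (x, y, w, h).

    image: a 2-D grid (list of rows) of pixels; box: top-left corner
    (x, y) plus width w and height h, all in pixel coordinates.
    """
    x, y, w, h = box
    # Slice the row range first (vertical extent), then each row (horizontal extent).
    return [row[x:x + w] for row in image[y:y + h]]
```

Blood detection would then run only on the cropped sub-image, which is what narrows the search range as described above.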
Specifically, when face detection detects the face of an occupant, blood detection may be performed on the face of the occupant to determine the bleeding condition of the occupant in the cabin; when human body detection detects the body of an occupant, blood detection may be performed on the body surface of the occupant to determine the bleeding condition of the occupant in the cabin.
In the embodiment of the present disclosure, face detection and/or human body detection is performed on the image information to determine the occupant in the cabin, and blood detection is then performed on the face and/or body surface of the occupant to determine the bleeding condition of the occupant in the cabin. Restricting blood detection to the face and/or body surface narrows the image range for subsequent blood detection, which improves detection efficiency, and reduces interference from other regions with the blood detection, which improves the accuracy of bleeding detection.
Blood detection may be implemented based on the color information of blood and the shape information of blood flow. Accordingly, in one possible implementation, detecting the bleeding condition of the occupant in the cabin based on the image information includes: detecting whether the occupant is bleeding based on the color information of blood and the shape information of blood flow.
After bleeding, the color of blood is typically bright red. In a computer, color is represented by defining color parameters in a color space. A commonly used color space is the red-green-blue (RGB) color space; in addition, the related art includes color spaces such as HSL, LMS, CMYK, CIE YUV, HSB (HSV), and YCbCr, based on standards formulated by the International Commission on Illumination (Commission Internationale de l'Eclairage, CIE) standard colorimetry system. Different color spaces have different characteristics, and color parameters can be converted between them.
According to the definition of the color parameters in a given color space, the corresponding color parameters can be parsed from the image information. The default color space of most images stored in a computer is the RGB color space, which comprises three color components, red (R), green (G), and blue (B), each with a value range of 0-255. When the computer reads the image information from a storage medium, it can obtain, through digital image processing, the three components of each pixel in the default color space, that is, the color parameters of the image information in that color space.
Each pixel carries a color parameter, and different color parameters represent different colors, so the color of a pixel can be determined from its color parameter. The color of blood corresponds to a range of color parameters; in one example, taking the RGB color space, the color parameter range of blood may be represented as (150, 0, 0)-(250, 50, 50). When the color parameter of a pixel falls within this range, it may be regarded as the color of blood, and further verification is then performed according to the shape of the blood flow.
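The range check above can be sketched as follows. This is an illustrative implementation only: the example range (150, 0, 0)-(250, 50, 50) comes from the text, while the function names and the nested-list image representation are assumptions.

```python
# Example RGB range for blood-colored pixels, per the example in the text.
BLOOD_LOWER = (150, 0, 0)
BLOOD_UPPER = (250, 50, 50)

def is_blood_colored(pixel):
    """Return True if an (R, G, B) pixel lies within the blood color range."""
    return all(lo <= c <= hi
               for c, lo, hi in zip(pixel, BLOOD_LOWER, BLOOD_UPPER))

def blood_pixel_mask(image):
    """Map a 2-D grid of (R, G, B) pixels to a boolean mask of candidate blood pixels."""
    return [[is_blood_colored(px) for px in row] for row in image]
```

The resulting mask only marks candidate pixels; as the text notes, the shape of the connected region is then checked to distinguish real blood flow from similarly colored clothing or accessories.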
In addition, the detection of the bleeding condition can be realized based on neural network technology. When images are processed through a neural network, image features are typically extracted through convolution operations over the pixel values of the image; the color information of blood is thereby extracted into a high-dimensional feature representation, and the neural network can determine which high-dimensional features belong to the color of blood, so that the bleeding condition can be detected.
Because the color of attachments such as the occupant's clothes and accessories may be similar to the color of blood, whether the occupant is bleeding can be further determined according to the shape of the blood flow, which can be obtained from the shape of real blood flow in images.
Blood detection may be implemented by means of a deep neural network, for example an attention-based Seq2Seq model or a TensorFlow-based model. The neural network may be a pre-trained network, or may be trained on an image data set whose content contains occupant bleeding, with the bleeding areas annotated, according to the characteristics of occupant bleeding images. A network trained in this way achieves higher accuracy in detecting blood, so that blood on the face and/or body surface of the occupant can be accurately detected.
In the embodiment of the disclosure, by detecting whether the occupant is bleeding or not based on the image information based on the color information of blood and the shape information of blood flow, the bleeding condition of the occupant in the cabin can be accurately detected.
In one possible implementation, detecting the bleeding condition of the occupant in the cabin based on the image information includes: detecting the body surface region of the occupant in the cabin based on the image information; dividing the body surface region of the occupant into a plurality of detection areas; detecting blood information in each detection area to obtain an area detection result for each detection area; and determining the bleeding condition of the occupant based on the area detection results of the detection areas.
Based on the image information, the body surface region of the cabin occupant can be detected and represented by coordinates in the image, and this region can then be divided into a plurality of detection areas. The division can be performed in various ways. For example, the body surface region may be divided by a grid, splitting the exposed parts of the body surface (such as the face and neck) into grid cells of equal area. As another example, the body surface region may be divided according to body parts: the arm may form one detection area, the chest another, the abdomen another, and so on. When dividing according to body parts, human body key point detection may be performed to obtain key points marking the body parts, and the body surface region is then divided according to these key points to obtain the plurality of detection areas. The body surface region may also be divided in other ways, which the present disclosure does not limit.
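The equal-area grid variant described above can be sketched as follows. The `(x, y, w, h)` region format and the function name are assumptions for illustration; a keypoint-based division would replace the uniform grid with part-specific boxes.

```python
def divide_into_grid(region, rows, cols):
    """Split a body surface region (x, y, w, h) into rows*cols equal-area
    detection areas, each returned in the same (x, y, w, h) format."""
    x, y, w, h = region
    cell_w, cell_h = w / cols, h / rows
    return [
        (x + c * cell_w, y + r * cell_h, cell_w, cell_h)
        for r in range(rows)      # row-major order: top row first
        for c in range(cols)
    ]
```

Each returned cell then serves as one detection area in which blood information is detected independently.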
After obtaining the plurality of detection areas, blood information may be detected in each detection area, and for a specific way of detecting blood information, reference may be made to possible implementation manners provided in the present disclosure, for example, blood may be detected according to a blood color and a blood flow shape, or blood may be detected through a trained neural network, which is not described herein.
After blood information is detected in each detection area, the area detection result of each detection area is obtained. The area detection result may indicate whether bleeding exists; when bleeding exists, it may further specify the bleeding area, and the specific position of the blood flow within the detection area may be detected.
Then, based on the area detection results of the detection areas, the results may be fused by weighting to determine the bleeding condition of the occupant; alternatively, the confidence that the occupant is bleeding and the severity of bleeding may be determined by integrating the detection results of the areas. Reference may be made to the possible implementations provided in the present disclosure, which are not repeated here.
In the embodiment of the disclosure, the body surface area of the passenger in the cabin is detected based on the image information, the body surface area of the passenger is divided into a plurality of detection areas, and the blood information is detected in each detection area to obtain the area detection result of each detection area, so that the bleeding condition of the passenger is determined according to the area detection result of each detection area, and the accuracy of the determined bleeding condition can be improved.
In one possible implementation manner, the detecting the body surface area of the passenger in the cabin based on the image information comprises detecting the face surface area of the passenger in the cabin based on the image information, and the dividing the body surface area of the passenger into a plurality of detection areas comprises dividing the face surface area of the passenger into a plurality of detection areas.
Based on the image information, the face surface region of the occupant in the cabin can be detected and represented by coordinates in the image, and this region can be divided into a plurality of detection areas. The division can be performed in various ways: for example, the face surface region may be divided by a grid of equal-area cells; or it may be divided according to facial parts, with the forehead forming one detection area, each side of the nose forming a detection area, the mouth and the part below it forming another, and so on. The face surface region may also be divided in other ways, which the present disclosure does not limit.
In the embodiment of the disclosure, the face surface area of the passenger in the cabin is detected based on the image information, and then the face surface area of the passenger is divided into a plurality of detection areas, so that the bleeding condition of the face of the passenger is determined according to the area detection results of the detection areas, and the accuracy of the determined bleeding condition can be improved.
In one possible implementation, detecting blood information in each detection area to obtain the area detection result of each detection area includes: determining a first confidence that blood flow exists in each detection area based on the shape and area of the blood flow in the detection area; determining whether connected blood flow exists between adjacent detection areas; and, in response to determining that connected blood flow exists between a first detection area and an adjacent second detection area, raising the confidences of the first detection area and the second detection area to second confidences. Determining the bleeding condition of the occupant based on the area detection results of the detection areas includes determining the severity of bleeding based on the blood flow area in each detection area, the severity of bleeding being positively correlated with the blood flow area of each detection area.
When detecting blood flow in the image information based on image processing techniques, the task can be regarded as a binary classification problem of determining whether each pixel in the image belongs to blood flow, which can be realized with image segmentation techniques in deep learning. The positions of the pixels belonging to blood flow in a detection area are thereby obtained, and the connected pixels classified as blood flow form a blood flow region, that is, the blood flow shape. The blood flow area may be measured as the number of pixels classified as blood flow in the image, or converted into the real area on the surface of the occupant.
Based on the shape and area of the blood flow in each detection region, a first confidence level that a blood flow condition exists in each detection region can be determined, and in particular can be determined based on a neural network, the first confidence level characterizing the confidence level that a blood flow condition exists in a single detection region. The first confidence level may be positively correlated with the area of the blood flow in the region, i.e., the greater the area, the higher the first confidence level, and the closer the shape of the blood flow in the detection region is to the shape of the true blood flow, the higher the first confidence level.
After the first confidence of each detection area is obtained, whether connected blood flow exists between detection areas can be further determined, specifically according to the positions of the blood determined in the detection areas. These positions may be positions in the image information; if the blood positions in two detection areas are adjacent in the image information, connected blood flow between the detection areas can be determined.
Blood flow at the junction between detection areas indicates that the blood flow area of the occupant is greater than the blood flow area in a single detection area, so the confidence that the occupant is bleeding should be further increased. Therefore, in response to determining that there is blood flow at the junction of a first detection area and an adjacent second detection area, the confidences of the first detection area and the second detection area are raised to second confidences, increasing the confidence that blood flow exists in those areas.
For example, the first confidence levels of the first detection region and the second detection region are 0.6 and 0.7, respectively, and when it is determined that there is a blood flow in contact with the first detection region and the second detection region, the confidence levels of the first detection region and the second detection region may be raised to 0.7 and 0.8, respectively, to obtain the second confidence level. It should be noted that, the specific magnitude of the first confidence increase may be determined according to actual requirements, which is not specifically limited in this disclosure.
Because the confidence level can represent the confidence level of the bleeding condition of the passenger, a confidence threshold value can be preset, and the bleeding of the passenger can be determined under the condition that the first confidence level or the second confidence level exceeds the confidence threshold value. The specific setting of the confidence threshold may be determined according to actual needs, which is not specifically limited by the present disclosure.
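The confidence logic above can be sketched as follows. The fixed raise step of 0.1 (matching the 0.6/0.7 to 0.7/0.8 example) and the 0.75 threshold are illustrative values only, as the text leaves both to be set according to actual requirements.

```python
CONFIDENCE_STEP = 0.1        # illustrative raise from first to second confidence
CONFIDENCE_THRESHOLD = 0.75  # illustrative preset confidence threshold

def raise_connected_confidences(first_confidences, connected_pairs):
    """Raise the confidence of each pair of adjacent areas that share a
    connected blood flow, yielding the second confidences (capped at 1.0)."""
    second = dict(first_confidences)
    for a, b in connected_pairs:
        second[a] = min(1.0, second[a] + CONFIDENCE_STEP)
        second[b] = min(1.0, second[b] + CONFIDENCE_STEP)
    return second

def occupant_is_bleeding(confidences):
    """The occupant is judged to be bleeding if any area exceeds the threshold."""
    return any(c > CONFIDENCE_THRESHOLD for c in confidences.values())
```

With the text's example values, areas at 0.6 and 0.7 that share a connected flow are raised to 0.7 and 0.8, and the 0.8 area then crosses the threshold.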
Further, since a greater blood flow area indicates more severe bleeding, in one possible implementation the severity of bleeding may be determined based on the blood flow area in each detection area, the severity of bleeding being positively correlated with the blood flow area of each detection area.
In the embodiment of the present disclosure, a first confidence that bleeding exists in each detection area is determined based on the shape and area of the blood flow in the detection area; whether connected blood flow exists between adjacent detection areas is determined; in response to determining that connected blood flow exists between a first detection area and an adjacent second detection area, the confidences of the first and second detection areas are raised to second confidences; and the occupant is determined to be bleeding when the first confidence or the second confidence exceeds the confidence threshold. The accuracy of the determined bleeding condition can thereby be improved. In addition, the severity of bleeding is determined based on the blood flow area in each detection area and sent to the emergency call center, so that the emergency call center can know the severity of the caller's bleeding in time, reasonably schedule rescue forces, and provide targeted rescue.
In one possible implementation, detecting the bleeding condition of the occupant in the cabin based on the image information includes: in response to detecting that the occupant in the cabin is bleeding based on the image information, determining the bleeding body parts and the direction of the blood flow; and, based on the bleeding body parts and the direction of the blood flow, taking the body part at the starting end of the blood flow as the bleeding site.
The body parts on the occupant's body surface can be determined based on human body key point detection, so that the body part where the blood is located can be determined. Because of the fluidity of blood, blood may span multiple body parts after bleeding starts at one site; therefore, the direction of the blood flow can be further determined, and the body part at the starting end of the blood flow is taken as the bleeding site based on that direction.
The direction of the blood flow may be determined based on a plurality of video frames in the image information: blood detection is performed in the video frames, and the direction in which the blood flow gradually increases across the frames is taken as the direction of the blood flow.
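As a simplified sketch of locating the starting end of the flow, suppose the blood area measured in each body part is available for successive video frames; the part where blood first appears is then taken as the presumed bleeding site. The per-frame dictionary format and part names are assumptions for the example.

```python
def find_bleeding_part(areas_per_frame):
    """areas_per_frame: list of {body_part: blood_area} dicts, in frame order.

    Returns the body part at the starting end of the blood flow, i.e. the
    part where blood is first observed (breaking ties by largest area),
    or None if no blood is detected in any frame.
    """
    for frame in areas_per_frame:
        bleeding = [part for part, area in frame.items() if area > 0]
        if bleeding:
            # Earliest frame with blood: its largest region marks the flow origin.
            return max(bleeding, key=lambda part: frame[part])
    return None
```

A production system would instead track the growth direction of the segmented blood region, but the temporal "first appearance" idea is the same.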
In the embodiment of the disclosure, in response to detecting bleeding of an occupant in the cabin based on the image information, a body part of the bleeding and a direction of the blood flow are determined, and the body part where a starting end of the blood flow is located is taken as a bleeding part based on the body part of the bleeding and the direction of the blood flow. Therefore, the bleeding part can be accurately determined, the bleeding part is used as a bleeding condition to be sent to the emergency call center, the emergency call center can conveniently and clearly determine the bleeding part of a calling party, and rescue force can be reasonably scheduled according to the bleeding part, so that targeted rescue can be provided.
In one possible implementation, the method further includes: determining the body posture of the occupant in the cabin according to the image information; and determining that the body posture of the occupant in the cabin is an abnormal body posture when the body posture is a preset abnormal body posture and the duration of the abnormal body posture exceeds a set duration.
In the embodiment of the present disclosure, the body posture of the occupant in the cabin can be determined based on the image information, for example by image recognition techniques. Body posture detection may be based on human body key point detection. As an example of this implementation, a plurality of human body key points to be detected may be preset; for example, the human skeleton may be set to include 17 key points, each indicating a part of the human body. By detecting these 17 key points, the positional relationship between the parts of the human body can be obtained from the positional relationship between the key points, which is a concrete expression of the body posture.
As an example of this implementation, the image information may be input into a backbone network, features are extracted from the image information via the backbone network to obtain a feature map, and the positions of the human body key points are then detected based on the feature map, thereby obtaining the body posture. The backbone network may adopt a network structure such as ResNet or MobileNet, which is not limited here.
After the body posture of the cabin occupant is determined, whether it is a preset abnormal body posture can be judged; if so, the health condition of the cabin occupant is determined to be abnormal. The preset abnormal body posture includes at least one of the body leaning to one side, the head tilting downward, and the face turned upward. The body posture of an occupant reflects the occupant's physical health to a certain extent: when an occupant is injured in an accident, the body cannot maintain an upright posture, and abnormal postures such as leaning to one side, the head tilting downward, or the face turned upward may appear. These body postures can accurately indicate that the occupant's current health condition is abnormal.
Therefore, the abnormal body postures can be preset, after the body postures of the passengers in the cabin are determined, whether the body states of the passengers in the cabin are the preset abnormal body postures can be judged, and under the condition that the body postures are the preset abnormal body postures, the health conditions of the passengers in the cabin are determined to be abnormal conditions.
In addition, in order to improve the accuracy of abnormal condition detection, it may be determined that the body posture of the cabin occupant is an abnormal body posture in the case where the body posture is a preset abnormal body posture and the duration of the abnormal body posture exceeds a set duration.
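The duration check above can be sketched as follows. The posture labels, the per-observation `(timestamp, posture)` format, and the 3.0-second set duration are illustrative assumptions; the set duration would be chosen according to actual requirements.

```python
# Illustrative preset abnormal postures (per the examples in the text) and
# an assumed set duration of 3.0 seconds.
ABNORMAL_POSTURES = {"lean_to_side", "head_down", "face_up"}
SET_DURATION = 3.0

def detect_abnormal_posture(observations):
    """observations: list of (timestamp_seconds, posture_label) in time order.

    Returns True only if an abnormal posture persists longer than SET_DURATION;
    a return to a normal posture resets the timer.
    """
    start = None
    for t, posture in observations:
        if posture in ABNORMAL_POSTURES:
            if start is None:
                start = t          # abnormal streak begins
            if t - start > SET_DURATION:
                return True        # persisted beyond the set duration
        else:
            start = None           # streak broken; reset
    return False
```

Filtering on duration in this way suppresses momentary postures (for example, an occupant briefly bending down) that would otherwise be misreported as abnormal.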
In the embodiment of the present disclosure, the body posture of the occupant in the cabin is determined according to the image information, and the body posture is determined to be an abnormal body posture when it is a preset abnormal body posture and its duration exceeds the set duration. The abnormal condition of the occupant can thereby be accurately determined, so that the injury severity level of the occupant can subsequently be determined accurately and sent to the emergency call center, allowing the emergency call center to give priority to rescuing seriously injured occupants.
In one possible implementation, the method further comprises determining that a fracture condition exists for the in-cabin occupant if it is determined that the body posture of the in-cabin occupant is a preset fracture posture.
In the case of a fracture, the body posture of the occupant differs significantly from a normal body posture; for example, a bend at a non-joint position of the occupant's body indicates a fracture. If there is a bend at a non-joint position of the arm, it may be determined that the occupant's arm is fractured; likewise, if there is a bend at a non-joint position of the leg, it may be determined that the occupant's leg is fractured.
In one possible implementation, the method further comprises determining vital sign indicators of the occupant based on the image information, the vital sign indicators including at least one of respiratory rate, blood pressure, heart rate, and transmitting the vital sign indicators to an emergency call center.
The vital sign indicators may be determined based on physiological sensing information acquired by a physiological sensor. Taking a millimeter-wave radar as an example, the radar monitors the respiratory and heart rate by emitting electromagnetic waves and detecting the frequency content of the echo signal; the physiological sensing information is then the echo signal of the millimeter-wave radar. A millimeter-wave radar can detect minute vibrations and displacements of the human body by measuring changes in the phase of the echo signal, and in one example, the frequencies of the heartbeat and of breathing can be determined from the measured amplitude of chest vibration.
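A heavily simplified sketch of the last step: given a chest displacement signal (such as one derived from the radar echo phase), the breathing frequency can be estimated by counting zero crossings, with one cycle per two crossings. The pure sinusoid stands in for the real, noisy signal, and the whole pipeline from echo phase to displacement is assumed away here.

```python
import math

def breaths_per_minute(signal, sample_rate_hz):
    """Estimate the dominant frequency of a zero-mean displacement signal
    from its zero crossings: one full cycle produces two crossings."""
    crossings = sum(
        1 for a, b in zip(signal, signal[1:])
        if a < 0 <= b or b < 0 <= a        # sign change between samples
    )
    duration_s = len(signal) / sample_rate_hz
    return (crossings / 2) / duration_s * 60

# Synthetic 0.25 Hz (15 breaths/min) chest displacement, sampled at 10 Hz for 40 s.
chest = [math.sin(2 * math.pi * 0.25 * i / 10 + 0.3) for i in range(400)]
```

Real radar vital-sign processing would band-pass the signal and separate the heartbeat and respiration components before any rate estimate, but the frequency-counting idea is the same.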
After the vital sign index is determined, the vital sign index can be sent to the emergency call center, so that the emergency call center reasonably dispatches rescue force to carry out targeted rescue on passengers.
In one possible implementation, the method further includes determining a severity level of injury to the occupant in the cabin based on the determined at least one of bleeding, abnormal body posture, vital sign indicators of the occupant, and transmitting the severity level of injury to an emergency call center.
The injury severity level is used to characterize the severity of the injury to the occupant and may be graded, for example, on a scale of 0-10, where level 0 indicates that the occupant is not injured; the higher the level, the more severe the injury.
There may be one or more injury severity levels. For example, a single injury level may be used to comprehensively characterize the severity of the occupant's bleeding, abnormal body posture, vital sign indicators, and so on; alternatively, multiple severity levels may be used to separately characterize each aspect of the occupant's injury, for example, a bleeding severity level, a fracture severity level, a vital sign weakness level, and so on may be preset.
The bleeding severity level can be determined according to the bleeding area and the bleeding position: the bleeding area is positively correlated with the bleeding severity level, and the bleeding severity level is higher when the bleeding position is located at an important part such as the head or abdomen. The fracture severity level is positively correlated with the degree of bending of the bones and with the number of fractured bones, and the fracture severity level is higher when the fracture is located at an important part such as the head. The vital sign weakness level is inversely correlated with the vital sign indicators: the lower the respiratory rate, blood pressure, and heart rate, the higher the vital sign weakness level.
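A minimal sketch of such a per-indicator mapping follows. The numeric scale, the area-to-level mapping, and the critical-location bonus are all illustrative assumptions; the disclosure only specifies the correlations, not concrete formulas.

```python
def bleeding_severity_level(area_cm2, location):
    """Map bleeding area (positively correlated) and bleeding location to a
    0-10 level; critical locations such as the head or abdomen raise the
    level. The scaling constants here are illustrative assumptions."""
    critical_locations = {"head", "abdomen"}
    level = min(8.0, area_cm2 / 10.0)   # area contribution, capped at 8
    if location in critical_locations:
        level = min(10.0, level + 2.0)  # bonus for critical body parts
    return round(level, 1)

minor = bleeding_severity_level(15.0, "arm")    # small wound on the arm
severe = bleeding_severity_level(60.0, "head")  # large wound on the head
```

A fracture severity level or vital sign weakness level could be mapped analogously, increasing with the number of fractured bones or with decreasing vital sign readings.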
In the case where a single injury severity level comprehensively represents the severity of the occupant's bleeding, abnormal body posture, vital sign indicators, and so on, the individual severities may be weighted and averaged to obtain an integrated injury severity level, or the injury severity level may be determined by the most serious one among the occupant's bleeding, abnormal body posture, and vital sign indicators.
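The two fusion strategies just described can be sketched as follows; the weights and the 0-10 example levels are illustrative assumptions.

```python
def combined_injury_level(levels, weights=None, mode="weighted"):
    """Fuse per-indicator severity levels (0-10 scale) into one overall
    level: mode="weighted" takes a weighted average of the indicators,
    mode="max" takes the most serious one."""
    if mode == "max":
        return max(levels)
    if weights is None:
        weights = [1.0] * len(levels)
    return sum(l * w for l, w in zip(levels, weights)) / sum(weights)

# Bleeding level 8, fracture level 4, vital sign weakness level 6,
# with bleeding weighted highest.
avg = combined_injury_level([8, 4, 6], weights=[0.5, 0.2, 0.3])
worst = combined_injury_level([8, 4, 6], mode="max")
```

The weighted average smooths disagreement between indicators, while the max rule guarantees that one critical indicator is never averaged away; which behavior is preferable depends on dispatch policy.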
In an embodiment of the present disclosure, the severity of injury of the occupant in the cabin is determined based on at least one of determined bleeding conditions, abnormal body posture, vital sign indicators of the occupant, and the severity of injury is transmitted to an emergency call center. Therefore, the emergency call center can rescue the passengers of the calling party with high injury severity level preferentially according to the injury severity level, reduces or omits the inquiry process, provides faster rescue, and reduces personal and property loss.
An application scenario of the embodiments of the present disclosure is described below. In this scenario, after an accident occurs, an emergency call is triggered and image information of the occupants in the cabin is acquired; no obvious bleeding is detected and the body postures are normal, so the casualties are judged to be light. After the emergency call is connected, the call center staff and the occupants simply confirm the situation, and no additional rescue is needed.
A further application scenario of the embodiments of the present disclosure is described below. In this scenario, after an accident occurs, an emergency call is triggered and image information of the occupants in the cabin is acquired; massive bleeding is detected on the body surface of an occupant, the limbs are motionless, and breathing is weak, so the injury severity level is determined to be high and is sent to the emergency call center. The emergency call center directly dispatches an ambulance to the accident site, and call center personnel communicate with the vehicle to further assess the situation.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle logic, which, for brevity, are not repeated in the present disclosure. It will also be appreciated by those skilled in the art that, in the above methods of the embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure further provides an apparatus, an electronic device, a computer readable storage medium, and a program for transmitting information to an emergency call center for a vehicle, each of which may be used to implement any of the information transmission methods provided in the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method parts, which are not repeated here.
Fig. 2 shows a block diagram of an information transmission apparatus according to an embodiment of the present disclosure, and as shown in fig. 2, the apparatus 20 includes:
an image information acquisition unit 21 for acquiring image information of an occupant in the cabin in response to an emergency call being triggered;
A bleeding situation detection unit 22 for detecting bleeding situations of passengers in the cabin based on the image information;
and a bleeding situation transmitting unit 23 for transmitting the bleeding situation to the emergency call center in response to detecting the bleeding situation.
In one possible implementation manner, the bleeding situation detection unit includes:
the passenger detection subunit is used for carrying out face detection and/or human body detection on the image information and determining passengers in the cabin;
and the first bleeding situation determination subunit is used for detecting blood on the face and/or the body surface of the passenger and determining the bleeding situation of the passenger in the cabin.
In one possible implementation manner, the bleeding situation detection unit is configured to detect, based on the image information, whether the occupant is bleeding according to color information of blood and shape information of blood flow.
In one possible implementation manner, the bleeding situation detection unit includes:
a body surface region detection subunit configured to detect a body surface region of an occupant in the cabin based on the image information;
A detection region dividing subunit configured to divide a body surface region of the occupant into a plurality of detection regions;
A region detection result determining subunit, configured to detect blood information in each detection region, and obtain a region detection result of each detection region;
And a second bleeding situation determination subunit configured to determine a bleeding situation of the occupant based on the area detection results of the detection areas.
In a possible implementation manner, the body surface area detection subunit is configured to detect a face surface area of an occupant in the cabin based on the image information;
the detection region dividing subunit is configured to divide a face surface region of the occupant into a plurality of detection regions.
In a possible implementation manner, the region detection result determining subunit is configured to determine, based on the shape and the area of the blood flow in each detection region, a first confidence that a bleeding situation exists in each detection region; and, in response to determining that there is connected blood flow at the junction of a first detection region of the detection regions and an adjacent second detection region, raise the confidence of the first detection region and the second detection region to a second confidence;
the second bleeding situation determination subunit is configured to determine that the occupant is bleeding if the first confidence or the second confidence exceeds a confidence threshold, and to determine a severity of bleeding based on the area of blood flow in each detection region, the severity of bleeding being positively correlated with the sum of the areas of blood flow in the detection regions.
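A minimal sketch of this region-wise confidence scheme follows, using plain dictionaries in place of the image pipeline; the region names, confidences, areas, and thresholds are all assumed example values.

```python
def detect_bleeding(region_conf, connected_pairs, boost, threshold, region_area):
    """region_conf maps region id -> first confidence; connected_pairs lists
    region-id pairs whose shared boundary shows a continuous blood flow.
    Raise the confidence of both regions in each connected pair to the boost
    value (the second confidence), then decide bleeding and compute severity
    as the summed blood flow area of regions above the threshold."""
    conf = dict(region_conf)
    for a, b in connected_pairs:
        conf[a] = max(conf[a], boost)
        conf[b] = max(conf[b], boost)
    bleeding_regions = [r for r, c in conf.items() if c > threshold]
    severity = sum(region_area[r] for r in bleeding_regions)
    return bool(bleeding_regions), severity

# Three face regions; blood flow crosses the forehead / left-cheek boundary.
regions = {"forehead": 0.4, "left_cheek": 0.5, "right_cheek": 0.1}
is_bleeding, severity = detect_bleeding(
    regions, [("forehead", "left_cheek")], boost=0.8, threshold=0.6,
    region_area={"forehead": 12.0, "left_cheek": 7.5, "right_cheek": 0.0})
```

Neither forehead nor left cheek alone passes the threshold, but the connected blood flow between them raises both to the second confidence, so bleeding is reported; this is the cross-region reinforcement the subunit describes.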
In one possible implementation manner, the bleeding situation detection unit includes:
a bleeding part detection subunit for determining a body part of bleeding and a direction of blood flow in response to detection of bleeding of an occupant in the cabin based on the image information;
a bleeding part detection subunit, configured to use a body part where a blood flow start end is located as a bleeding part based on the bleeding body part and a blood flow direction.
In one possible implementation, the apparatus further includes:
a body posture determining unit configured to determine a body posture of the occupant in the cabin based on the image information;
An abnormal body posture determining unit configured to determine that the body posture of the cabin occupant is an abnormal body posture in a case where the body posture is a preset abnormal body posture and a duration of the abnormal body posture exceeds a set duration.
In one possible implementation, the apparatus further includes:
and the fracture condition detection unit is used for determining that the fracture condition exists in the cabin passenger under the condition that the body posture of the cabin passenger is determined to be the preset fracture posture.
In one possible implementation, the apparatus further includes:
a vital sign index determining unit configured to determine a vital sign index of the occupant based on the image information, the vital sign index including at least one of:
Respiratory rate, blood pressure, heart rate;
and the vital sign index sending unit is used for sending the vital sign index to the emergency call center.
In one possible implementation, the apparatus further includes:
An injury severity level determination unit configured to determine an injury severity level of an occupant in a cabin based on the determined at least one of bleeding condition, abnormal body posture, vital sign index of the occupant;
And the injury severity level transmitting unit is used for transmitting the injury severity level to the emergency call center.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementation and technical effects of the functions or modules may refer to the descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a volatile or nonvolatile computer readable storage medium.
The embodiment of the disclosure also provides electronic equipment, which comprises a processor and a memory for storing instructions executable by the processor, wherein the processor is configured to call the instructions stored by the memory so as to execute the method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 3 illustrates a block diagram of an electronic device 800, according to an embodiment of the disclosure. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to FIG. 3, the electronic device 800 can include one or more of a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen between the electronic device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to, a home button, a volume button, an activate button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
Fig. 4 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server. Referring to FIG. 4, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as the Microsoft Server operating system (Windows Server TM), the apple Inc. promoted graphical user interface-based operating system (Mac OS X TM), the multi-user, multi-process computer operating system (Unix TM), the free and open source Unix-like operating system (Linux TM), the open source Unix-like operating system (FreeBSD TM), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium include a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., an optical pulse through a fiber optic cable), or an electrical signal transmitted through a wire.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, Field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of the computer readable program instructions, the electronic circuitry being able to execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (12)

1.一种用于车辆的向紧急呼叫中心发送信息的方法,其特征在于,包括:1. A method for a vehicle to send information to an emergency call center, comprising: 响应于紧急呼叫被触发,获取舱内乘员的影像信息;In response to an emergency call being triggered, acquiring image information of the occupants in the cabin; 基于所述影像信息,检测舱内乘员的流血情况;Based on the image information, detecting bleeding of the occupants in the cabin; 响应于检测到流血情况,将所述流血情况发送给紧急呼叫中心;In response to detecting the bleeding condition, transmitting the bleeding condition to an emergency call center; 所述基于所述影像信息,检测舱内乘员的流血情况,包括:The detecting the bleeding condition of the occupant in the cabin based on the image information includes: 基于所述影像信息,检测所述舱内的乘员的体表区域;Based on the image information, detecting a body surface area of an occupant in the cabin; 将所述乘员的体表区域划分为多个检测区域;Dividing the body surface area of the occupant into a plurality of detection areas; 在每个所述检测区域中检测血液信息,得到各所述检测区域的区域检测结果;Detecting blood information in each of the detection areas to obtain a regional detection result of each of the detection areas; 基于各所述检测区域的区域检测结果,确定所述乘员的流血情况;determining the bleeding condition of the occupant based on the regional detection results of each of the detection areas; 所述在每个所述检测区域中检测血液信息,得到各所述检测区域的区域检测结果,包括:The detecting of blood information in each of the detection areas to obtain the area detection results of each of the detection areas includes: 基于每个检测区域中血流的形状和面积,确定每个所述检测区域存在流血情况的第一置信度;Determining a first confidence level of the presence of bleeding in each detection area based on the shape and area of the blood flow in each detection area; 确定各相邻的检测区域之间是否存在相接的血流;Determine whether there is a connected blood flow between adjacent detection areas; 响应于确定所述检测区域中的第一检测区域与相邻的第二检测区域存在相接的血流,将所述多个第一检测区域和所述第二检测区域的置信度升高为第二置信度;In response to determining that a first detection area in the detection areas has a connected blood flow with an adjacent second detection area, raising the confidence levels of the plurality of first detection areas and the second detection area to a second confidence level; 
所述基于各所述检测区域的区域检测结果,确定所述乘员的流血情况,包括:The determining the bleeding condition of the occupant based on the regional detection results of each of the detection areas includes: 在所述第一置信度或所述第二置信度超过置信度阈值的情况下,确定所述乘员流血;When the first confidence level or the second confidence level exceeds a confidence level threshold, determining that the occupant is bleeding; 基于所述每个检测区域中的血流的面积,确定流血的严重程度,所述流血的严重程度与所述各所述检测区域的血流的面积和正相关。The severity of bleeding is determined based on the area of blood flow in each detection region, and the severity of bleeding is positively correlated with the sum of the areas of blood flow in each detection region. 2.根据权利要求1所述方法,其特征在于,所述基于所述影像信息,检测舱内乘员的流血情况,包括:2. The method according to claim 1, characterized in that the detecting the bleeding condition of the occupant in the cabin based on the image information comprises: 对所述影像信息进行人脸检测和/或人体检测,确定出所述舱内的乘员;Performing face detection and/or body detection on the image information to determine the occupants in the cabin; 在所述乘员的面部和/或体表进行血液检测,确定舱内乘员的流血情况。A blood test is performed on the face and/or body of the occupant to determine the bleeding condition of the occupant in the cabin. 3.根据权利要求1或2任一所述方法,其特征在于,所述基于所述影像信息,检测舱内乘员的流血情况,包括:3. The method according to any one of claims 1 or 2, characterized in that the detecting the bleeding condition of the occupant in the cabin based on the image information comprises: 基于血液的颜色信息和血流的形状信息,基于所述影像信息检测所述乘员是否流血。Based on the blood color information and the blood flow shape information, it is detected based on the image information whether the occupant is bleeding. 4.根据权利要求1所述的方法,其特征在于,所述基于所述影像信息,检测所述舱内的乘员的体表区域,包括:4. 
The method according to claim 1, characterized in that the detecting the body surface area of the occupant in the cabin based on the image information comprises: 基于所述影像信息,检测舱内的乘员的人脸表面区域;Detecting a facial surface area of an occupant in the cabin based on the image information; 所述将所述乘员的体表区域划分为多个检测区域,包括:The dividing the body surface area of the occupant into a plurality of detection areas comprises: 将所述乘员的人脸表面区域划分为多个检测区域。The facial surface area of the occupant is divided into a plurality of detection areas. 5.根据权利要求1-2任一所述方法,其特征在于,所述基于所述影像信息,检测舱内乘员的流血情况,包括:5. The method according to any one of claims 1-2, characterized in that the detecting the bleeding condition of the occupant in the cabin based on the image information comprises: 响应于基于所述影像信息检测到所述舱内的乘员流血,确定流血的身体部位以及血流的方向;In response to detecting bleeding of the occupant in the cabin based on the image information, determining a bleeding body part and a direction of blood flow; 基于所述流血的身体部位以及血流的方向,将血流的起始端所在的身体部位,作为出血部位。Based on the bleeding body part and the direction of blood flow, the body part where the starting end of the blood flow is located is used as the bleeding part. 6.根据权利要求1-2任一所述方法,其特征在于,所述方法还包括:6. The method according to any one of claims 1-2, characterized in that the method further comprises: 根据所述影像信息,确定所述舱内乘员的身体姿态;Determining the body posture of the occupant in the cabin according to the image information; 在所述身体姿态为预设的异常身体姿态、且所述异常身体姿态的持续时长超过设定时长的情况下,确定所述舱内乘员的身体姿态为异常身体姿态。When the body posture is a preset abnormal body posture and the duration of the abnormal body posture exceeds a set duration, it is determined that the body posture of the cabin occupant is an abnormal body posture. 7.根据权利要求6所述方法,其特征在于,所述方法还包括:7. The method according to claim 6, characterized in that the method further comprises: 在确定所述舱内乘员的身体姿态为预设的骨折姿态的情况下,确定所述舱内乘员存在骨折状况。When it is determined that the body posture of the cabin occupant is a preset fracture posture, it is determined that the cabin occupant has a fracture. 
8. The method according to any one of claims 1-2, characterized in that the method further comprises: determining a vital sign indicator of the occupant based on the image information, the vital sign indicator including at least one of: respiratory rate, blood pressure, heart rate; and sending the vital sign indicator to an emergency call center.

9. The method according to any one of claims 1-2, characterized in that the method further comprises: determining the injury severity level of the occupant in the cabin based on at least one of the determined bleeding condition, abnormal body posture, and vital sign indicators of the occupant; and sending the injury severity level to an emergency call center.

10. A device for a vehicle to send information to an emergency call center, characterized in that it comprises: an image information acquisition unit, configured to acquire image information of the occupant in the cabin in response to an emergency call being triggered; a bleeding condition detection unit, configured to detect the bleeding condition of the occupant in the cabin based on the image information; and a bleeding condition sending unit, configured to send the bleeding condition to an emergency call center in response to the bleeding condition being detected. The bleeding condition detection unit comprises: a body surface area detection subunit, configured to detect the body surface area of the occupant in the cabin based on the image information; a detection area division subunit, configured to divide the body surface area of the occupant into a plurality of detection areas; a region detection result determination subunit, configured to detect blood information in each detection area to obtain the region detection result of each detection area; and a second bleeding condition determination subunit, configured to determine the bleeding condition of the occupant based on the region detection results of the detection areas. The region detection result determination subunit is configured to determine, based on the shape and area of the blood flow in each detection area, a first confidence level that bleeding exists in the detection area; determine whether connected blood flow exists between adjacent detection areas; and, in response to determining that a first detection area and an adjacent second detection area have connected blood flow, raise the confidence levels of the first detection area and the second detection area to a second confidence level. The second bleeding condition determination subunit is configured to determine that the occupant is bleeding when the first confidence level or the second confidence level exceeds the confidence level threshold, and to determine the severity of the bleeding based on the area of blood flow in each detection area, the severity of the bleeding being positively correlated with the sum of the areas of blood flow in the detection areas.

11. An electronic device, characterized in that it comprises: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to call the instructions stored in the memory to execute the method according to any one of claims 1 to 9.

12. A computer-readable storage medium having computer program instructions stored thereon, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 9.
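The abnormal-posture condition of claim 6 (a posture counts as abnormal only once it is a preset abnormal posture and has persisted longer than a set duration) can be sketched as a small stateful check over per-frame posture estimates. The posture labels and the 3-second threshold below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the duration-gated abnormal-posture check in claim 6.
# Posture labels and the set duration are assumed for illustration.
ABNORMAL_POSTURES = {"slumped", "head_on_wheel"}  # hypothetical preset postures
MIN_DURATION_S = 3.0                              # assumed set duration (seconds)

class PostureMonitor:
    def __init__(self):
        self._current = None  # posture seen in the previous frame
        self._since = None    # timestamp when that posture first appeared

    def update(self, posture, timestamp_s):
        """Feed one per-frame posture estimate.

        Returns True only when the posture is a preset abnormal posture
        and has persisted past the set duration; any posture change
        resets the timer.
        """
        if posture != self._current:
            self._current = posture
            self._since = timestamp_s
        return (posture in ABNORMAL_POSTURES
                and timestamp_s - self._since >= MIN_DURATION_S)
```

The duration gate suppresses one-off misdetections and brief movements: a single "slumped" frame does not trigger, but the same posture held past the assumed 3 seconds does.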
CN202111016361.1A 2021-08-31 2021-08-31 Method and device for transmitting information to emergency call center for vehicle Active CN113743290B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN202111016361.1A CN113743290B (en) 2021-08-31 2021-08-31 Method and device for transmitting information to emergency call center for vehicle
PCT/CN2022/078010 WO2023029407A1 (en) 2021-08-31 2022-02-25 Method and apparatus for vehicle to send information to emergency call center
JP2024513513A JP2024538860A (en) 2021-08-31 2022-02-25 Method and apparatus for transmitting information to an emergency call center for a vehicle
KR1020247009195A KR20240046910A (en) 2021-08-31 2022-02-25 Method and device for transmitting information to vehicle emergency call center

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111016361.1A CN113743290B (en) 2021-08-31 2021-08-31 Method and device for transmitting information to emergency call center for vehicle

Publications (2)

Publication Number Publication Date
CN113743290A (en) 2021-12-03
CN113743290B (en) 2025-01-17

Family

ID=78734440

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111016361.1A Active CN113743290B (en) 2021-08-31 2021-08-31 Method and device for transmitting information to emergency call center for vehicle

Country Status (4)

Country Link
JP (1) JP2024538860A (en)
KR (1) KR20240046910A (en)
CN (1) CN113743290B (en)
WO (1) WO2023029407A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113743290B (en) * 2021-08-31 2025-01-17 上海商汤临港智能科技有限公司 Method and device for transmitting information to emergency call center for vehicle

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590762A (en) * 2017-08-24 2018-01-16 成都安程通科技有限公司 System for automatic reporting of vehicle-mounted accidents
CN109690609A (en) * 2017-03-08 2019-04-26 欧姆龙株式会社 Passenger's auxiliary device, method and program
CN113168772A (en) * 2018-11-13 2021-07-23 索尼集团公司 Information processing apparatus, information processing method, and program

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014128273A1 (en) * 2013-02-21 2014-08-28 Iee International Electronics & Engineering S.A. Imaging device based occupant monitoring system supporting multiple functions
JP6351323B2 (en) * 2014-03-20 2018-07-04 オリンパス株式会社 Image processing apparatus, image processing method, and image processing program
CN105760885A (en) * 2016-02-22 2016-07-13 中国科学院自动化研究所 Method for implementing a blood image detection classifier, blood image detection method, and blood image detection system
CN106373137B (en) * 2016-08-24 2019-01-04 安翰光电技术(武汉)有限公司 Hemorrhage of digestive tract image detecting method for capsule endoscope
CN107332875A (en) * 2017-05-27 2017-11-07 上海与德科技有限公司 Ride-hailing vehicle monitoring method and system, and cloud server
CN107563933A (en) * 2017-08-24 2018-01-09 成都安程通科技有限公司 Method for automatic reporting of vehicle-mounted accidents
KR102578072B1 (en) * 2017-12-11 2023-09-14 삼성메디슨 주식회사 Ultrasound diagnositic apparatus and controlling mehtod of the same
US11116587B2 (en) * 2018-08-13 2021-09-14 Theator inc. Timeline overlay on surgical video
US11210779B2 (en) * 2018-09-07 2021-12-28 Siemens Healthcare Gmbh Detection and quantification for traumatic bleeding using dual energy computed tomography
JP7457011B2 (en) * 2019-05-22 2024-03-27 パナソニックホールディングス株式会社 Anomaly detection method, anomaly detection program, anomaly detection device, server device, and information processing method
CN111402218A (en) * 2020-03-11 2020-07-10 北京深睿博联科技有限责任公司 Cerebral hemorrhage detection method and device
CN111462009B (en) * 2020-03-31 2023-04-07 上海大学 Bleeding point prediction method based on similarity of divided rectangular areas
CN111784990A (en) * 2020-05-28 2020-10-16 上海擎感智能科技有限公司 Vehicle accident rescue method and system and computer storage medium
CN111652114B (en) * 2020-05-29 2023-08-25 深圳市商汤科技有限公司 Object detection method and device, electronic equipment and storage medium
CN111784686A (en) * 2020-07-20 2020-10-16 山东省肿瘤防治研究院(山东省肿瘤医院) Dynamic intelligent detection method, system and readable storage medium for endoscope bleeding area
CN113011290A (en) * 2021-03-03 2021-06-22 上海商汤智能科技有限公司 Event detection method and device, electronic equipment and storage medium
CN113766479A (en) * 2021-08-31 2021-12-07 上海商汤临港智能科技有限公司 Method and device for transmitting passenger information to rescue call center for vehicle
CN113763670A (en) * 2021-08-31 2021-12-07 上海商汤临港智能科技有限公司 Alarm method and device, electronic equipment and storage medium
CN113743290B (en) * 2021-08-31 2025-01-17 上海商汤临港智能科技有限公司 Method and device for transmitting information to emergency call center for vehicle


Also Published As

Publication number Publication date
JP2024538860A (en) 2024-10-24
WO2023029407A1 (en) 2023-03-09
KR20240046910A (en) 2024-04-11
CN113743290A (en) 2021-12-03

Similar Documents

Publication Publication Date Title
CN112141119B (en) Intelligent driving control method and device, vehicle, electronic equipment and storage medium
US20210133468A1 (en) Action Recognition Method, Electronic Device, and Storage Medium
CN112124073B (en) Intelligent driving control method and device based on alcohol detection
CN113486759B (en) Dangerous action recognition method and device, electronic equipment and storage medium
US20210406523A1 (en) Method and device for detecting living body, electronic device and storage medium
WO2021159630A1 (en) Vehicle commuting control method and apparatus, electronic device, medium, and vehicle
US20160066127A1 (en) Method for controlling and an electronic device thereof
CN113763670A (en) Alarm method and device, electronic equipment and storage medium
US20220316260A1 (en) Trunk Control Method and Apparatus, and Storage Medium
CN112001348A (en) Method and device for detecting passenger in vehicle cabin, electronic device and storage medium
CN113766479A (en) Method and device for transmitting passenger information to rescue call center for vehicle
CN111243105B (en) Augmented reality processing method and device, storage medium and electronic equipment
US9924090B2 (en) Method and device for acquiring iris image
CN113486760A (en) Object speaking detection method and device, electronic equipment and storage medium
EP4414965A1 (en) Virtual parking space determination method, display method and apparatus, device, medium, and program
CN108482372A (en) Driving speed control method and device, electronic device and readable storage medium
CN113920492A (en) Method and device for detecting people in vehicle, electronic equipment and storage medium
EP3872753A1 (en) Wrinkle detection method and terminal device
CN113743290B (en) Method and device for transmitting information to emergency call center for vehicle
CN113060144A (en) Distraction reminding method and device, electronic equipment and storage medium
CN110647892B (en) Traffic violation alarm prompt method, terminal and computer readable storage medium
CN110363695B (en) Robot-based crowd queue control method and device
CN108639056A (en) Driving state detection method and device and mobile terminal
CN113505674B (en) Face image processing method and device, electronic equipment and storage medium
CN115035500A (en) Vehicle door control method and device, electronic equipment and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
REG: Reference to a national code (country of ref document: HK; ref legal event code: DE; ref document number: 40058731)
GR01: Patent grant