CN114550951B - A rapid medical treatment method, device, computer equipment and storage medium - Google Patents
A rapid medical treatment method, device, computer equipment and storage medium
- Publication number
- CN114550951B CN114550951B CN202210171423.4A CN202210171423A CN114550951B CN 114550951 B CN114550951 B CN 114550951B CN 202210171423 A CN202210171423 A CN 202210171423A CN 114550951 B CN114550951 B CN 114550951B
- Authority
- CN
- China
- Prior art keywords
- accident
- injury
- photo
- detection
- interface
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H80/00—ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/20—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
Abstract
The application discloses a rapid medical treatment method, a rapid medical treatment device, computer equipment and a storage medium, and belongs to the technical field of artificial intelligence. The method comprises: acquiring an accident handling flow; displaying a first operation step at the terminal of the case reporter; acquiring an accident photo in response to the case reporter's triggering operation; importing the accident photo into a human injury detection model to obtain an injury detection result; displaying a second operation step and the injury detection result at the terminal of a medical staff member; and, in response to the medical staff member's triggering operation, generating a wounded-person first-aid operation step and displaying it at the terminal of the case reporter. In addition, the application also relates to blockchain technology, and the accident photo can be stored in a blockchain. Guided by the operation steps of the accident handling flow, the medical staff and the case reporter are prompted to communicate in real time and complete the steps of the medical treatment flow, so that the complete medical treatment flow is finished rapidly and the speed at which the wounded receive treatment is improved.
Description
Technical Field
The application belongs to the technical field of artificial intelligence, and particularly relates to a rapid medical treatment processing method, a rapid medical treatment processing device, computer equipment and a storage medium.
Background
With rapid socioeconomic development, the number of motor vehicles keeps growing and traffic accidents occur more frequently, yet the current handling flow for traffic accidents is tedious, time-consuming and labor-intensive. For example, when a road traffic accident occurs and someone is injured, the party involved or a witness must complete procedures such as reporting to the police, seeking medical attention and filing an insurance claim, and must separately communicate, confirm and process matters with the traffic police, hospitals, insurance companies and other departments.
Disclosure of Invention
The embodiment of the application aims to provide a rapid medical treatment method, a rapid medical treatment device, computer equipment and a storage medium, so as to solve the technical problem that the conventional traffic accident medical treatment process is tedious, time-consuming and labor-intensive, and easily delays treatment.
In order to solve the technical problems, the embodiment of the application provides a rapid medical treatment method, which adopts the following technical scheme:
A rapid hospitalization processing method, comprising:
Receiving an accident handling instruction, and acquiring operation steps of an accident handling process according to the accident handling instruction, wherein the operation steps of the accident handling process comprise a first operation step and a second operation step;
Generating a first operation guide interface at a terminal of a first user, and displaying the first operation step on the first operation guide interface, wherein the first operation step at least comprises shooting an accident photo;
responding to the triggering operation of the first user on the first operation guiding interface, and obtaining an accident photo;
the accident photo is imported into a pre-trained human injury detection model to obtain injury detection results of injured persons;
generating a second operation guide interface at a terminal of a second user, and displaying the second operation step and the injury detection result on the second operation guide interface;
responding to the triggering operation of the second user on the second operation guiding interface, and generating a wounded-person first-aid operation step;
Displaying the wounded-person first-aid operation step on the first operation guiding interface.
Further, the step of obtaining an accident photo in response to the triggering operation of the first user on the first operation guiding interface specifically includes:
responding to the triggering operation of the first user on the screenshot control, and generating a view finding frame on the first operation guiding interface;
and obtaining the accident photo by intercepting the picture content in the view-finding frame.
Further, the accident photo includes a first accident photo and a second accident photo, and after the step of obtaining the accident photo by intercepting the picture content in the viewfinder, the method further includes:
Receiving the first accident photo, and carrying out content identification on the first accident photo to obtain the limb conditions in the first accident photo;
analyzing the limb conditions in the first accident photo to obtain first injury data of the injured person, wherein the first injury data is body-surface injury data;
determining a detection action video based on the first injury data, and displaying the detection action video on the first operation guide interface;
capturing, through the viewfinder, footage of the wounded person completing the actions in the detection action video, so as to obtain an injury detection video;
and extracting key frames from the injury detection video to obtain the second accident photo.
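The patent does not specify how key frames are extracted from the injury detection video. A minimal sketch, assuming frames arrive as flat lists of grayscale pixel values and that a frame is kept whenever its mean absolute difference from the last kept frame exceeds a threshold (both assumptions are illustrative, not the patent's method):

```python
def extract_keyframes(frames, threshold=10.0):
    """Select keyframe indices by mean absolute pixel difference.

    frames: list of equally-sized lists of grayscale pixel values (0-255).
    The first frame is always kept; the thresholding rule is a
    hypothetical choice standing in for the undisclosed extraction step.
    """
    if not frames:
        return []
    keep = [0]
    last = frames[0]
    for i, frame in enumerate(frames[1:], start=1):
        # mean absolute difference against the last kept frame
        diff = sum(abs(a - b) for a, b in zip(frame, last)) / len(frame)
        if diff > threshold:
            keep.append(i)
            last = frame
    return keep
```

In practice the kept indices would be used to pull the corresponding still images out of the video as candidate second accident photos.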
Further, the step of importing the accident photo into a pre-trained human injury detection model to obtain injury detection results of injured persons specifically includes:
Importing the second accident photo into the pre-trained human injury detection model to obtain second injury data of the wounded person, wherein the second injury data is joint injury data;
Generating a wound detection result of the wounded based on the first wound data and the second wound data.
Further, the step of importing the second accident photo into the pre-trained human injury detection model to obtain second injury data of the injured person specifically includes:
Detecting the joint point of the second accident photo to obtain the human body joint point of the wounded person;
Labeling human body joint points of the wounded person to obtain human body joint point information;
and carrying out joint injury prediction on the wounded person based on the human body joint point information by using the human body injury detection model to obtain second injury data of the wounded person.
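The joint-injury prediction itself is not disclosed. As an illustration of the kind of geometry that labeled human body joint points enable, the hypothetical sketch below computes the angle at a joint from three keypoints and flags the joint when the angle falls outside an assumed normal range (the range and the flagging rule are placeholders, not the patent's model):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at vertex b, formed by keypoints a-b-c.

    Points are (x, y) tuples and are assumed distinct.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # clamp to guard against floating-point drift outside [-1, 1]
    cos = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(cos))

def flag_joint(a, b, c, lo=30.0, hi=180.0):
    """Return True if the joint angle is outside an assumed normal range."""
    ang = joint_angle(a, b, c)
    return not (lo <= ang <= hi)
```

A real model would of course combine many such keypoint features rather than a single angle check.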
Further, after the step of displaying the wounded-person first-aid operation step on the first operation guidance interface, the method further comprises:
Acquiring an accident responsibility determination operation step, and displaying the accident responsibility determination operation step on the first operation guiding interface;
responding to the first user's triggering operation for accident responsibility determination on the first operation guiding interface, carrying out content identification on the accident photo, determining the accident vehicles in the accident photo, and acquiring the driving information of the accident vehicles;
retrieving surveillance video from a preset time period before and after the accident, and identifying the driving track of the accident vehicle based on the surveillance video;
and acquiring a pre-constructed traffic control law knowledge graph, and judging accident responsibility of the accident vehicle based on the driving track of the accident vehicle, the driving information of the accident vehicle and the traffic control law knowledge graph.
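The structure of the traffic control law knowledge graph is not published. One hedged sketch replaces it with a flat violation-to-weight table and splits responsibility between two vehicles proportionally; the violation labels and weights are invented purely for illustration:

```python
# Hypothetical stand-in for the traffic-law knowledge graph: a mapping
# from observed violations to a responsibility weight.
TRAFFIC_RULES = {
    "ran_red_light": 1.0,
    "speeding": 0.7,
    "illegal_lane_change": 0.5,
}

def assign_responsibility(violations_a, violations_b):
    """Split responsibility between two vehicles from their violation lists.

    Returns (share_a, share_b) summing to 1.0. With no violations found
    on either side, equal responsibility is assumed (a placeholder rule).
    """
    score_a = max((TRAFFIC_RULES.get(v, 0.0) for v in violations_a), default=0.0)
    score_b = max((TRAFFIC_RULES.get(v, 0.0) for v in violations_b), default=0.0)
    total = score_a + score_b
    if total == 0:
        return 0.5, 0.5
    return score_a / total, score_b / total
```

The actual system would derive the violation facts from the identified driving track and driving information rather than take them as given.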
Further, after the step of acquiring the pre-constructed traffic control law knowledge graph and determining the accident responsibility of the accident vehicle based on the driving track of the accident vehicle, the driving information of the accident vehicle and the traffic control law knowledge graph, the method further comprises:
acquiring accident claim settlement operation steps, and displaying the accident claim settlement operation steps on the first operation guide interface;
responding to the first user's triggering operation for the accident claim on the first operation guide interface, and invoking a pre-trained vehicle loss identification model to identify the accident photo so as to obtain the vehicle damage degree;
and generating a claim settlement scheme according to the injury detection result, the vehicle damage degree and the accident responsibility.
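The patent gives no formula for the claim settlement scheme. A purely illustrative sketch, assuming linear rates for injury severity and vehicle damage, scaled by the other party's responsibility share (all rates and scales are placeholders):

```python
def payout(injury_severity, vehicle_damage, responsibility_share,
           injury_rate=10000.0, damage_rate=500.0):
    """Compute a claimant's payout under assumed linear rates.

    injury_severity: 0-10 scale from the injury detection result.
    vehicle_damage: 0-100 percent loss from the vehicle loss model.
    responsibility_share: the OTHER party's share of responsibility (0-1).
    The rates are illustrative placeholders, not values from the patent.
    """
    base = injury_severity * injury_rate + vehicle_damage * damage_rate
    return base * responsibility_share
```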
In order to solve the technical problems, the embodiment of the application also provides a rapid medical treatment device, which adopts the following technical scheme:
a rapid medical treatment device, comprising:
the operation step acquisition module is used for receiving the accident handling instruction and acquiring the operation steps of the accident handling process according to the accident handling instruction, wherein the operation steps of the accident handling process comprise a first operation step and a second operation step;
the first guiding module is used for generating a first operation guiding interface on the terminal of the first user and displaying the first operation step on the first operation guiding interface, wherein the first operation step at least comprises taking an accident picture;
The first triggering operation module is used for responding to the triggering operation of the first user on the first operation guide interface and obtaining an accident photo;
the injury detection module is used for importing the accident photo into a pre-trained human injury detection model to obtain injury detection results of injured persons;
The second guiding module is used for generating a second operation guiding interface on the terminal of the second user and displaying the second operation step and the injury detection result through the second operation guiding interface;
The second triggering operation module is used for responding to the triggering operation of the second user on the second operation guiding interface and generating a wounded-person first-aid operation step;
And the first-aid guiding module is used for displaying the wounded-person first-aid operation steps on the first operation guiding interface.

In order to solve the above technical problems, the embodiment of the present application further provides a computer device, which adopts the following technical scheme:
a computer device comprising a memory having stored therein computer readable instructions which when executed by a processor implement the steps of the fast hospitalization processing method as claimed in any of the preceding claims.
In order to solve the above technical problems, an embodiment of the present application further provides a computer readable storage medium, which adopts the following technical schemes:
A computer readable storage medium having stored thereon computer readable instructions which when executed by a processor implement the steps of the fast hospitalization processing method as claimed in any of the preceding claims.
Compared with the prior art, the embodiment of the application has the following main beneficial effects:
The application discloses a rapid medical treatment method, a rapid medical treatment device, computer equipment and a storage medium, and belongs to the technical field of artificial intelligence. In the method, a first operation guiding interface is generated at the terminal of the case reporter and the first operation step is displayed on it, instructing the case reporter to photograph the accident scene. The accident photo is obtained in response to the case reporter's triggering operation on the first operation guiding interface and is imported into a pre-trained human injury detection model to obtain the injury detection result of the wounded person. A suitable emergency hospital is then found and the corresponding medical staff determined according to the injury detection result; a second operation guiding interface is generated at the terminal of the medical staff, displaying the second operation step and the injury detection result. In response to the medical staff's triggering operation on the second operation guiding interface, a wounded-person first-aid operation step is generated and displayed on the first operation guiding interface to guide the case reporter through emergency treatment.
According to the medical treatment method, the case reporter is guided to take the accident photo and injury detection is performed on the wounded person in the photo, so that a suitable emergency hospital is found and the corresponding medical staff determined. Guided by the operation steps of the accident handling flow, the medical staff and the case reporter are prompted to communicate in real time and complete the steps of the medical treatment flow, so that the complete medical treatment flow is finished rapidly and the speed at which the wounded receive treatment is improved.
Drawings
In order to more clearly illustrate the solution of the present application, a brief description will be given below of the drawings required for the description of the embodiments of the present application, it being apparent that the drawings in the following description are some embodiments of the present application, and that other drawings may be obtained from these drawings without the exercise of inventive effort for a person of ordinary skill in the art.
FIG. 1 illustrates an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 shows a flow chart of one embodiment of a rapid medical treatment processing method according to the present application;
FIG. 3 shows a schematic structural diagram of an embodiment of a rapid medical treatment processing apparatus according to the present application;
FIG. 4 shows a schematic structural diagram of an embodiment of a computer device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terms used in the description herein are for the purpose of describing particular embodiments only and are not intended to limit the application. The terms "comprising" and "having", and any variations thereof, in the description, the claims and the above description of the drawings are intended to cover non-exclusive inclusions. The terms "first", "second" and the like in the description, the claims and the figures are used for distinguishing between different objects and not necessarily for describing a sequential or chronological order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
In order to make the person skilled in the art better understand the solution of the present application, the technical solution of the embodiment of the present application will be clearly and completely described below with reference to the accompanying drawings.
As shown in fig. 1, a system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is used as a medium to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 105 via the network 104 using the terminal devices 101, 102, 103 to receive or send messages or the like. Various communication client applications, such as a web browser application, a shopping class application, a search class application, an instant messaging tool, a mailbox client, social platform software, etc., may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 (Moving Picture Experts Group Audio Layer III) players, MP4 (Moving Picture Experts Group Audio Layer IV) players, laptop computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a background server that provides support for pages displayed on the terminal devices 101, 102, 103, and may be a stand-alone server, or may be a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (Content Delivery Network, CDN), and basic cloud computing services such as big data and artificial intelligence platforms.
It should be noted that, the fast medical treatment processing method provided by the embodiment of the present application is generally executed by a server, and accordingly, the fast medical treatment processing device is generally disposed in the server.
It should be understood that the number of terminal devices, networks and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow chart of one embodiment of the rapid medical treatment method according to the present application is shown. The embodiment of the application can acquire and process the related data based on artificial intelligence technology. Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions. The rapid medical treatment method comprises the following steps:
S201, receiving an accident handling instruction, and acquiring operation steps of an accident handling process according to the accident handling instruction, wherein the operation steps of the accident handling process comprise a first operation step and a second operation step.
Specifically, after receiving an accident handling instruction uploaded by the case reporter, the server obtains the operation steps of the accident handling process according to the instruction. The operation steps of the accident handling process are stored in cloud storage in advance and standardize the operations of the case reporter and the medical staff. They comprise a first operation step and a second operation step: the first operation step instructs the case reporter to complete the corresponding reporting operations, such as confirming that the surrounding environment is safe, turning on the hazard warning flashers, placing the warning triangle and photographing the accident scene; the second operation step instructs the medical staff to give corresponding wounded-person first-aid operation steps according to the scene conditions, such as how to stop bleeding and how to prevent secondary injury, so as to keep the wounded person's condition from deteriorating.
S202, a first operation guide interface is generated at a terminal of a first user, and the first operation step is displayed on the first operation guide interface, wherein the first operation step at least comprises shooting of an accident picture.
Specifically, in a specific embodiment of the present application, the first user is the case reporter, who may be a wounded person or an accident witness. After the case reporter initiates an accident handling instruction, the server generates a first operation guiding interface at the case reporter's terminal and displays the first operation step on it. The first operation step at least comprises the specific operations of confirming that the surrounding environment is safe, turning on the hazard warning flashers, placing the warning triangle, and taking an accident photo.
S203, responding to the triggering operation of the first user on the first operation guiding interface, and acquiring an accident photo.
Specifically, the first operation guiding interface includes a plurality of controls, each corresponding to one operation function; for example, the screenshot control on the first operation guiding interface is used to generate a viewfinder and capture the screen content inside it. In response to the case reporter's triggering operation on the screenshot control, the server generates a viewfinder on the first operation guiding interface and obtains shooting guidance information, such as voice guidance, to guide the case reporter in taking the accident photo.
S204, importing the accident photo into a pre-trained human injury detection model to obtain injury detection results of injured persons.
Specifically, the human injury detection model can be obtained by training a machine learning model, the training data being various human injury images annotated with joint information. The server imports the accident photo into the pre-trained human injury detection model to obtain the injury detection result of the wounded person.
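The model's architecture is not disclosed beyond "machine learning on annotated injury images". The sketch below substitutes a nearest-centroid classifier over feature vectors as a stand-in, purely to show the train-then-predict shape of this step; the feature vectors, labels, and classifier choice are all assumptions:

```python
def train_centroids(samples):
    """Train a nearest-centroid classifier.

    samples: list of (feature_vector, label) pairs.
    Returns a dict mapping each label to its mean feature vector.
    """
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, vec):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(lbl):
        return sum((a - b) ** 2 for a, b in zip(centroids[lbl], vec))
    return min(centroids, key=dist)
```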
In a specific embodiment of the present application, after obtaining the injury detection result of the wounded person, the server combines the injury detection result, the distance to each emergency hospital and each hospital's idle capacity, selects a matched emergency hospital from the docked emergency medical treatment list, and determines the corresponding medical staff for the visit. After the hospital is confirmed, the server sends emergency notification information to the hospital, carrying the wounded person's personal information and injury condition (including accident information, identity information, injured part, estimated time of arrival, injury photos, and the like); the server also sends the emergency notification information to the wounded person and related personnel, synchronizes the corresponding information with the hospital's emergency system through an interface, and notifies the hospital's on-duty staff by SMS to prepare for treatment.
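How the server weighs distance against each hospital's idle degree is not specified. A hedged sketch with invented weights, scoring each hospital in the docked list and picking the minimum (lower score means closer and/or more idle capacity):

```python
def pick_hospital(hospitals, w_distance=1.0, w_idle=2.0):
    """Pick an emergency hospital from the docked list.

    hospitals: list of dicts with 'name', 'distance_km', and 'idle'
    (0-1, higher = more free capacity). The weights and the linear
    scoring rule are illustrative assumptions; the patent only says
    distance and idle degree are combined.
    """
    def score(h):
        # distance penalizes, idle capacity rewards
        return w_distance * h["distance_km"] - w_idle * h["idle"] * 10.0
    return min(hospitals, key=score)["name"]
```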
S205, generating a second operation guide interface on the terminal of the second user, and displaying the second operation step and the injury detection result through the second operation guide interface.
For the wounded in an accident, only a rescue method suited to the specific injured part and injury condition can relieve the wounded person's condition; if the wrong first-aid method is used, the condition may not be relieved and may even deteriorate. Therefore, by sending the injury detection result to the corresponding medical staff, the medical staff can give targeted wounded-person first-aid operation steps according to the injury detection result and instruct the case reporter to carry out the rescue accordingly, so that the wounded person's condition can be effectively relieved.
Specifically, in the specific embodiment of the application, the second user is a medical staff, the server automatically contacts the medical staff of the corresponding department associated with the injury after determining the target medical care hospital, a second operation guiding interface is generated at the terminal of the medical staff, and the second operation step and the injury detection result are displayed on the second operation guiding interface, wherein the second operation step is used for indicating the medical staff to input the first-aid operation step for the injury at the second operation guiding interface. After the medical staff checks the injury detection result, the wounded first aid operation steps aiming at the wounded are input into the second operation guiding interface according to the injury detection result.
In a specific embodiment of the application, after the emergency rescue is found to be needed on site, a server initiates a multiparty video connection, emergency rescue personnel receive a connection request, access the video connection to remotely check the site situation, allocate corresponding medical personnel and first-aid materials to prepare for rescue, and remotely guide the site personnel to do rescue work and wait for the rescue personnel to arrive on site.
S206, generating a wounded-person first-aid operation step in response to the triggering operation of the second user on the second operation guiding interface.
Specifically, the server responds to the triggering operation of the medical staff on the second operation guiding interface, acquires the wounded first-aid operation steps uploaded by the medical staff, and sends the wounded first-aid operation steps to the case report terminal.
S207, displaying the wounded first-aid operation steps on the first operation guiding interface.
Specifically, after receiving the first-aid operation step of the wounded person, the case report terminal displays the first-aid operation step of the wounded person on the first operation guiding interface for the case report person to check, and instructs the case report person to rescue the wounded person in real time according to the first-aid operation step of the wounded person, so that secondary injury is prevented, and injury condition of the wounded person is prevented from deteriorating.
In the embodiment, the accident photo is obtained by guiding the case report person, the wounded condition detection is carried out on the wounded person in the accident photo, so that a proper medical treatment hospital is searched, corresponding medical care personnel are determined, the medical care personnel are prompted to communicate with the case report person in real time under the guidance of the operation step of the accident treatment process, and the operation step of the medical treatment process is completed, so that the complete medical treatment process is completed quickly, and the medical treatment speed of the wounded person is improved.
Further, the step of obtaining an accident photo in response to the triggering operation of the first user on the first operation guiding interface specifically includes:
responding to the triggering operation of the first user on the screenshot control, and generating a view finding frame on the first operation guiding interface;
and obtaining the accident photo by intercepting the picture content in the view-finding frame.
Specifically, the server responds to the triggering operation of the case reporter on the screenshot control, generates a viewfinder on the first operation guiding interface, acquires shooting guiding information, such as the voice prompt "please keep the rear camera aimed at the wounded person", detects the picture content in the viewfinder, and after the picture content in the viewfinder meets the requirement, intercepts the picture content in the viewfinder to obtain the accident photo.
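The capture step above can be sketched as follows. The frame is modelled as a 2D grid of pixel values and the brightness check standing in for "picture content meets the requirement" is an illustrative assumption; a real implementation would run subject detection on the camera stream.

```python
# Minimal sketch of the viewfinder capture step. The camera frame is a
# 2D grid of pixel brightness values; the viewfinder is an (x, y, w, h)
# rectangle. The mean-brightness check is a stand-in assumption for the
# "picture content meets the requirement" test.

def frame_meets_requirement(pixels, min_mean_brightness=30):
    """Reject frames that are too dark to analyse."""
    flat = [p for row in pixels for p in row]
    return sum(flat) / len(flat) >= min_mean_brightness

def capture_viewfinder(pixels, x, y, w, h):
    """Crop the viewfinder rectangle out of the full frame."""
    return [row[x:x + w] for row in pixels[y:y + h]]

frame = [[100] * 8 for _ in range(6)]   # a bright 8x6 test frame
if frame_meets_requirement(frame):
    accident_photo = capture_viewfinder(frame, x=2, y=1, w=4, h=3)
```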
Further, the accident photo includes a first accident photo and a second accident photo, and after the step of obtaining the accident photo by intercepting the picture content in the viewfinder, the method further includes:
Receiving the first accident photo, and carrying out content identification on the first accident photo to obtain the limb condition in the first accident photo;
analyzing the limb condition in the first accident photo to obtain first injury data of the wounded person, wherein the first injury data is body surface injury data;
determining a detection action video based on the first injury data, and displaying the detection action video on the first operation guide interface;
obtaining, through the viewfinder, pictures of the wounded person completing the actions shown in the detection action video, to obtain an injury detection video;
And extracting key frames of the injury detection video to obtain the second accident photo.
The accident photo comprises a first accident photo and a second accident photo, wherein the first accident photo is used for determining the body surface injury data of the wounded person, and the second accident photo is used for determining the joint injury data of the wounded person. After the first triggering operation of the screenshot control by the case report person, a first accident photo is obtained, after the second triggering operation of the screenshot control by the case report person, an injury detection video is obtained, and a second accident photo is obtained by extracting key frames in the injury detection video.
Specifically, the server responds to the first triggering operation of the case reporter on the screenshot control, generates a viewfinder on the first operation guiding interface, and obtains a first accident photo by intercepting the picture content in the viewfinder. The server then carries out content recognition on the first accident photo to obtain the background area and the human body area in the first accident photo, identifies the human body area to obtain the limb condition, and analyzes the limb condition in the first accident photo to obtain first injury data of the wounded person, wherein the first injury data is body surface injury data.
The server determines a detection action video based on the first injury data, if the first injury data is displayed as injury of arms of an injured person, the detection action video corresponding to the injury of arms is obtained, the detection action video is displayed on a first operation guiding interface, a second triggering operation of a screen capturing control by a case report person is obtained, a picture of the injured person when the detection action video is completed is obtained through a view finding frame, the injury detection video is obtained, a key frame is extracted from the injury detection video, a second accident photo is obtained, and the second accident photo is used for determining the joint injury data of the injured person.
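The key-frame extraction step mentioned above can be sketched as follows. For simplicity each frame is reduced to a single scalar "signature"; a real system would compare image features between frames, so this is an illustrative simplification rather than the patent's method.

```python
# Hedged sketch of key-frame extraction from the injury detection video:
# keep frames whose change relative to the last kept key frame exceeds a
# threshold. Each frame is reduced to a scalar signature for simplicity;
# a real system would compare image features.

def extract_key_frames(signatures, threshold=10):
    """Return indices of frames that differ noticeably from the
    most recently kept key frame."""
    if not signatures:
        return []
    keys = [0]                      # the first frame is always a key frame
    for i in range(1, len(signatures)):
        if abs(signatures[i] - signatures[keys[-1]]) >= threshold:
            keys.append(i)
    return keys

video = [0, 1, 2, 15, 16, 40, 41]   # mostly static, two abrupt changes
key_frames = extract_key_frames(video)
```

The frames at the selected indices would then serve as the second accident photo(s) for joint injury detection.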
Further, the step of importing the accident photo into a pre-trained human injury detection model to obtain injury detection results of injured persons specifically includes:
Importing the second accident photo into a pre-trained human disability detection model to obtain second injury data of the wounded person, wherein the second injury data is joint injury data;
Generating a wound detection result of the wounded based on the first wound data and the second wound data.
Specifically, the second accident photo is imported into a pre-trained human disability detection model to obtain second injury data of the injured person, wherein the second injury data are joint injury data, and injury detection results of the injured person are generated based on body surface injury data and joint injury data.
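A minimal sketch of combining the two injury data sources into one detection result is shown below. The severity scale and field names are assumptions for illustration; the patent does not fix a concrete schema.

```python
# Sketch of merging body-surface injury data (first injury data) and
# joint injury data (second injury data) into one injury detection
# result. Field names and the 0-5 severity scale are assumptions.

def generate_detection_result(surface_data, joint_data):
    """Merge the two finding lists; overall severity is the worst
    severity seen in either source."""
    all_findings = surface_data + joint_data
    overall = max((f["severity"] for f in all_findings), default=0)
    return {"findings": all_findings, "overall_severity": overall}

surface = [{"part": "left arm", "type": "abrasion", "severity": 1}]
joints = [{"part": "left elbow", "type": "restricted motion", "severity": 3}]
result = generate_detection_result(surface, joints)
```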
Further, the step of importing the second accident photo into a pre-trained human disability detection model to obtain second injury data of the injured person specifically includes:
Detecting the joint point of the second accident photo to obtain the human body joint point of the wounded person;
Labeling human body joint points of the wounded person to obtain human body joint point information;
and carrying out joint injury prediction on the wounded person based on the human body joint point information by using the human body injury detection model to obtain second injury data of the wounded person.
The server performs joint point detection on the second accident photo based on a multi-stage Part Affinity Fields (PAF) network to obtain the human body joint point information. PAF can detect human body joint points in real time and generate human body joint point images. PAF adopts a bottom-up method, and the network framework is divided into two branches: one branch uses a CNN to predict the joint points according to a human body contour map and preset confidence parameters, while the other branch uses a CNN to obtain the PAF value of each joint point. The PAF value is the affinity value of a joint point and can be regarded as a 2D vector recording the position and direction of the connecting line between two adjacent joint points. The two CNN branches are multi-stage network structures; each stage has an output value and represents a dimension. The two CNN branches jointly perform prediction and connection of the human body joint points to obtain the human body joint point image.
In a specific embodiment of the application, the human injury detection model can be constructed based on the Transformer network architecture. The Transformer is a model based on an encoder-decoder structure, where both the encoder and the decoder are composed of attention modules and feed-forward neural networks. It was the first model built on attention alone, which makes computation faster and achieves better results on translation tasks.
Specifically, the server imports the second accident photo into the multi-stage Part Affinity Fields network, performs human body joint point detection on the key frame image based on this preset network, identifies the human body joint points in the second accident photo, and labels them to obtain the human body joint point information, which comprises the human body joint point coordinates and the human body joint movement angles. Finally, the server imports the human body joint point information into the pre-trained human injury detection model, and generates the second injury data of the wounded person by carrying out feature encoding/decoding and feature mapping on the human body joint point information.
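The joint movement angle, stated above to be part of the human body joint point information, can be derived from three annotated joint-point coordinates by plain geometry. The sketch below computes the angle at the middle joint (e.g. shoulder-elbow-wrist); it is an illustrative calculation, not the PAF network itself.

```python
# Illustrative sketch: derive a joint movement angle from three labelled
# joint-point coordinates (e.g. shoulder, elbow, wrist). Pure 2D
# geometry; the detection network itself is out of scope here.
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# shoulder, elbow, wrist forming a right angle at the elbow
angle = joint_angle((0, 1), (0, 0), (1, 0))
```

An abnormally restricted angle across the frames of the detection video is the kind of signal the joint injury prediction would consume.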
In this embodiment, the present application obtains a first accident photo by intercepting the picture content in the viewfinder, and analyzes the limb condition in the first accident photo to obtain the body surface injury data of the wounded person. Based on the first injury data, a detection action video is determined and displayed on the first operation guiding interface; pictures of the wounded person completing the detection actions are obtained through the viewfinder to form an injury detection video, and key frames are extracted from it to obtain a second accident photo. Joint point detection is then performed on the second accident photo based on the multi-stage Part Affinity Fields (PAF) network, the joint point information of the wounded person is recognized through the injury detection model to obtain the joint injury data, and the injury detection result is generated from the body surface injury data and the joint injury data, so that a matched medical care hospital can be quickly found and the corresponding medical staff determined according to the injury detection result.
Further, after the step of displaying the wounded first aid operation step on the first operation guidance interface, the method further comprises:
Acquiring an accident responsibility fixing operation step, and displaying the accident responsibility fixing operation step on the first operation guiding interface;
responding to triggering operation of the first user on the first operation guiding interface aiming at accident responsibility, carrying out content identification on the accident photo, determining accident vehicles in the accident photo, and acquiring driving information of the accident vehicles;
invoking a monitoring video in a preset time period before and after an accident occurs, and identifying the driving track of the accident vehicle based on the monitoring video;
and acquiring a pre-constructed traffic control law knowledge graph, and judging accident responsibility of the accident vehicle based on the driving track of the accident vehicle, the driving information of the accident vehicle and the traffic control law knowledge graph.
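The responsibility-judgment step above can be sketched as a lookup against a toy traffic-law knowledge graph. The rule names, article numbers and the "violations" assumed to be derived from the driving track and driving information are all illustrative assumptions, not the patent's actual graph.

```python
# Toy sketch of consulting a traffic-law knowledge graph to judge
# responsibility. The graph is reduced to a rule table keyed by
# violations detected from the driving track; rule names, article
# numbers and liability levels are invented for illustration.

TRAFFIC_LAW_GRAPH = {
    "ran_red_light": {"rule": "Art. 26", "liability": "full"},
    "illegal_lane_change": {"rule": "Art. 45", "liability": "major"},
    "speeding": {"rule": "Art. 42", "liability": "partial"},
}

LIABILITY_RANK = {"full": 3, "major": 2, "partial": 1}

def judge_responsibility(violations):
    """Return the matched rule carrying the heaviest liability, if any."""
    matched = [TRAFFIC_LAW_GRAPH[v] for v in violations if v in TRAFFIC_LAW_GRAPH]
    if not matched:
        return {"rule": None, "liability": "none"}
    return max(matched, key=lambda m: LIABILITY_RANK[m["liability"]])

verdict = judge_responsibility(["speeding", "ran_red_light"])
```

A real knowledge graph would relate entities (vehicles, road sections, signals) and legal provisions rather than a flat table, but the lookup-and-rank shape is the same.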
Further, after the step of acquiring the pre-constructed traffic control law knowledge graph and determining the accident responsibility of the accident vehicle based on the driving track of the accident vehicle, the driving information of the accident vehicle and the traffic control law knowledge graph, the method further comprises:
acquiring accident claim settlement operation steps, and displaying the accident claim settlement operation steps on the first operation guide interface;
Responding to the triggering operation of the first user on the first operation guide interface aiming at the accident claim, and calling a pre-trained vehicle loss identification model to identify the accident photo so as to obtain the vehicle loss degree;
and generating a claim scheme according to the injury detection result, the vehicle damage degree and the accident responsibility.
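The claim-scheme generation named in the steps above can be sketched as a simple composition of the three inputs. The base amounts and the responsibility-to-ratio mapping are invented for illustration; actual payout rules would come from the insurance policy.

```python
# Hedged sketch of composing a claim scheme from the injury detection
# result, the vehicle damage degree and the accident responsibility.
# Cost-per-severity and the responsibility ratios are assumptions.

RESPONSIBILITY_RATIO = {"full": 1.0, "major": 0.7, "partial": 0.3, "none": 0.0}

def generate_claim_scheme(injury_severity, vehicle_damage_cost, responsibility):
    """Scale injury and vehicle costs by the at-fault party's ratio."""
    injury_cost = injury_severity * 5000       # assumed cost per severity level
    ratio = RESPONSIBILITY_RATIO[responsibility]
    return {
        "injury_payout": injury_cost * ratio,
        "vehicle_payout": vehicle_damage_cost * ratio,
        "total": (injury_cost + vehicle_damage_cost) * ratio,
    }

scheme = generate_claim_scheme(injury_severity=2, vehicle_damage_cost=8000,
                               responsibility="major")
```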
As a semantic network, a knowledge graph has strong expressive power and modeling flexibility. First, a knowledge graph is a semantic representation that can model entities, concepts, attributes and the relationships among them in the real world. Second, the knowledge graph and its derivative technologies define data exchange standards and data modeling protocols, and the related technologies cover links such as knowledge extraction, knowledge integration, knowledge management and knowledge application.
Specifically, the server acquires the accident responsibility fixing operation step and displays it on the first operation guiding interface. In response to the triggering operation of the case reporter on the first operation guiding interface for accident responsibility fixing, the server carries out content identification on the accident photo, determines the accident vehicles in the accident photo, and acquires the driving information of the accident vehicles. It then calls the monitoring videos within a preset time period before and after the accident, identifies the driving tracks of the accident vehicles based on the monitoring videos, acquires the pre-constructed traffic control law knowledge graph, judges the accident responsibility of the accident vehicles based on the driving tracks, the driving information and the traffic control law knowledge graph, and generates an accident responsibility identification document.
In another embodiment of the application, after a traffic accident occurs, the case reporter dials the police. After the traffic police answer the call and inquire about the situation, the server establishes a video connection between the reporter and a traffic police officer at the accident-handling post, guides the reporter through a remote survey and evidence collection by video to judge the accident responsibility, and generates the accident responsibility identification document.
Specifically, after the accident responsibility identification document is generated, the server can also guide the vehicle owner to automatically complete the vehicle damage claim. The server acquires the accident claim settlement operation step and displays it on the first operation guiding interface; in response to the triggering operation of the first user on the first operation guiding interface for the accident claim, it calls the pre-trained vehicle loss identification model to identify the accident photo and obtain the vehicle damage degree, and generates a claim scheme according to the injury detection result, the vehicle damage degree and the accident responsibility.
The vehicle loss recognition model is trained based on a machine learning model. The training data is a series of vehicle loss photos collected in advance, on which vehicle loss features are marked beforehand; the vehicle loss photos and their corresponding claim schemes, matched by these vehicle loss features, are imported into the machine learning model to obtain the vehicle loss recognition model.
In another embodiment of the application, the server and the insurance company system realize data intercommunication. When the wounded person is admitted and registered, the server can be queried for the insurance policy of the accident-causing vehicle to confirm whether the case qualifies for compensation. If the compensation conditions are met, a pre-payment process can be initiated, so that the treatment expenses of the wounded person are reimbursed directly by the insurance company without the wounded person or the accident-causing driver having to advance them; after treatment is finished, settlement of the treatment expenses is completed automatically between the server and the insurance company system.
In this embodiment, the application discloses a rapid medical treatment method, which belongs to the technical field of artificial intelligence. The method generates a first operation guiding interface at the terminal of the case reporter and displays the first operation step on it, instructing the case reporter to complete the shooting of the accident photo. The accident photo is obtained in response to the triggering operation of the case reporter on the first operation guiding interface, and is imported into a pre-trained human injury detection model to obtain the injury detection result of the wounded person. According to the injury detection result, a suitable medical care hospital is found and the corresponding medical staff determined; a second operation guiding interface is generated at the terminal of the medical staff, on which the second operation step and the injury detection result are displayed. In response to the triggering operation of the medical staff on the second operation guiding interface, the wounded-person first-aid operation steps are generated, and they are displayed on the first operation guiding interface to guide the case reporter to complete the emergency treatment.
According to the medical treatment method, the accident photo is obtained by guiding the case report person, the wounded condition detection is carried out on the wounded person in the accident photo, so that a proper medical treatment hospital is searched, corresponding medical care personnel are determined, the medical care personnel and the case report person are prompted to communicate in real time under the guidance of the operation steps of the accident treatment process, the operation steps of the medical treatment process are completed, the complete medical treatment process is completed rapidly, and the medical treatment speed of the wounded person is improved.
In this embodiment, the electronic device (for example, the server shown in fig. 1) on which the rapid medical treatment method runs may perform the method through a wired connection or a wireless connection. It should be noted that the wireless connection may include, but is not limited to, a 3G/4G connection, Wi-Fi connection, Bluetooth connection, WiMAX connection, ZigBee connection, UWB (ultra wideband) connection, and other now known or later developed wireless connections.
It is emphasized that to further ensure privacy and security of the incident photos, the incident photos may also be stored in a blockchain node.
The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, consensus mechanism, encryption algorithm and the like. The blockchain (Blockchain), essentially a de-centralized database, is a string of data blocks that are generated in association using cryptographic methods, each of which contains information from a batch of network transactions for verifying the validity (anti-counterfeit) of its information and generating the next block. The blockchain may include a blockchain underlying platform, a platform product services layer, an application services layer, and the like.
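The "cryptographically linked blocks" idea described above can be illustrated by a toy chain that stores the SHA-256 hash of each accident photo together with the previous block's hash, so that tampering with a stored photo becomes detectable. This is a deliberate simplification for illustration, not a real blockchain node.

```python
# Toy sketch of storing accident photos on a hash-linked chain: each
# block records the SHA-256 hash of the photo bytes combined with the
# previous block's hash. A mismatch on re-hashing reveals tampering.
# Simplified for illustration; real chains add consensus, signatures etc.
import hashlib

def block_hash(prev_hash, photo_bytes):
    return hashlib.sha256(prev_hash.encode() + photo_bytes).hexdigest()

def append_block(chain, photo_bytes):
    prev = chain[-1]["hash"] if chain else "0" * 64   # genesis marker
    chain.append({"prev": prev, "hash": block_hash(prev, photo_bytes)})
    return chain

chain = []
append_block(chain, b"photo-1")
append_block(chain, b"photo-2")
# re-hashing altered photo bytes no longer matches the stored hash
tampered = block_hash(chain[0]["prev"], b"photo-1-altered")
```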
Those skilled in the art will appreciate that implementing all or part of the processes of the methods of the embodiments described above may be accomplished by way of computer readable instructions, stored on a computer readable storage medium, which when executed may comprise processes of embodiments of the methods described above. The storage medium may be a nonvolatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a random access Memory (Random Access Memory, RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited in order and may be performed in other orders, unless explicitly stated herein. Moreover, at least some of the steps in the flowcharts of the figures may include a plurality of sub-steps or stages that are not necessarily performed at the same time, but may be performed at different times, the order of their execution not necessarily being sequential, but may be performed in turn or alternately with other steps or at least a portion of the other steps or stages.
With further reference to fig. 3, as an implementation of the method shown in fig. 2, the present application provides an embodiment of a rapid medical treatment apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 2, and the apparatus may be applied to various electronic devices in particular.
As shown in fig. 3, the rapid medical treatment device according to the present embodiment includes:
An operation step obtaining module 301, configured to receive an accident handling instruction, and obtain an operation step of an accident handling procedure according to the accident handling instruction, where the operation step of the accident handling procedure includes a first operation step and a second operation step;
the first guiding module 302 is configured to generate a first operation guiding interface at a terminal of a first user, and display the first operation step on the first operation guiding interface, where the first operation step at least includes taking an accident photo;
A first trigger operation module 303, configured to obtain an accident photo in response to a trigger operation of the first user on the first operation guide interface;
The injury detection module 304 is configured to import the accident photo into a pre-trained human injury detection model, so as to obtain an injury detection result of an injured person;
a second guiding module 305, configured to generate a second operation guiding interface at a terminal of a second user, and display the second operation step and the injury detection result on the second operation guiding interface;
A second trigger operation module 306, configured to generate a wounded-person first-aid operation step in response to a trigger operation of the second user on the second operation guidance interface;
An emergency operation guiding module 307 for displaying the wounded emergency operation steps on the first operation guiding interface.
Further, the first trigger operation module 303 specifically includes:
The view finding frame generating unit is used for responding to the triggering operation of the first user on the screenshot control and generating a view finding frame on the first operation guiding interface;
and the picture intercepting unit is used for obtaining the accident photo by intercepting the picture content in the view-finding frame.
Further, the accident photo includes a first accident photo and a second accident photo, and the first trigger operation module 303 further includes:
a first accident photo obtaining unit, configured to receive the first accident photo, and perform content identification on the first accident photo to obtain the limb condition in the first accident photo;
The body surface injury prediction unit is used for analyzing the limb condition in the first accident photo to obtain first injury data of the wounded person, wherein the first injury data is body surface injury data;
a detection action display unit for determining a detection action video based on the first injury data and displaying the detection action video on the first operation guidance interface;
the wounded action shooting unit is used for acquiring, through the viewfinder, pictures of the wounded person completing the actions shown in the detection action video, to obtain an injury detection video;
And the second accident photo acquisition unit is used for extracting key frames of the injury detection video to obtain the second accident photo.
Further, the injury detection module 304 specifically includes:
the joint injury prediction unit is used for importing the second accident photo into a pre-trained human body disability detection model to obtain second injury data of the injured person, wherein the second injury data are joint injury data;
And the injury detection unit is used for generating injury detection results of the wounded based on the first injury data and the second injury data.
Further, the joint injury prediction unit specifically includes:
The joint point detection subunit is used for detecting the joint point of the second accident photo and acquiring a human body joint point of the wounded person;
the joint point labeling subunit is used for labeling the human body joint points of the wounded person to obtain human body joint point information;
and the joint injury prediction subunit is used for predicting the joint injury of the injured person based on the human body joint point information by using the human body injury detection model to obtain second injury data of the injured person.
Further, the rapid medical treatment device further comprises:
the responsibility fixing operation module is used for acquiring accident responsibility fixing operation steps and displaying the accident responsibility fixing operation steps on the first operation guide interface;
The responsibility-fixing guiding module is used for responding to the triggering operation of the first user on the first operation guiding interface aiming at accident responsibility fixing, carrying out content identification on the accident photo, determining the accident vehicles in the accident photo and acquiring the driving information of the accident vehicles;
The track recognition module is used for calling the monitoring video in a preset time period before and after the accident occurs and recognizing the running track of the accident vehicle based on the monitoring video;
the accident responsibility determination module is used for acquiring the pre-constructed traffic control law knowledge graph and judging the accident responsibility of the accident vehicle based on the driving track of the accident vehicle, the driving information of the accident vehicle and the traffic control law knowledge graph.
Further, the rapid medical treatment device further comprises:
the claim settlement operation module is used for acquiring accident claim settlement operation steps and displaying the accident claim settlement operation steps on the first operation guide interface;
The claim settlement guiding module is used for responding to the triggering operation of the first user on the first operation guiding interface aiming at the accident claim settlement, calling a pre-trained vehicle loss identification model to identify the accident photo, and obtaining the vehicle loss degree;
and the accident claim settlement module is used for generating a claim scheme according to the injury detection result, the vehicle damage degree and the accident responsibility.
In this embodiment, the application discloses a rapid medical treatment device, which belongs to the technical field of artificial intelligence. The device generates a first operation guiding interface at the terminal of the case reporter and displays the first operation step on it, instructing the case reporter to complete the shooting of the accident photo. The accident photo is obtained in response to the triggering operation of the case reporter on the first operation guiding interface, and is imported into a pre-trained human injury detection model to obtain the injury detection result of the wounded person. According to the injury detection result, a suitable medical care hospital is found and the corresponding medical staff determined; a second operation guiding interface is generated at the terminal of the medical staff, on which the second operation step and the injury detection result are displayed. In response to the triggering operation of the medical staff on the second operation guiding interface, the wounded-person first-aid operation steps are generated, and they are displayed on the first operation guiding interface to guide the case reporter to complete the emergency treatment.
According to the medical treatment method, the accident photo is obtained by guiding the case report person, the wounded condition detection is carried out on the wounded person in the accident photo, so that a proper medical treatment hospital is searched, corresponding medical care personnel are determined, the medical care personnel and the case report person are prompted to communicate in real time under the guidance of the operation steps of the accident treatment process, the operation steps of the medical treatment process are completed, the complete medical treatment process is completed rapidly, and the medical treatment speed of the wounded person is improved.
In order to solve the technical problems, the embodiment of the application also provides computer equipment. Referring specifically to fig. 4, fig. 4 is a basic structural block diagram of a computer device according to the present embodiment.
The computer device 4 comprises a memory 41, a processor 42 and a network interface 43 communicatively connected to each other via a system bus. It should be noted that only a computer device 4 having components 41-43 is shown in the figure, but it should be understood that not all of the illustrated components are required to be implemented, and that more or fewer components may be implemented instead. It will be appreciated by those skilled in the art that the computer device herein is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions, and its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device may be a desktop computer, notebook computer, palmtop computer, cloud server, or other computing device. The computer device can interact with a user through a keyboard, mouse, remote control, touch pad, voice-control device, or the like.
The memory 41 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, magnetic disk, optical disc, and the like. In some embodiments, the memory 41 may be an internal storage unit of the computer device 4, such as a hard disk or memory of the computer device 4. In other embodiments, the memory 41 may also be an external storage device of the computer device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the computer device 4. Of course, the memory 41 may also comprise both an internal storage unit of the computer device 4 and an external storage device. In this embodiment, the memory 41 is generally used to store the operating system and various application software installed on the computer device 4, such as computer readable instructions of the rapid medical treatment method. Further, the memory 41 may be used to temporarily store various types of data that have been output or are to be output.
The processor 42 may, in some embodiments, be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 42 is typically used to control the overall operation of the computer device 4. In this embodiment, the processor 42 is configured to execute computer readable instructions stored in the memory 41 or to process data, for example to execute the computer readable instructions of the rapid medical treatment method.
The network interface 43 may comprise a wireless or wired network interface and is typically used to establish a communication connection between the computer device 4 and other electronic devices.
The application discloses a computer device, which belongs to the technical field of artificial intelligence. According to the method, a first operation guidance interface is generated at the terminal of the case reporter, and first operation steps are displayed on it to instruct the case reporter to photograph the accident. In response to the case reporter's trigger operation on the first operation guidance interface, the accident photo is obtained and imported into a pre-trained human injury detection model to obtain an injury detection result for the injured person. Based on the injury detection result, a suitable hospital is searched for and the corresponding medical staff are determined; a second operation guidance interface is generated at the terminal of the medical staff, on which second operation steps and the injury detection result are displayed. In response to the medical staff's trigger operation on the second operation guidance interface, first-aid operation steps for the injured person are generated and displayed on the first operation guidance interface to guide the case reporter through the emergency treatment.
In this way, the medical treatment method guides the case reporter to obtain the accident photo and performs injury detection on the injured person in the photo, so that a suitable hospital can be found and the corresponding medical staff determined. The medical staff and the case reporter then communicate in real time under the guidance of the operation steps of the accident handling flow and complete the steps of the medical treatment process, so that the complete treatment process is finished rapidly and the speed at which the injured person receives medical care is improved.
The present application also provides another embodiment, namely a computer readable storage medium storing computer readable instructions executable by at least one processor to cause the at least one processor to perform the steps of the rapid medical treatment method as described above.
The application discloses a storage medium, which belongs to the technical field of artificial intelligence. According to the method, a first operation guidance interface is generated at the terminal of the case reporter, and first operation steps are displayed on it to instruct the case reporter to photograph the accident. In response to the case reporter's trigger operation on the first operation guidance interface, the accident photo is obtained and imported into a pre-trained human injury detection model to obtain an injury detection result for the injured person. Based on the injury detection result, a suitable hospital is searched for and the corresponding medical staff are determined; a second operation guidance interface is generated at the terminal of the medical staff, on which second operation steps and the injury detection result are displayed. In response to the medical staff's trigger operation on the second operation guidance interface, first-aid operation steps for the injured person are generated and displayed on the first operation guidance interface to guide the case reporter through the emergency treatment.
In this way, the medical treatment method guides the case reporter to obtain the accident photo and performs injury detection on the injured person in the photo, so that a suitable hospital can be found and the corresponding medical staff determined. The medical staff and the case reporter then communicate in real time under the guidance of the operation steps of the accident handling flow and complete the steps of the medical treatment process, so that the complete treatment process is finished rapidly and the speed at which the injured person receives medical care is improved.
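The generation of first-aid operation steps in response to the medical staff's trigger operation can be sketched as a simple lookup keyed on the injury detection result. The injury categories and step texts below are illustrative assumptions only; the patent does not specify them.

```python
# Hypothetical mapping from detected injury type to ordered first-aid steps,
# which would be displayed on the case reporter's guidance interface.
FIRST_AID_STEPS = {
    "joint injury": [
        "Keep the injured joint still; do not attempt to straighten it",
        "Apply a cold compress to limit swelling",
        "Immobilize the joint with a splint or sling until help arrives",
    ],
    "laceration": [
        "Apply direct pressure with a clean cloth to stop bleeding",
        "Keep the wound elevated above the heart if possible",
    ],
}

def generate_first_aid_steps(detection_result: dict) -> list:
    """Return ordered first-aid steps for the detected injury type,
    falling back to generic advice for unrecognized injury types."""
    return FIRST_AID_STEPS.get(
        detection_result["injury_type"],
        ["Keep the injured person still and wait for medical staff"],
    )

steps = generate_first_aid_steps(
    {"injury_type": "joint injury", "severity": "moderate"}
)
for i, step in enumerate(steps, 1):
    print(f"{i}. {step}")
```

A fallback entry matters here: the guidance interface must still show safe generic advice when the model reports an injury type without a curated step list.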
From the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or alternatively by hardware; in most cases the former is preferred. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disc) and including instructions for causing a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, etc.) to perform the methods of the embodiments of the present application.
The application is operational with numerous general-purpose or special-purpose computing system environments or configurations, such as personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
It is apparent that the above-described embodiments are only some, not all, of the embodiments of the present application; the preferred embodiments are shown in the drawings, which do not limit the scope of the claims. The application may be embodied in many different forms; these embodiments are provided so that the present disclosure will be thorough and complete. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. Any equivalent structure made using the contents of the specification and drawings of the application, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of protection of the application.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210171423.4A CN114550951B (en) | 2022-02-24 | 2022-02-24 | A rapid medical treatment method, device, computer equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210171423.4A CN114550951B (en) | 2022-02-24 | 2022-02-24 | A rapid medical treatment method, device, computer equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114550951A CN114550951A (en) | 2022-05-27 |
CN114550951B true CN114550951B (en) | 2025-03-18 |
Family
ID=81677354
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210171423.4A Active CN114550951B (en) | 2022-02-24 | 2022-02-24 | A rapid medical treatment method, device, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114550951B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115019980B (en) * | 2022-08-08 | 2022-10-28 | 阿里健康科技(杭州)有限公司 | Method and device for processing inquiry data, user terminal and server |
CN116028670B (en) * | 2023-03-31 | 2023-06-23 | 中国人民解放军总医院第三医学中心 | Cloud edge cooperative intelligent detection injury classification system |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105005932A (en) * | 2015-08-20 | 2015-10-28 | 南京安通杰科技实业有限公司 | Traffic accident responsibility identification and claim settlement method |
CN106357810A (en) * | 2016-11-02 | 2017-01-25 | 严治 | Pre-hospital medical networking system for traffic emergency treatment and application method thereof |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103902794A (en) * | 2012-12-26 | 2014-07-02 | 比亚迪股份有限公司 | Mobile terminal and method for shooting wound picture through mobile terminal to process injuries |
CN113674523A (en) * | 2020-05-14 | 2021-11-19 | 华为技术有限公司 | Traffic accident analysis method, device and equipment |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105005932A (en) * | 2015-08-20 | 2015-10-28 | 南京安通杰科技实业有限公司 | Traffic accident responsibility identification and claim settlement method |
CN106357810A (en) * | 2016-11-02 | 2017-01-25 | 严治 | Pre-hospital medical networking system for traffic emergency treatment and application method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN114550951A (en) | 2022-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111310562B (en) | Vehicle driving risk management and control method based on artificial intelligence and related equipment thereof | |
CN112950773A (en) | Data processing method and device based on building information model and processing server | |
CN112329659A (en) | Weak supervision semantic segmentation method based on vehicle image and related equipment thereof | |
CN114550951B (en) | A rapid medical treatment method, device, computer equipment and storage medium | |
CN113642519B (en) | A face recognition system and a face recognition method | |
CN114550053A (en) | A method, device, computer equipment and storage medium for determining responsibility for a traffic accident | |
CN114550052B (en) | Vehicle accident handling method, device, computer equipment and storage medium | |
CN107833328B (en) | Access control verification method and device based on face recognition and computing equipment | |
CN111259682B (en) | Method and device for monitoring safety of construction site | |
CN114821806A (en) | Method and device for determining behavior of operator, electronic equipment and storage medium | |
CN114170272A (en) | Accident reporting and storing method based on sensing sensor in cloud environment | |
CN115909506A (en) | Abnormal behavior identification method, device, equipment and medium | |
CN114332925A (en) | Method, system, device and computer-readable storage medium for detecting pets in elevators | |
CN114638973A (en) | Target image detection method and image detection model training method | |
Bhandari et al. | Development of a real-time security management system for restricted access areas using computer vision and deep learning | |
US20200272819A1 (en) | Translation to braille | |
CN115984890A (en) | Bill text recognition method and device, computer equipment and storage medium | |
CN119229535A (en) | A behavior identification method, device, computer equipment and storage medium | |
KR102639250B1 (en) | Server, method and system for providing safety management monitoring services related to construction sites | |
JP2014215747A (en) | Tracking device, tracking system, and tracking method | |
CN117456430A (en) | Video identification method, electronic equipment and storage medium | |
JP2021009613A (en) | Computer program, information processing device, and information processing method | |
CN114549221A (en) | Vehicle accident loss processing method and device, computer equipment and storage medium | |
CN112633244B (en) | Social relationship identification method and device, electronic equipment and storage medium | |
CN113792569B (en) | Object recognition method, device, electronic equipment and readable medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||