CN115497239A - Intelligent voice processing method, system and device based on satellite and storage medium - Google Patents
Intelligent voice processing method, system and device based on satellite and storage medium
- Publication number
- CN115497239A CN115497239A CN202210978804.3A CN202210978804A CN115497239A CN 115497239 A CN115497239 A CN 115497239A CN 202210978804 A CN202210978804 A CN 202210978804A CN 115497239 A CN115497239 A CN 115497239A
- Authority
- CN
- China
- Prior art keywords
- information
- satellite
- correlation
- user terminal
- semantic analysis
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/02—Mechanical actuation
- G08B13/12—Mechanical actuation by the breaking or disturbance of stretched cords or wires
- G08B13/122—Mechanical actuation by the breaking or disturbance of stretched cords or wires for a perimeter fence
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/02—Services making use of location information
- H04W4/021—Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
Abstract
The invention discloses a satellite-based intelligent voice processing method, a satellite-based intelligent voice system, a computer device and a storage medium. By monitoring the position information and communication voice information of a user terminal, the invention automatically generates rescue notification information when the user terminal enters or approaches a dangerous area and the user expresses a request for help, or related semantics, by voice, so that rescue actions can be prepared in advance. This mitigates the risk that a user facing a sudden emergency cannot actively send a call for help in time, helps to ensure the safety of the user, and enriches the functions of the communication system. The invention is applicable to the technical field of satellite communication.
Description
Technical Field
The invention relates to the technical field of satellite communication, in particular to an intelligent voice processing method based on a satellite, an intelligent voice system based on the satellite, a computer device and a storage medium.
Background
Integrated space-ground communication, which combines satellites with terrestrial base stations, extends the usable range of user terminals and thus improves the convenience of communication. This convenience, however, brings a new problem. The enlarged communication range encourages users to explore farther, for example by carrying their terminals into forests, deserts, the open sea and similar regions. In such regions, communication conditions such as signal quality are affected by the natural environment and cannot be guaranteed, and communication signals may at times be unavailable. Moreover, these regions are sparsely populated and difficult to reach, so people travelling there are easily exposed to natural hazards, may be unable to obtain timely rescue when in danger, and may not even be able to send a formal distress signal.
Disclosure of Invention
The invention aims to provide a satellite-based intelligent voice processing method, a satellite-based intelligent voice system, a computer device and a storage medium, addressing the technical problem that current satellite communication technology encourages users to explore but offers no technical support for their safety during such exploration.
In one aspect, an embodiment of the present invention includes a satellite-based intelligent speech processing method, including:
detecting position information of a user terminal;
acquiring communication voice information of the user terminal;
determining a first degree of correlation; the first degree of correlation represents a degree of correlation between the position information and a dangerous area;
when the first correlation degree reaches a first threshold value, performing semantic analysis on the communication voice information to obtain a semantic analysis result;
and triggering generation of rescue notification information according to the semantic analysis result.
Further, the determining the first degree of correlation includes:
determining a historical motion trajectory of the user terminal according to the position information;
predicting, according to the historical motion trajectory, a predicted motion trajectory of the user terminal;
when the predicted motion trajectory passes through the dangerous area, acquiring the distance from the position indicated by the position information to the dangerous area along the predicted motion trajectory;
determining the magnitude of the first degree of correlation according to the magnitude of the distance; the magnitude of the first degree of correlation is inversely related to the magnitude of the distance.
Further, the determining the first degree of correlation includes:
setting the first degree of correlation to a value greater than the first threshold when the position information is within the dangerous area.
Further, the performing semantic analysis on the communication voice information to obtain a semantic analysis result includes:
performing semantic extraction on the communication voice information to obtain semantic information;
and taking the semantic information as the semantic analysis result.
Further, the performing semantic analysis on the communication voice information to obtain a semantic analysis result includes:
performing semantic prediction on the communication voice information to obtain semantic prediction information;
and taking the semantic prediction information as the semantic analysis result.
Further, the triggering and generating rescue notification information according to the semantic analysis result includes:
acquiring a help-seeking keyword;
determining a second degree of correlation; the second correlation degree represents the correlation degree between the semantic analysis result and the help-seeking keyword;
and when the second correlation reaches a second threshold value, generating the rescue notification information.
Further, the satellite-based intelligent voice processing method further comprises the following steps:
determining a rescue department corresponding to the rescue notification information;
and sending the rescue notification information to the rescue department.
In another aspect, an embodiment of the present invention further includes a satellite-based intelligent voice system, the satellite-based intelligent voice system comprising:
a low earth orbit satellite; the low-orbit satellite is used for establishing connection with a user terminal and detecting the position information of the user terminal; acquiring communication voice information of the user terminal;
a communication core network; the communication core network is used for establishing connection with the low orbit satellite, acquiring the position information and the communication voice information from the low orbit satellite, determining a first correlation degree, wherein the first correlation degree represents the correlation degree between the position information and a dangerous area, and when the first correlation degree reaches a first threshold value, sending the communication voice information to an IP multimedia system;
an IP multimedia system; and the IP multimedia system is used for performing semantic analysis on the communication voice information to obtain a semantic analysis result, and triggering and generating rescue notification information according to the semantic analysis result.
In another aspect, embodiments of the present invention further include a computer apparatus including a memory for storing at least one program and a processor for loading the at least one program to perform the satellite-based intelligent speech processing method of the embodiments.
In another aspect, the present invention further includes a storage medium having stored therein a processor-executable program, which when executed by a processor, is configured to perform the satellite-based intelligent speech processing method of the embodiments.
The beneficial effects of the invention are as follows: by monitoring the position information and communication voice information of the user terminal, the satellite-based intelligent voice processing method automatically generates rescue notification information when the user terminal enters or approaches a dangerous area and the user expresses a request for help, or related semantics, by voice, so that rescue actions can be prepared. This mitigates the risk that a user who carries the terminal into a dangerous area and meets a sudden emergency cannot actively send a call for help in time, helps to ensure the safety of the user, and enriches the functions of the communication system.
Drawings
FIG. 1 is a schematic diagram illustrating steps of a satellite-based intelligent speech processing method according to an embodiment;
FIG. 2 is a schematic diagram of an embodiment of a satellite-based smart voice system;
FIG. 3 is a schematic diagram illustrating a step of acquiring communication voice information of a user terminal in the embodiment;
FIG. 4 is a diagram illustrating a request for NWDAF data analysis in an embodiment;
FIG. 5 is a schematic diagram of a historical motion trail, a predicted motion trail and a danger area in an embodiment;
FIG. 6 is a diagram illustrating a request for IMS voice analysis in an embodiment.
Detailed Description
In this embodiment, referring to fig. 1, the satellite-based intelligent speech processing method includes the following steps:
s1, detecting position information of a user terminal;
s2, acquiring communication voice information of the user terminal;
s3, determining a first correlation degree; the first correlation degree represents a correlation degree between the position information and the dangerous area;
s4, when the first correlation reaches a first threshold, carrying out semantic analysis on the communication voice information to obtain a semantic analysis result;
and S5, triggering and generating rescue notification information according to the semantic analysis result.
In this embodiment, steps S1-S5 may be applied in the satellite-based intelligent speech system shown in fig. 2. Referring to fig. 2, the smart voice system includes a low earth orbit satellite, a communication core network (which may be a 5G core network in particular), and an IP Multimedia System (IMS). Specifically, the low earth orbit satellite establishes connection with a user terminal and a communication core network, and the communication core network establishes connection with an IP multimedia system. The user terminal can be a mobile phone, a notebook computer, a tablet computer or special terminal equipment and the like. Steps S1 and S2 may be performed by the low earth orbit satellite, step S3 by the communication core network, and steps S4 and S5 by the IP multimedia system.
In step S1, the low earth orbit satellite may periodically acquire the position information of the user terminal. The position information may be obtained by the user terminal through its own global positioning module, or several low-orbit satellites may each perform ranging on the user terminal, with the current position determined by solving for the intersection of the ranging results.
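As an illustration of the multi-satellite ranging variant only, the following is a minimal least-squares multilateration sketch in Python; the function name, the coordinate frame and the solver are assumptions, since the description does not specify how the intersection of the ranging results is computed.

```python
import numpy as np

def multilaterate(sat_positions, ranges):
    """Estimate the terminal position from ranging results of several satellites.

    sat_positions: (N, 3) satellite coordinates in a common Cartesian frame (metres).
    ranges: (N,) measured satellite-to-terminal distances (metres); N >= 4 for a 3-D fix.
    Linearises |x - p_i|^2 = r_i^2 by subtracting the first equation from the others,
    then solves the resulting over-determined linear system by least squares.
    """
    p = np.asarray(sat_positions, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * (p[1:] - p[0])
    b = (r[0] ** 2 - r[1:] ** 2) + np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position
```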
In step S2, the communication voice information of the user terminal may be real-time communication such as a mobile phone call, or non-real-time voice communication carried out through social software.
When step S2 is executed, the flow shown in fig. 3 may be followed. Referring to fig. 3, in a voice call the user terminal first establishes a session with the IMS through SIP messages and exchanges the related SDP information within those messages. After receiving a SIP message from the user terminal, the P-CSCF in the IMS extracts the related media information from the SDP carried in the message, such as the audio and video codecs, their parameters and the audio and video port numbers. The P-CSCF then sends the media information, the related Call-ID and part of the SIP message content to the RtpProcess as Cmd commands carried in UDP messages. After receiving a Cmd, the RtpProcess extracts the command parameters, creates an RTP session with the user terminal, associates the user terminal with its stream, and at the same time issues a Cmd command via UDP instructing the RecProcess to extract the RTP content. After receiving the Cmd, the RecProcess extracts the command parameters, listens on the RTP stream and waits to receive it. When the user terminal sends uplink RTP packets, they are forwarded to the IMS side through the 5G core network; after receiving the RTP stream, the RtpProcess on the IMS side splits it and sends a copy to the RecProcess, while also forwarding the RTP stream to the peer IMS domain or the peer user terminal according to the normal RTP processing flow. After receiving the RTP stream, the RecProcess extracts the original code stream according to the RTP protocol, further extracts the payload content according to the voice codec negotiated between the user terminal and the IMS, and treats the stream as complete once all payload content has been extracted and stored. The RecProcess then filters and combines all collected RTP payloads into raw audio data and synthesizes an audio file according to the originally negotiated codec; it may also provide voice transcoding between related coding formats. In this way the communication voice information is obtained.
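The payload-extraction step mentioned above follows the standard RTP packet layout of RFC 3550. The sketch below shows only that step, as a hedged illustration; the function name and error handling are assumptions, and the rest of the RtpProcess/RecProcess flow is omitted.

```python
import struct

def extract_rtp_payload(packet: bytes) -> bytes:
    """Strip the RFC 3550 RTP header and return the encoded voice payload."""
    if len(packet) < 12:
        raise ValueError("packet shorter than the fixed 12-byte RTP header")
    first, _mpt, _seq, _ts, _ssrc = struct.unpack("!BBHII", packet[:12])
    if first >> 6 != 2:
        raise ValueError("not an RTP version 2 packet")
    padding = (first >> 5) & 0x01
    extension = (first >> 4) & 0x01
    csrc_count = first & 0x0F
    offset = 12 + 4 * csrc_count                 # skip contributing-source identifiers
    if extension:                                # skip the optional header extension
        _, ext_words = struct.unpack("!HH", packet[offset:offset + 4])
        offset += 4 + 4 * ext_words
    payload = packet[offset:]
    if padding and payload:                      # last byte gives the padding length
        payload = payload[:-payload[-1]]
    return payload
```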
In step S3, referring to fig. 4, the user terminal sends a data service request to the low-orbit satellite, which forwards it to the 5G core network. An AMF (Access and Mobility Management Function) network element in the 5G core network receives the request and forwards it to an NWDAF (Network Data Analytics Function) network element, which performs the data analysis. After the NWDAF performs step S3, the resulting data analysis results, such as the first degree of correlation, may be returned to the user terminal through the AMF network element and the low-orbit satellite in turn.
In this embodiment, when the communication core network performs step S3, that is, the step of determining the first correlation, the following steps may be specifically performed:
S301, when the position information is located within the dangerous area, setting the first degree of correlation to a value greater than the first threshold;
S302, when the position information is not within the dangerous area, determining a historical motion trajectory of the user terminal according to the position information;
S303, predicting, according to the historical motion trajectory, a predicted motion trajectory of the user terminal;
S304, when the predicted motion trajectory passes through the dangerous area, obtaining the distance from the position indicated by the position information to the dangerous area along the predicted motion trajectory;
S305, determining the magnitude of the first degree of correlation according to the magnitude of the distance; the magnitude of the first degree of correlation is inversely related to the magnitude of the distance.
Before performing steps S301-S305, the communication core network may pre-store the positions of several dangerous areas, each of which may be represented by its boundary or vertex coordinates.
In step S301, the communication core network may first determine whether the position information is located within the dangerous area; if so, it sets the first degree of correlation to a value greater than the first threshold. The first degree of correlation then exceeds the first threshold, so when step S4 is executed the IMS is triggered to perform semantic analysis on the communication voice information and obtain a semantic analysis result. Specifically, the range of the first degree of correlation may be limited to [0,1], the first threshold may be set to a value greater than 0 and less than 1, and the first degree of correlation may simply be set to 1 when step S301 is executed, so that it exceeds the first threshold.
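A minimal sketch of step S301, assuming each dangerous area is stored as a polygon of vertex coordinates and using a standard ray-casting membership test; the threshold value and all names are illustrative assumptions.

```python
FIRST_THRESHOLD = 0.8  # assumed value in (0, 1); the description only requires 0 < threshold < 1

def point_in_polygon(point, polygon):
    """Ray-casting test: is the planar (x, y) point inside the polygon given by its vertices?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def first_correlation_if_inside(position, danger_polygons):
    """Step S301: return 1.0 (> FIRST_THRESHOLD) when the terminal is already inside
    any dangerous area, otherwise None so that steps S302-S305 take over."""
    if any(point_in_polygon(position, poly) for poly in danger_polygons):
        return 1.0
    return None
```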
If the communication core network judges that the position information is not within the dangerous area, it may proceed to steps S302-S305.
In step S302, the communication core network may obtain the historical record of the position information and determine the historical motion trajectory of the user terminal from it. The historical motion trajectory reflects the positions reached by the person carrying the user terminal over the preceding period. Referring to fig. 5, point A indicates the point given by the current position information, i.e. the most recently detected location of the user terminal, and the solid line indicates the historical motion trajectory: over the past period of time the user terminal has in fact moved to point A along the trajectory shown by the solid line.
In step S303, the communication core network may run a trajectory prediction algorithm such as an MDP, POMDP or LSTM model on the historical motion trajectory obtained in step S302 to obtain a predicted motion trajectory of the user terminal. The predicted motion trajectory is the output of the trajectory prediction algorithm and represents the positions the user terminal is likely to reach over a future period of time, predicted from the historical motion trajectory. Referring to fig. 5, the dotted line represents the predicted motion trajectory, i.e. the user terminal is predicted to move onward from point A along the trajectory shown by the dotted line.
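The description leaves the choice of trajectory predictor open (MDP, POMDP or LSTM). Purely as a placeholder, the sketch below uses constant-velocity extrapolation so that the surrounding steps can be exercised end to end; a real deployment would substitute a learned model.

```python
import numpy as np

def predict_trajectory(history, steps=20):
    """Placeholder predictor: constant-velocity extrapolation of the last few fixes.

    history: (N, 2) array of past (x, y) positions, oldest first.
    Returns a (steps, 2) array of predicted future positions.
    """
    h = np.asarray(history, dtype=float)
    if len(h) < 2:
        raise ValueError("need at least two historical fixes to extrapolate")
    velocity = np.mean(np.diff(h[-5:], axis=0), axis=0)  # average recent displacement per step
    return h[-1] + np.outer(np.arange(1, steps + 1), velocity)
```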
In step S304, when the predicted motion trajectory passes through the dangerous area, the distance from the current position to the dangerous area along the predicted motion trajectory is acquired. In fig. 5 the predicted motion trajectory passes through the dangerous area, and its first intersection with the dangerous area is point B, so the distance from the current position to the dangerous area along the predicted motion trajectory is the length of curve AB.
In step S305, the magnitude of the first degree of correlation is determined from the magnitude of this distance. The two are inversely related: the smaller the distance determined in step S304, the larger the first degree of correlation; the larger the distance, the smaller the first degree of correlation.
Specifically, referring to fig. 5, a point C is set on the predicted motion trajectory, and the length of curve BC represents an acceptable safety margin. A fixed reference value is set (for example, the length of the historical motion trajectory shown by the solid line in fig. 5 may be used as the reference value); the ratio of the length of curve BC to the reference value is taken as the first threshold, and the ratio of the length of curve AB to the reference value is taken as the first degree of correlation, so that the requirement that the magnitude of the first degree of correlation be negatively correlated with the magnitude of the distance can be met. Whether the first degree of correlation is too large can then be judged against the fixed first threshold; when it is too large (reaches the first threshold), the user terminal can be judged to be approaching the dangerous area and in need of external rescue, and step S4 is triggered.
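A minimal sketch of steps S304-S305 under stated assumptions: the along-track distance AB is measured on the sampled predicted trajectory, and the mapping from AB to the first degree of correlation is one possible monotone-decreasing choice consistent with the claimed inverse relationship (the Fig. 5 example instead works with ratios of curve lengths to a fixed reference).

```python
import numpy as np

def arc_length(points):
    """Total length of a polyline given as an (N, 2) array of points."""
    pts = np.asarray(points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

def distance_to_danger(predicted, danger_polygon, in_area):
    """Length of curve AB: along-track distance from the current position (the first
    predicted point) to the first predicted point lying inside the dangerous area.

    in_area(point, polygon) -> bool is the membership test; returns None if the
    predicted trajectory never enters the area."""
    for i, point in enumerate(predicted):
        if in_area(point, danger_polygon):   # point B: first entry into the area
            return arc_length(predicted[: i + 1])
    return None

def first_correlation(ab, reference_length):
    """Map the distance AB to a value in (0, 1] that shrinks as AB grows.
    The exact mapping is an assumption; it only has to be inverse in AB."""
    if ab is None:
        return 0.0
    return reference_length / (reference_length + ab)
```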
In this embodiment, data such as the contour lines or feasible paths of the dangerous area may also be acquired from a geographic management system; the curve correlation between the historical motion trajectory and a contour line, or between the historical motion trajectory and a feasible path, may then be determined and used as the first degree of correlation. The principle is that the dangerous area and its surrounding terrain are similar, and a user terminal entering or approaching the dangerous area generally moves along similar tracks (such as along the terrain contours, or along a feasible path opened up previously). Therefore, if the curve correlation between the historical motion trajectory and the contour line, or between the historical motion trajectory and the feasible path, is high, it can reasonably be judged that the user terminal is closer to the dangerous area and the first degree of correlation should be higher.
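The description does not define "curve correlation". One plausible realisation, shown below as an assumption only, is to turn the discrete Fréchet distance between the historical trajectory and a contour line or feasible path into a similarity score in (0, 1].

```python
import numpy as np

def discrete_frechet(path_a, path_b):
    """Discrete Fréchet distance between two polylines, computed by dynamic programming."""
    a = np.asarray(path_a, dtype=float)
    b = np.asarray(path_b, dtype=float)
    n, m = len(a), len(b)
    ca = np.full((n, m), np.inf)
    for i in range(n):
        for j in range(m):
            d = np.linalg.norm(a[i] - b[j])
            if i == 0 and j == 0:
                ca[i, j] = d
            elif i == 0:
                ca[i, j] = max(ca[0, j - 1], d)
            elif j == 0:
                ca[i, j] = max(ca[i - 1, 0], d)
            else:
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d)
    return float(ca[-1, -1])

def curve_correlation(track, contour, scale):
    """Turn the distance into a similarity in (0, 1]; `scale` is an assumed
    normalisation constant, e.g. a typical track length."""
    return scale / (scale + discrete_frechet(track, contour))
```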
In step S4, referring to fig. 6, the user terminal sends a user voice request to the low earth orbit satellite, which forwards it to the 5G core network; the 5G core network receives the request and forwards it to the IMS, which performs the voice analysis. After the IMS performs step S5, the obtained semantic analysis result may be returned to the user terminal via the 5G core network and the low earth orbit satellite in turn.
In this embodiment, when the IMS performs step S4, that is, performs semantic analysis on the communication voice information to obtain a semantic analysis result, the IMS may specifically perform the following steps:
S401A, semantic extraction is carried out on the communication voice information to obtain semantic information;
S402A, using semantic information as a semantic analysis result.
In step S401A, semantic extraction may be performed on the communication voice information through a voice recognition algorithm, so as to obtain semantic information in a text or other form. In step S402A, the semantic information included in the communication voice information itself is directly used as the semantic analysis result obtained in step S4.
By performing steps S401A-S402A, the semantic analysis result can be obtained with fewer data processing steps, since no further processing of the semantic information is required.
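One possible realisation of steps S401A-S402A, shown only as a sketch: the off-the-shelf SpeechRecognition package transcribes the synthesised call audio, and the transcript is used directly as the semantic analysis result. The engine, file path and language are assumptions; the description does not prescribe a particular recogniser.

```python
import speech_recognition as sr

def extract_semantics(wav_path: str, language: str = "zh-CN") -> str:
    """Steps S401A-S402A: transcribe the synthesised call audio and use the
    transcript itself as the semantic analysis result."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)            # read the whole audio file
    try:
        return recognizer.recognize_google(audio, language=language)
    except sr.UnknownValueError:
        return ""                                    # nothing intelligible was recognised
```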
In this embodiment, when the IMS performs step S4, that is, performs semantic analysis on the communication voice information to obtain a semantic analysis result, the IMS may specifically perform the following steps:
S401B, semantic prediction is carried out on the communication voice information to obtain semantic prediction information;
S402B, using semantic prediction information as a semantic analysis result.
In step S401B, semantic extraction may first be performed on the communication voice information through a voice recognition algorithm to obtain semantic information in text or another form, and the semantic information may then be predicted using a semantic prediction algorithm such as Long Short-Term Memory (LSTM) to obtain semantic prediction information. Semantic prediction information is a reasonable extension made on the basis of the semantic information; for example, if the semantic information contains the sentence "we are in danger", the semantic prediction information obtained may be the sentence "save us" (the specific prediction result depends on the semantic prediction algorithm used and how it was trained). In step S402B, the semantic prediction information obtained in step S401B is used as the semantic analysis result of step S4.
By executing steps S401B-S402B, semantic prediction is performed on top of the semantic information contained in the communication voice information itself, so that the semantic prediction information extends the content of the semantic information and the semantic analysis result contains more, mutually related information, which helps decisions based on the semantic analysis result achieve a lower false-positive rate.
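A sketch of steps S401B-S402B under assumptions: `transcribe` and `predict_next` stand in for the speech recogniser and the LSTM-style semantic predictor, neither of which is specified in the description.

```python
def analyse_by_prediction(audio_bytes, transcribe, predict_next) -> str:
    """Steps S401B-S402B: return the transcript extended with a predicted continuation.

    transcribe(audio) -> str and predict_next(text) -> str are assumed hooks for the
    speech recogniser and the semantic predictor (e.g. an LSTM language model)."""
    text = transcribe(audio_bytes)
    continuation = predict_next(text)          # e.g. "we are in danger" -> "save us"
    return f"{text} {continuation}".strip()    # both parts feed the keyword check in step S5
```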
In this embodiment, when the IMS performs step S5, that is, the step of triggering generation of the rescue notification information according to the semantic analysis result, the IMS may specifically perform the following steps:
s501, obtaining a help-seeking keyword;
s502, determining a second correlation degree; the second correlation degree represents the correlation degree between the semantic analysis result and the help-seeking keyword;
and S503, when the second correlation reaches a second threshold value, generating rescue notification information according to the help-seeking keyword.
In step S501, words such as "save", "call for help", "SOS" and "MAYDAY" may be set as help-seeking keywords.
In step S502, the correlation between the help-seeking keywords and the semantic analysis result obtained in step S4 may be calculated through an algorithm such as word2vec, giving the second degree of correlation. A second threshold may also be set for the word2vec (or similar) algorithm: when the second degree of correlation reaches the second threshold, the semantic analysis result can be judged to be close to the help-seeking keywords, i.e. to express the same meaning as the help-seeking keywords; otherwise, the semantic analysis result is judged not to be close to the help-seeking keywords and not to express the same meaning.
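A hedged sketch of step S502: the embedding function, keyword list and threshold value are assumptions; the description names word2vec but fixes neither the model nor the vocabulary.

```python
import numpy as np

HELP_KEYWORDS = ["save us", "call for help", "SOS", "MAYDAY"]
SECOND_THRESHOLD = 0.75   # assumed value; the description ties it to the embedding model

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def second_correlation(semantic_result, embed):
    """Step S502: highest similarity between the analysis result and any help keyword.
    embed(text) -> np.ndarray stands in for a word2vec-style sentence embedding."""
    result_vec = embed(semantic_result)
    return max(cosine(result_vec, embed(keyword)) for keyword in HELP_KEYWORDS)

def should_generate_rescue_notification(semantic_result, embed):
    """Trigger condition used in step S503."""
    return second_correlation(semantic_result, embed) >= SECOND_THRESHOLD
```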
In step S503, when the second degree of correlation reaches the second threshold, i.e. the semantic analysis result is close to the help-seeking keywords and expresses the same meaning, the rescue notification information may be generated. Specifically, a rescue notification template corresponding to the position information of the user terminal may be invoked to generate the rescue notification information. For example, if the position information indicates that the user terminal is at sea, the template may be "Rescue needed: we are currently at sea, coordinates xx"; if the position information indicates that the user terminal is in a forest, the template may be "Rescue needed: we are currently in a forest, coordinates xx". The coordinates in the template are filled in according to the position information of the user terminal.
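A minimal sketch of the template step in S503; the template wording, terrain labels and function names are illustrative assumptions based on the sea and forest examples above.

```python
TEMPLATES = {
    # illustrative templates, following the sea and forest examples in the description
    "sea":    "Rescue needed: we are currently at sea, coordinates {lat:.5f}, {lon:.5f}.",
    "forest": "Rescue needed: we are currently in a forest, coordinates {lat:.5f}, {lon:.5f}.",
}
DEFAULT_TEMPLATE = "Rescue needed at coordinates {lat:.5f}, {lon:.5f}."

def build_rescue_notification(terrain_type: str, lat: float, lon: float) -> str:
    """Step S503: fill the template matching the terminal's terrain type with its coordinates."""
    template = TEMPLATES.get(terrain_type, DEFAULT_TEMPLATE)
    return template.format(lat=lat, lon=lon)
```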
In this embodiment, the satellite-based intelligent speech processing method further includes the following steps:
s6, determining a rescue department corresponding to the rescue notification information;
and S7, sending the rescue notification information to a rescue department.
Steps S6 and S7 may be performed by the communication core network. In step S6, the type of rescue department to notify, e.g. a maritime rescue department or a forest rescue department, may be determined from the geographic type expressed in the rescue notification information, and the specific department to notify, for example the maritime or forest rescue department closest to the user terminal, may be determined from the position coordinates expressed in the rescue notification information. In step S7, the communication core network may send the rescue notification information to the rescue department over the public Internet or a private network, and the rescue department may initiate a rescue of the user of the user terminal in response.
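A sketch of step S6 under assumptions: the department registry is illustrative, and the nearest matching department is chosen by great-circle distance; the description only requires selecting the closest department of the right type.

```python
import math

RESCUE_DEPARTMENTS = [
    # illustrative registry: (name, terrain type served, latitude, longitude)
    ("Maritime rescue station East", "sea",    22.54, 114.06),
    ("Forest rescue brigade North",  "forest", 23.13, 113.26),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in kilometres."""
    earth_radius_km = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def pick_rescue_department(terrain_type, lat, lon):
    """Step S6: choose the department of the matching type that is nearest to the terminal."""
    candidates = [d for d in RESCUE_DEPARTMENTS if d[1] == terrain_type] or RESCUE_DEPARTMENTS
    return min(candidates, key=lambda d: haversine_km(lat, lon, d[2], d[3]))
```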
In this embodiment, a step of obtaining the user terminal's permission may be performed before steps S1-S7. Once permission is obtained, information such as position information and communication voice information may be acquired from the user terminal.
A computer program for executing the satellite-based intelligent voice processing method of this embodiment may be written and stored in a computer device or storage medium; when the computer program is read and run, it executes the satellite-based intelligent voice processing method of this embodiment and thus achieves the same technical effects.
It should be noted that, unless otherwise specified, when a feature is referred to as being "fixed" or "connected" to another feature, it can be directly fixed or connected to the other feature or indirectly fixed or connected to the other feature. Furthermore, the descriptions of upper, lower, left, right, etc. used in the present disclosure are only relative to the mutual positional relationship of the constituent parts of the present disclosure in the drawings. As used in this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. In addition, unless defined otherwise, all technical and scientific terms used in this example have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description of the embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this embodiment, the term "and/or" includes any combination of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element of the same type from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. The use of any and all examples, or exemplary language ("e.g.," such as "or the like") provided with this embodiment is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, operations of processes described in this embodiment can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described by the present embodiments (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described in this embodiment includes these and other different types of non-transitory computer-readable storage media when such media includes instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described in the present embodiment to convert the input data to generate output data that is stored to a non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above description is only a preferred embodiment of the present invention, and the present invention is not limited to that embodiment. Any modification, equivalent substitution or improvement made within the spirit and principle of the present invention, provided it achieves the technical effects of the present invention by the same means, shall fall within the protection scope of the present invention. The technical solution and/or implementation of the invention may be modified and varied in other ways within the protection scope of the invention.
Claims (10)
1. A satellite-based intelligent voice processing method is characterized by comprising the following steps: detecting position information of a user terminal;
acquiring communication voice information of the user terminal;
determining a first degree of correlation; the first degree of correlation represents a degree of correlation between the position information and a dangerous area;
when the first correlation degree reaches a first threshold value, performing semantic analysis on the communication voice information to obtain a semantic analysis result;
and triggering and generating rescue notification information according to the semantic analysis result.
2. The intelligent satellite-based speech processing method of claim 1, wherein said determining a first degree of correlation comprises:
determining a historical motion trajectory of the user terminal according to the position information;
predicting, according to the historical motion trajectory, a predicted motion trajectory of the user terminal;
when the predicted motion trajectory passes through the dangerous area, acquiring the distance from the position indicated by the position information to the dangerous area along the predicted motion trajectory;
determining the magnitude of the first degree of correlation according to the magnitude of the distance; the magnitude of the first degree of correlation is inversely related to the magnitude of the distance.
3. The intelligent satellite-based speech processing method according to claim 1 or 2, wherein said determining a first degree of correlation comprises:
setting the first degree of correlation to a value greater than the first threshold when the position information is within the dangerous area.
4. The intelligent voice processing method based on satellite according to claim 1, wherein the performing semantic analysis on the communication voice information to obtain a semantic analysis result includes:
semantic extraction is carried out on the communication voice information to obtain semantic information;
and taking the semantic information as the semantic analysis result.
5. The intelligent voice processing method based on satellite according to claim 1, wherein the performing semantic analysis on the communication voice information to obtain a semantic analysis result includes:
performing semantic prediction on the communication voice information to obtain semantic prediction information;
and taking the semantic prediction information as the semantic analysis result.
6. The intelligent satellite-based voice processing method according to claim 1, 2, 4 or 5, wherein the triggering generation of rescue notification information according to the semantic analysis result comprises:
acquiring a help-seeking keyword;
determining a second degree of correlation; the second degree of correlation represents the degree of correlation between the semantic analysis result and the help-seeking keyword;
and when the second correlation reaches a second threshold value, generating the rescue notification information.
7. The intelligent satellite-based speech processing method according to claim 1, further comprising:
determining a rescue department corresponding to the rescue notification information;
and sending the rescue notification information to the rescue department.
8. A satellite-based intelligent speech system, said satellite-based intelligent speech system comprising:
a low earth orbit satellite; the low-orbit satellite is used for establishing connection with a user terminal and detecting the position information of the user terminal; acquiring communication voice information of the user terminal;
a communication core network; the communication core network is used for establishing connection with the low orbit satellite, acquiring the position information and the communication voice information from the low orbit satellite, determining a first correlation degree, wherein the first correlation degree represents the correlation degree between the position information and a dangerous area, and when the first correlation degree reaches a first threshold value, sending the communication voice information to an IP multimedia system;
an IP multimedia system; and the IP multimedia system is used for carrying out semantic analysis on the communication voice information to obtain a semantic analysis result, and triggering and generating rescue notification information according to the semantic analysis result.
9. A computer apparatus comprising a memory for storing at least one program and a processor for loading the at least one program to perform the satellite based intelligent speech processing method of any one of claims 1-7.
10. A computer readable storage medium having stored therein a processor-executable program, wherein the processor-executable program, when executed by a processor, is configured to perform the satellite-based smart speech processing method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210978804.3A CN115497239A (en) | 2022-08-16 | 2022-08-16 | Intelligent voice processing method, system and device based on satellite and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210978804.3A CN115497239A (en) | 2022-08-16 | 2022-08-16 | Intelligent voice processing method, system and device based on satellite and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115497239A true CN115497239A (en) | 2022-12-20 |
Family
ID=84466988
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210978804.3A Pending CN115497239A (en) | 2022-08-16 | 2022-08-16 | Intelligent voice processing method, system and device based on satellite and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115497239A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6028514A (en) * | 1998-10-30 | 2000-02-22 | Lemelson Jerome H. | Personal emergency, safety warning system and method |
CN103616708A (en) * | 2013-11-15 | 2014-03-05 | 四川长虹电器股份有限公司 | Positioning search and rescue device based on Beidou satellite navigation system |
CN106454724A (en) * | 2016-09-18 | 2017-02-22 | 重庆中交通信信息技术有限公司 | Positioning rescue system and method |
CN107680348A (en) * | 2017-07-31 | 2018-02-09 | 深圳市心上信息技术有限公司 | Fence intelligent alarm method, device, storage medium and computer equipment |
CN213247248U (en) * | 2020-03-18 | 2021-05-25 | 王新凤 | Bracelet satellite positioning system |
CN112198538A (en) * | 2020-09-11 | 2021-01-08 | 中交第二公路勘察设计研究院有限公司 | Beidou-based field reconnaissance personnel safety monitoring method and system |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN119152654A (en) * | 2024-11-14 | 2024-12-17 | 深圳位置网科技有限公司 | Alarm position uploading method and system based on IMS communication |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20221220 |