
CN111209902B - Method, device, server and readable storage medium for assisting law enforcement - Google Patents


Info

Publication number
CN111209902B
Authority
CN
China
Prior art keywords
case
law enforcement
case type
server
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010312809.3A
Other languages
Chinese (zh)
Other versions
CN111209902A (en)
Inventor
黄希
聂贻俊
刘翼
张登星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Pvirtech Co ltd
Original Assignee
Chengdu Pvirtech Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Pvirtech Co ltd filed Critical Chengdu Pvirtech Co ltd
Priority to CN202010312809.3A
Publication of CN111209902A
Application granted
Publication of CN111209902B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/26Government or public services
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/06Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L15/063Training
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • G10L15/30Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Development Economics (AREA)
  • Primary Health Care (AREA)
  • Software Systems (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Educational Administration (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiments of the application provide a method, a device, a server and a readable storage medium for assisting law enforcement, with the aim of automatically helping law enforcement personnel determine a reasonable law enforcement process. The method for assisting law enforcement comprises the following steps: the server receives media data and a pre-judged case type sent by a law enforcement terminal, wherein the media data are collected by the law enforcement terminal at the case scene and the pre-judged case type is input into the law enforcement terminal by law enforcement personnel; the server determines at least one reference case type according to the media data; the server judges whether the pre-judged case type is the same as any reference case type; and when the pre-judged case type is the same as one of the reference case types, the server determines the pre-judged case type as the final case type, establishes a workflow related to the final case type, and sends information about the workflow to the law enforcement terminal.

Description

Method, device, server and readable storage medium for assisting law enforcement
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to a method, a device, a server and a readable storage medium for assisting law enforcement.
Background
With the diversification of social activities and the steady implementation of national rule-of-law policy, on the one hand, when disputes or accidents occur between individuals, between individuals and enterprises, or between enterprises, a case is usually reported to an alarm center so that police and officials (hereinafter, law enforcement personnel) can mediate, provide support, or carry out a rescue. On the other hand, law enforcement personnel may also inspect enterprises according to a work plan, either routinely or without notice, for example to check pollution discharge standards, fire-fighting equipment, or the legality of business activities.
Law enforcement personnel mediate disputes and inspect or penalize enterprises, individuals, and so on in order to defend the interests of the legitimate party by legal means. By the same token, law enforcement personnel must follow the law enforcement procedure prescribed by regulations and accept public supervision when carrying out mediation, inspection, and punishment. In practice, however, the case scene is often complex, making it difficult for law enforcement personnel to grasp the case as a whole and to determine the most reasonable law enforcement procedure, which leads to loopholes in the law enforcement process.
Disclosure of Invention
The embodiments of the application provide a method, a device, a server and a readable storage medium for assisting law enforcement, aiming to automatically help law enforcement personnel determine a reasonable law enforcement flow and avoid loopholes in the law enforcement process.
In a first aspect, an embodiment of the present application provides a method for assisting law enforcement, which is applied to a server in an assisted law enforcement system, where the assisted law enforcement system further includes a law enforcement terminal, and the law enforcement terminal and the server are communicatively connected, where the method includes:
the server receives media data sent by the law enforcement terminal and receives a pre-judged case type sent by the law enforcement terminal, wherein the media data is the media data collected by the law enforcement terminal aiming at a case field, and the pre-judged case type is the case type input by law enforcement personnel to the law enforcement terminal;
the server side determines at least one reference case type according to the media data;
the server judges whether the pre-judged case type is the same as any one of the reference case types;
and under the condition that the pre-judged case type is the same as one reference case type, the server side determines the pre-judged case type as a final case type, establishes a workflow related to the final case type and sends the information of the workflow to the law enforcement terminal.
A second aspect of embodiments of the present application provides an apparatus for assisting law enforcement, the apparatus comprising:
the receiving module is used for receiving the media data sent by the law enforcement terminal and receiving the pre-judged case type sent by the law enforcement terminal, wherein the media data is the media data collected by the law enforcement terminal aiming at the case site, and the pre-judged case type is the case type input by law enforcement personnel to the law enforcement terminal;
a reference case type determining module for determining at least one reference case type according to the media data;
the judging module is used for judging whether the pre-judged case type is the same as any one of the reference case types;
and the workflow establishing module is used for determining the pre-judged case type as a final case type under the condition that the pre-judged case type is the same as a reference case type, establishing a workflow related to the final case type and sending the information of the workflow to the law enforcement terminal.
A third aspect of embodiments of the present application provides a readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps in the method according to the first aspect of the present application.
A fourth aspect of the embodiments of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the method according to the first aspect of the present application.
With the assisted law enforcement method provided by the application, when enforcing the law on site, law enforcement personnel can collect media data of the case scene through the law enforcement terminal and, based on their own judgment, input a pre-judged case type into the terminal. The law enforcement terminal sends the collected media data and the pre-judged case type to the server. The server can then automatically determine, from the media data of the case scene, at least one reference case type that most likely fits the scene, and judge whether the officer's pre-judged case type is the same as one of the reference case types. If it is, the officer's judgment is consistent with the server's automatic judgment, and the pre-judged case type is determined as the final case type. This ensures that the case type is accurately grasped and avoids an error affecting the entire law enforcement flow due to an improperly determined case type.
In addition, the server establishes a workflow related to the final case type and sends the workflow information to the law enforcement terminal, so that law enforcement personnel can carry out law enforcement operations by following the workflow displayed on the terminal, avoiding loopholes in the law enforcement process.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a flow chart of a method of assisting law enforcement in accordance with an embodiment of the present invention;
FIG. 2 is a flow chart of determining a reference case type according to an embodiment of the present invention;
FIG. 3 is a diagram of a training word vector transformation model according to an embodiment of the present invention;
fig. 4 is a schematic view of a device for assisting law enforcement according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In the related art, law enforcement personnel mediate disputes and inspect or penalize enterprises, individuals, and so on in order to defend the interests of the legitimate party by legal means. By the same token, law enforcement personnel must follow the law enforcement procedure prescribed by regulations and accept public supervision when carrying out mediation, inspection, and punishment. In practice, however, the case scene is often complex, making it difficult for law enforcement personnel to grasp the case as a whole and to determine the most reasonable law enforcement procedure, which leads to loopholes in the law enforcement process.
Therefore, the application provides a method, a device, a server and a readable storage medium for assisting law enforcement through the following embodiments, and aims to automatically help law enforcement personnel to determine a reasonable law enforcement flow and avoid law enforcement loopholes in the law enforcement process.
Referring to fig. 1, fig. 1 is a flowchart of a method for assisting law enforcement according to an embodiment of the present invention. The method is applied to a server side in an auxiliary law enforcement system, the auxiliary law enforcement system further comprises a law enforcement terminal, and the law enforcement terminal is in communication connection with the server side. As shown in fig. 1, the method comprises the steps of:
step S110: the server receives the media data sent by the law enforcement terminal and receives the pre-judged case type sent by the law enforcement terminal, wherein the media data is the media data collected by the law enforcement terminal aiming at the case site, and the pre-judged case type is the case type input by law enforcement personnel to the law enforcement terminal.
The media data collected by the law enforcement terminal at the case scene include, but are not limited to, video data, audio-video data, and image data. In a specific implementation, the law enforcement terminal has a media data collection function; after arriving at the case scene, the law enforcement personnel start this function so that the terminal collects media data of the scene. After collecting the media data of the case scene, the law enforcement terminal sends them to the server.
All available case types can be displayed on the interface of the law enforcement terminal, and law enforcement personnel can tap a displayed case type so that it is input into the terminal as the pre-judged case type. For ease of understanding, the interface may, for example, display: traffic accident case, hit-and-run case, traffic accident dispute case, dangerous driving case, environmental administrative punishment case, environmental pollution and damage accident case, and so on. By analyzing the situation at the case scene, the law enforcement personnel determine the most probable case type and tap it on the interface, inputting it into the terminal as the pre-judged case type. The law enforcement terminal thus receives the pre-judged case type and sends it to the server.
Step S120: and the server determines at least one reference case type according to the media data.
Here, a reference case type is a possible case type determined from the media data of the case scene. For ease of understanding, assume that executing step S120 determines the reference case types "hit-and-run case" and "traffic accident dispute case".
For how the server determines the type of the reference case according to the media data, please refer to the following embodiments, which are not repeated herein.
Step S130: and the server judges whether the type of the pre-judged case is the same as that of a reference case.
If the pre-judged case type given by the law enforcement officer is the same as one of the reference case types determined by the server, the officer's judgment is consistent with the server's, and the two judgments corroborate each other, so both have high credibility.
If the pre-judged case type differs from all of the reference case types determined by the server, the officer's judgment is inconsistent with the server's, and the two judgments cannot corroborate each other, so neither can be considered highly credible.
For ease of understanding, following the above assumption, the reference case types determined in step S120 are "hit-and-run case" and "traffic accident dispute case". Suppose further that the pre-judged case type received by the server in step S110 is "traffic accident dispute case". The pre-judged case type is then the same as one of the two reference case types.
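The comparison in step S130 reduces to a membership test over the reference case types. A minimal Python sketch (the function name and case-type strings are illustrative, not from the patent):

```python
def matches_reference(prejudged_type, reference_types):
    """Step S130: does the officer's pre-judged case type agree with
    at least one server-determined reference case type?"""
    return prejudged_type in reference_types

# Example from the description: two reference types were determined
# and the officer entered "traffic accident dispute case".
references = ["hit-and-run case", "traffic accident dispute case"]
print(matches_reference("traffic accident dispute case", references))  # True
print(matches_reference("dangerous driving case", references))         # False
```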
Step S140: and under the condition that the pre-judged case type is the same as one reference case type, the server side determines the pre-judged case type as a final case type, establishes a workflow related to the final case type and sends the information of the workflow to the law enforcement terminal.
In a specific implementation, the server sets a corresponding workflow for each case type in advance, according to the law enforcement procedure prescribed by laws and regulations, and stores the workflow for each case type. For ease of understanding, taking the "traffic accident dispute case" as a simple example, the workflow preset for it by the server comprises the following steps in order: (1) ask whether both parties accept mediation; (2) mediate if both parties accept; (3) draw up the accident confirmation document if both parties reach an agreement; (4) have both parties sign the document; and (5) deliver the document to both parties on site.
Since the law enforcement procedures prescribed by laws and regulations differ across case types, the workflows preset for the various case types also differ.
Since the server has set and stored a corresponding workflow for each case type in advance, when executing step S140 it can directly start the workflow corresponding to the final case type, thereby establishing the workflow related to the final case type. The server also sends the workflow information to the law enforcement terminal, so that the terminal displays the workflow to the law enforcement personnel, who can then enforce the law by following it.
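The workflow lookup described above can be sketched as a simple registry keyed by case type; the names here are hypothetical, and the step texts paraphrase the "traffic accident dispute case" example from the description:

```python
# Hypothetical server-side registry of pre-stored workflows, one per
# case type, as described for step S140.
WORKFLOWS = {
    "traffic accident dispute case": [
        "ask whether both parties accept mediation",
        "mediate if both parties accept",
        "draw up the accident confirmation document if an agreement is reached",
        "have both parties sign the document",
        "deliver the document to both parties on site",
    ],
}

def establish_workflow(final_case_type):
    """Start the workflow pre-stored for the final case type and return
    its step list (to be sent to the law enforcement terminal)."""
    if final_case_type not in WORKFLOWS:
        raise ValueError("no workflow configured for %r" % final_case_type)
    return WORKFLOWS[final_case_type]
```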
By executing the method for assisting law enforcement comprising steps S110 to S140, when enforcing the law on site, law enforcement personnel can collect media data of the case scene through the law enforcement terminal and, based on their own judgment, input a pre-judged case type into the terminal. The terminal sends the collected media data and the pre-judged case type to the server. The server can then automatically determine, from the media data, at least one reference case type that most likely fits the case scene, and judge whether the officer's pre-judged case type is the same as one of the reference case types. If it is, the officer's judgment is consistent with the server's automatic judgment, and the pre-judged case type is determined as the final case type. This ensures that the case type is accurately grasped and avoids an error affecting the entire law enforcement flow due to an improperly determined case type.
On the other hand, the server also establishes the workflow related to the final case type and sends the workflow information to the law enforcement terminal, so that law enforcement personnel can carry out law enforcement operations by following the workflow displayed on the terminal, avoiding loopholes in the law enforcement process.
In addition, after step S130 is executed, if the pre-judged case type differs from all of the reference case types, the server may further send a prompt message to the law enforcement terminal, prompting it to re-capture media data.
The law enforcement terminal can then re-collect media data, and the law enforcement personnel can input a newly determined pre-judged case type. The terminal sends the re-collected media data and the new pre-judged case type to the server, and the server executes step S110 again.
Referring to fig. 2, fig. 2 is a flowchart for determining a type of a reference case according to an embodiment of the present invention. Wherein the media data includes video data and audio data. As shown in fig. 2, the determination process includes the following steps:
step S121: and the server side determines a plurality of candidate case types according to the video data.
In a specific implementation, step S121 may include the following sub-steps:
substep S121-1: the server acquires a plurality of key frames from the video data, wherein the plurality of key frames comprise at least one of the following: the method comprises the steps of obtaining a first frame image of video data and a frame image corresponding to a shooting angle of which the shooting duration exceeds a preset duration in the video data.
Considering that when a law enforcement officer holds a law enforcement terminal to shoot video data of a case site, the law enforcement officer usually aims at a target object (such as a damaged vehicle, a wounded person lying on the ground, and the like) of the case site and then starts the shooting function of the law enforcement terminal. Therefore, the server can take the first frame image of the video data as a key frame and extract the frame image.
In addition, considering that law enforcement officers usually aim at key information of cases during shooting and keep shooting angles to shoot videos for a long time, the server can take a frame of image corresponding to the shooting angle with the shooting duration exceeding the preset duration in the video data as a key frame and extract the key frame.
In particular, the law enforcement terminal may support a "marking" function. While shooting with the terminal, whenever the law enforcement personnel need to change the shooting angle, they can press a marking key on the terminal; the terminal then inserts a marker frame between the frame captured before the key press and the frame captured after it. In this way, the frames between two marker frames are frames shot from the same shooting angle. After receiving the video data sent by the law enforcement terminal, the server can divide it into a plurality of segments according to the marker frames, each segment being a sequence of frames shot from the same angle. The server then determines the duration of each segment and extracts one frame as a key frame from every segment whose duration exceeds a preset duration (for example, 3 seconds).
If the law enforcement personnel are not skilled in using the "marking" function, they may press the marking key before the shooting angle is actually switched, or long after it has been switched, so the frames immediately before and after a marker frame may not match the segment's main shooting angle. To avoid extracting a wrong key frame, when extracting a key frame from a segment whose duration exceeds the preset duration, the server may therefore take the frame at the middle of the segment as the key frame.
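The segment-splitting and middle-frame rule of sub-step S121-1 can be sketched as follows. This treats marker frames simply as segment boundaries and works on frame indices; the frame rate and duration threshold are illustrative assumptions, not values from the patent:

```python
def extract_keyframes(num_frames, marker_indices, fps=25, min_seconds=3.0):
    """Return the indices of key frames per sub-step S121-1: the first
    frame of the video, plus the middle frame of every same-angle
    segment (delimited by marker frames) whose duration exceeds
    `min_seconds`."""
    boundaries = [0] + sorted(marker_indices) + [num_frames]
    keys = {0}  # the first frame is always a key frame
    min_len = int(min_seconds * fps)
    for start, end in zip(boundaries, boundaries[1:]):
        if end - start >= min_len:
            keys.add(start + (end - start) // 2)  # middle frame of segment
    return sorted(keys)

# A 10-second clip at 25 fps with marker frames at indices 100 and 130:
# the 30-frame middle segment is too short to contribute a key frame.
print(extract_keyframes(250, [100, 130]))  # [0, 50, 190]
```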
Substep S121-2: the server performs target detection on each key frame to obtain a plurality of target object images, each target object image corresponding to a target object name.
In a specific implementation, a plurality of sample images, i.e., images from case scenes, may be collected in advance. The sample images are then labeled; for example, for a sample image from a car accident scene that includes a damaged vehicle and a fallen victim, the positions of the damaged vehicle and of the fallen victim in the image can be labeled. A preset model is then trained with the labeled sample images; the preset model may be a conventional target detection model, such as an R-CNN, Fast R-CNN, or Faster R-CNN model. Finally, the trained model is used as the target detection model and stored persistently.
When performing the above sub-step S121-2, each key frame may be input in turn into the pre-trained target detection model, which outputs a detection image for each key frame. Each detection image marks the position of every target object image together with the corresponding target object name.
For convenience of understanding, it is assumed that a key frame is an image captured in a place corresponding to a car accident, and the key frame is input into a target detection model, and a detection image output by the target detection model includes a plurality of rectangular frames, each of which respectively defines an image of a target object, such as an image of a damaged vehicle, an image of a standing party, and the like. In addition, characters are attached to the sides of each rectangular frame to represent the names of the objects corresponding to the object images.
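The per-frame output of sub-step S121-2 can be reduced to a list of target object names. A sketch of that post-processing, assuming the detector emits dicts with a box, a name, and a score (the score threshold is an added assumption, not from the patent):

```python
def target_names(detections, score_threshold=0.5):
    """Collect the target object name attached to each detected box,
    keeping only detections above the confidence threshold.
    `detections` mimics what an R-CNN-style detector might emit."""
    return [d["name"] for d in detections if d["score"] >= score_threshold]

# Hypothetical detections for the car-accident key frame described above:
dets = [
    {"box": (10, 20, 200, 180), "name": "damaged vehicle", "score": 0.92},
    {"box": (220, 40, 300, 200), "name": "standing party", "score": 0.81},
    {"box": (5, 5, 30, 30), "name": "traffic cone", "score": 0.31},
]
print(target_names(dets))  # ['damaged vehicle', 'standing party']
```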
Substep S121-3: and the server side determines a plurality of candidate case types from the plurality of case types according to a plurality of target object names corresponding to the plurality of target object images and a plurality of preset target object names corresponding to the plurality of case types respectively.
In a specific implementation, a plurality of preset target object names can be preset for each case type. For convenience of understanding, the following preset target names are exemplarily set in advance for the "hit-and-run case": damaged vehicles, victims of falling into the ground, ambulances, traffic cones, medical carts, doctors, etc. The following preset target names are preset for the traffic accident dispute cases: damaged vehicles, parties standing, two vehicles in close proximity, etc. The following preset target object names are preset for the dangerous driving case in advance: a driver in a driver's seat, a vehicle that is not in motion, an alcohol detector screen, etc.
First embodiment of substep S121-3:
the server side converts a plurality of target object names corresponding to the target object images into first word vectors, and respectively converts a plurality of preset target object names corresponding to the case types into second word vectors; the server calculates a vector distance for a second word vector and the first word vector corresponding to each case type in the case types, and determines the confidence coefficient of the case type according to the vector distance, wherein the larger the vector distance is, the smaller the confidence coefficient is; and the server side determines a plurality of candidate case types from the plurality of case types according to the respective confidence degrees of the plurality of case types.
Specifically, the plurality of target names determined in the substep S121-2 may be input into a word vector transformation model (e.g., the continuous bag of words model CBOW) trained in advance, so as to obtain a first word vector output by the word vector transformation model. And aiming at each case type, inputting a plurality of preset target object names corresponding to the case type into the word vector conversion model to obtain a second word vector which is output by the word vector conversion model for the case type.
Then, for each case type, the vector distance between the first word vector and the second word vector of the case type is calculated, and the reciprocal of the vector distance is taken as the confidence corresponding to the case type. A higher confidence for a case type indicates a higher probability that the case type matches the case scene. For ease of understanding, the server calculates the confidence corresponding to each case type in turn for case types such as the "hit-and-run case", the "traffic accident dispute case", and the "dangerous driving case". Assuming that, when the confidence is calculated for the "hit-and-run case", the vector distance between the second word vector of that case type and the first word vector is 0.68, the confidence of that case type is equal to 1/0.68, i.e., approximately 1.47.
After the confidences corresponding to the various case types are calculated, the 3 case types with the highest confidences may be selected from the various case types as candidate case types. Alternatively, the case types whose confidences are higher than a preset confidence threshold may be selected from the various case types as candidate case types.
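The confidence calculation and candidate selection described in this embodiment can be sketched as follows (an illustrative sketch only: the choice of Euclidean distance and the helper names are assumptions — the embodiment requires only some vector distance whose reciprocal serves as the confidence):

```python
import math

def euclidean(u, v):
    """Vector distance between two word vectors of equal dimension."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def confidence(first_vec, second_vec):
    """Reciprocal of the vector distance: the larger the distance,
    the smaller the confidence."""
    return 1.0 / euclidean(first_vec, second_vec)

def select_candidates(confidences, k=3, threshold=None):
    """Keep either the k case types with the highest confidence, or all
    case types whose confidence exceeds a preset threshold."""
    if threshold is not None:
        return [t for t, c in confidences.items() if c > threshold]
    return sorted(confidences, key=confidences.get, reverse=True)[:k]
```

With the example numbers from the text, a vector distance of 0.68 yields a confidence of 1/0.68, i.e., approximately 1.47.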
Second embodiment of substep S121-3:
For each case type, the plurality of target object names corresponding to the plurality of target object images are used as a first group of names, the plurality of preset target object names of the case type are used as a second group of names, and the number of pairs of identical names between the first group and the second group is then determined. If the number of identical-name pairs exceeds a preset value (e.g., 3 pairs), the case type is determined as a candidate case type.
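This name-matching embodiment can be sketched as follows (hypothetical Python; the function names and the multiset treatment of duplicate names are assumptions of this sketch):

```python
from collections import Counter

def count_matching_pairs(detected_names, preset_names):
    """Number of pairs of identical names between the two groups;
    each occurrence of a name can be paired at most once."""
    detected, preset = Counter(detected_names), Counter(preset_names)
    return sum(min(count, preset[name]) for name, count in detected.items())

def is_candidate(detected_names, preset_names, preset_value=3):
    """A case type becomes a candidate when the number of identical-name
    pairs exceeds the preset value (e.g. 3 pairs)."""
    return count_matching_pairs(detected_names, preset_names) > preset_value
```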
Step S122: the server side obtains a plurality of preset keywords corresponding to each candidate case type in the candidate case types, and determines the confidence degree of the candidate case type according to the audio data and the preset keywords corresponding to the candidate case type.
In a specific implementation, step S122 may include the following sub-steps:
substep S122-1: and the server performs voice recognition on the audio data to obtain a text field corresponding to the audio data, and extracts a plurality of keywords corresponding to the audio data from the text field.
In a specific implementation, a speech recognition algorithm may first be applied to the audio data to obtain the text field corresponding to the audio data. A word segmentation algorithm is then applied to the text field to obtain a plurality of segmented words. Finally, stop words are removed from the segmented words, and the remaining segmented words are used as keywords.
The speech recognition algorithms that may be employed include, but are not limited to: algorithms based on dynamic time warping (DTW), methods based on the hidden Markov model (HMM) as a parametric model, methods based on vector quantization (VQ) as a nonparametric model, and the like. The word segmentation algorithms that may be employed include, but are not limited to: the maximum matching word segmentation algorithm, the shortest path word segmentation algorithm, word segmentation algorithms based on an N-gram model, neural network word segmentation algorithms, and the like.
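As one concrete illustration of the listed options, a forward maximum-matching word segmenter can be sketched in a few lines (hypothetical Python; the vocabulary, the greedy forward direction, and the single-character fallback are assumptions — the embodiment permits any of the algorithms named above):

```python
def max_match(text, vocab, max_len=4):
    """Greedy forward maximum matching: at each position take the longest
    vocabulary word; fall back to a single character when nothing matches."""
    words, i = [], 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            piece = text[i:i + length]
            if length == 1 or piece in vocab:
                words.append(piece)
                i += length
                break
    return words
```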
When stop words are removed from the plurality of segmented words, the stop-word list corresponding to each candidate case type may first be obtained, and the stop-word lists may then be merged into a total stop-word list. Finally, each segmented word is compared against the total stop-word list, and any segmented word that is a stop word in the total stop-word list may be removed.
For ease of understanding, assume by way of example that the stop-word list corresponding to the "hit-and-run case" includes interjections and sentence particles, together with common scene words such as "police", "traffic police", "you", "I", "he", "how", "hurry", "make a call", and "late". These stop words frequently occur at the scene of a hit-and-run case, but they do not constitute key information for discriminating the case type.
Assume that the stop-word list corresponding to the "traffic accident dispute case" includes interjections and sentence particles, together with common scene words such as "police", "traffic police", "you", "I", "he", "make a call", "OK", "go home first", "sorry", and "it doesn't matter". These stop words frequently occur at the scene of a traffic accident dispute case, but they do not constitute key information for discriminating the case type.
Suppose the candidate case types determined through the above step S121 include the "hit-and-run case" and the "traffic accident dispute case". The stop-word lists corresponding to these two candidate case types are merged to obtain a total stop-word list, which contains the union of the stop words of both types. Each of the plurality of segmented words is then compared against the total stop-word list, and any segmented word that appears in the total stop-word list is removed. Finally, the remaining segmented words are used as keywords.
Because the total stop word list comprises all stop words corresponding to each candidate case type, the stop words can be removed to the maximum extent, and the purity of the extracted keywords is improved.
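The merging and filtering described in this substep may be sketched as follows (illustrative Python only; the function names are assumptions of this sketch):

```python
def merge_stopword_lists(per_type_lists):
    """Union of the stop-word lists of all candidate case types,
    i.e. the total stop-word list."""
    total = set()
    for stopwords in per_type_lists:
        total |= set(stopwords)
    return total

def extract_keywords(segmented_words, per_type_lists):
    """Remove every segmented word found in the total stop-word list;
    the remaining segmented words are the keywords."""
    total = merge_stopword_lists(per_type_lists)
    return [word for word in segmented_words if word not in total]
```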
In the above embodiment, the stop-word lists corresponding to the candidate case types are merged into a total stop-word list, stop words are then removed from the plurality of segmented words based on the total stop-word list, and finally the remaining segmented words are used as keywords. The keywords obtained in this embodiment correspond to all of the candidate case types collectively.
In other embodiments of the present invention, keywords corresponding to each candidate case type may be extracted separately from the plurality of segmented words of the audio data. In a specific implementation, after the plurality of segmented words are obtained using the speech recognition algorithm and the word segmentation algorithm, for each candidate case type, the stop words in the stop-word list corresponding to that candidate case type are removed from the plurality of segmented words, and the remaining segmented words are used as the keywords corresponding to that candidate case type. In this way, corresponding keywords are extracted separately for each candidate case type. In these embodiments, when the following substep S122-2 is performed, the server specifically obtains, for each candidate case type of the plurality of candidate case types, the plurality of preset keywords corresponding to that candidate case type, and determines the confidence of the candidate case type according to the plurality of preset keywords corresponding to that candidate case type and the keywords extracted from the audio data for that candidate case type.
Substep S122-2: the server side obtains a plurality of preset keywords corresponding to each candidate case type in the candidate case types, and determines the confidence degree of the candidate case type according to the keywords corresponding to the audio data and the preset keywords corresponding to the candidate case types.
In a specific implementation, a plurality of preset keywords may be set in advance for each case type. For ease of understanding, the following preset keywords are set in advance for the "hit-and-run case" by way of example: injury, crash, run over, speeding, running a red light, sudden braking, violation, and the like. The following preset keywords are set in advance for the "traffic accident dispute case": scrape, knock down, insurance, compensation, mediation, negotiation, repair, and the like. The following preset keywords are set in advance for the "dangerous driving case": test, blow, drinking, alcohol, over the limit, get out of the vehicle, detain, and the like.
First embodiment of substep S122-2:
the server side converts a plurality of keywords corresponding to the audio data into a third word vector, and converts a plurality of preset keywords corresponding to each candidate case type into a fourth word vector for each candidate case type; and the server calculates the vector distance between the third word vector and the fourth word vector, and determines the confidence coefficient of the candidate case type according to the vector distance, wherein the greater the vector distance, the smaller the confidence coefficient.
Specifically, the plurality of keywords extracted in substep S122-1 may be input into a word vector transformation model trained in advance (e.g., the continuous bag-of-words model CBOW) to obtain the third word vector output by the word vector transformation model. For each candidate case type, the plurality of preset keywords corresponding to that candidate case type are input into the word vector transformation model to obtain the fourth word vector output by the word vector transformation model for that candidate case type.
Then, for each candidate case type, the vector distance between the third word vector and the fourth word vector of the candidate case type is calculated, and the reciprocal of the vector distance is taken as the confidence corresponding to the candidate case type. A higher confidence for a candidate case type indicates a higher probability that the candidate case type matches the case scene. For ease of understanding, the server calculates the confidence corresponding to each candidate case type in turn for the "hit-and-run case" and the "traffic accident dispute case". Assuming that, when the confidence is calculated for the "hit-and-run case", the vector distance between the fourth word vector of that candidate case type and the third word vector is 0.29, the confidence of that case type is equal to 1/0.29, i.e., approximately 3.45.
Second embodiment of substep S122-2:
For each candidate case type, the plurality of keywords extracted in substep S122-1 are used as a first group of keywords, the plurality of preset keywords corresponding to the candidate case type are used as a second group of keywords, and the number of pairs of identical keywords between the first group and the second group is then determined. Finally, the confidence of the candidate case type is determined according to the number of identical-keyword pairs. Illustratively, the number of identical-keyword pairs may be used directly as the confidence of the candidate case type. Assuming that there are 5 pairs of identical keywords between the first group and the second group, the confidence of the candidate case type is equal to 5.
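This keyword-pair embodiment may be sketched as follows (hypothetical Python; treating the keyword groups as sets assumes there are no duplicate keywords within a group):

```python
def keyword_pair_confidence(extracted_keywords, preset_keywords):
    """Number of pairs of identical keywords between the extracted keywords
    and the preset keywords, used directly as the confidence."""
    return len(set(extracted_keywords) & set(preset_keywords))
```

With 5 shared keywords between the two groups, the confidence equals 5, matching the example in the text.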
Step S123: and the server side determines at least one reference case type from the candidate case types according to the respective confidence degrees of the candidate case types.
In a specific implementation, the 2 candidate case types with the highest confidences may be selected from the candidate case types as reference case types. Alternatively, the candidate case types whose confidences are higher than a preset confidence threshold may be selected from the candidate case types as reference case types.
By determining the reference case type in the manner of steps S121 to S123 above, a plurality of candidate case types are first determined from the plurality of case types according to the video data, and at least one reference case type is then further determined from the plurality of candidate case types according to the audio data. First, this amounts to double screening of the multiple case types using the video data and the audio data in succession, so that the reference case type is accurately selected, which helps reduce case-type determination errors. Second, determining the candidate case types from the plurality of case types according to the video data first narrows the selection range, and the reference case type is then determined from this smaller set of candidate case types according to the audio data, which improves the efficiency of determining the reference case type. Third, since video data is more intuitive than audio data, using the video data to screen out most case types allows the candidate case types to be retained more accurately.
Referring to fig. 3, fig. 3 is a schematic diagram of training a word vector transformation model according to an embodiment of the present invention. As shown in fig. 3, a plurality of pairs of training data may first be collected in advance, each pair of training data including two groups of keywords and a confidence label; the confidence label is not shown in fig. 3 to simplify the drawing.
Then, for each pair of training data, the first group of keywords in the pair is input into a preset model (e.g., the continuous bag-of-words model CBOW) to obtain a first sample word vector, and the second group of keywords in the pair is input into the preset model to obtain a second sample word vector. The vector distance between the first sample word vector and the second sample word vector is then calculated, and a confidence is calculated according to the vector distance. Finally, a loss value is determined according to the calculated confidence and the confidence label of the pair of training data, and the preset model is updated according to the loss value. To calculate the loss value LOSS, the calculated confidence C1 may be normalized with a Sigmoid function to obtain a normalized confidence C1', and the confidence label C2 may be normalized with the Sigmoid function to obtain a normalized confidence C2'. The loss value is then calculated as LOSS = -ln(1 - |C1' - C2'|).
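The loss computation described above can be written out directly (illustrative Python; the Sigmoid normalization and the formula LOSS = -ln(1 - |C1' - C2'|) follow the text, while the function names are assumptions of this sketch):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def training_loss(confidence_c1, label_c2):
    """Normalize the computed confidence C1 and the confidence label C2
    with Sigmoid, then take LOSS = -ln(1 - |C1' - C2'|). The loss is 0 when
    the normalized values agree exactly and grows as they diverge."""
    c1_norm, c2_norm = sigmoid(confidence_c1), sigmoid(label_c2)
    return -math.log(1.0 - abs(c1_norm - c2_norm))
```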
And finally, taking the preset model which is updated for many times as a word vector conversion model, and persistently storing the word vector conversion model.
Based on the same inventive concept, one embodiment of the application provides a device for assisting law enforcement. Referring to fig. 4, fig. 4 is a schematic diagram of a law enforcement assisting device provided at a server in a law enforcement assisting system according to an embodiment of the present invention, the law enforcement assisting system further includes a law enforcement terminal, and the law enforcement terminal is in communication connection with the server. As shown in fig. 4, the apparatus includes:
a receiving module 41, configured to receive media data sent by the law enforcement terminal, and receive a pre-judged case type sent by the law enforcement terminal, where the media data is media data collected by the law enforcement terminal for a case field, and the pre-judged case type is a case type input by a law enforcement officer to the law enforcement terminal;
a reference case type determining module 42, configured to determine at least one reference case type according to the media data;
a judging module 43, configured to judge whether the pre-judged case type is the same as a reference case type;
a workflow establishing module 44, configured to determine the pre-judged case type as a final case type, establish a workflow related to the final case type, and send information of the workflow to the law enforcement terminal, when the pre-judged case type is the same as a reference case type.
Optionally, the apparatus further comprises:
and the prompt information sending module is used for sending prompt information to the law enforcement terminal under the condition that the type of the pre-judged case is different from the types of the plurality of reference cases, and the prompt information is used for prompting the law enforcement terminal to shoot media data again.
Optionally, the media data comprises video data and audio data; the reference case type determining module includes:
the candidate case type determining submodule is used for determining a plurality of candidate case types according to the video data;
the confidence coefficient determining sub-module is used for acquiring a plurality of preset keywords corresponding to the candidate case type aiming at each candidate case type in the candidate case types and determining the confidence coefficient of the candidate case type according to the audio data and the plurality of preset keywords corresponding to the candidate case type;
and the reference case type determining submodule is used for determining at least one reference case type from the candidate case types according to the respective confidence degrees of the candidate case types.
Optionally, the candidate case type determining sub-module includes:
a key frame acquisition unit, configured to acquire a plurality of key frames from the video data, where the plurality of key frames include at least one of: a first frame image of the video data and a frame image corresponding to a shooting angle of which the shooting duration exceeds a preset duration in the video data;
the target detection unit is used for carrying out target detection on each key frame to obtain a plurality of target object images, and each target object image corresponds to one target object name;
and the candidate case type determining unit is used for determining a plurality of candidate case types from the plurality of case types according to a plurality of target object names corresponding to the plurality of target object images and a plurality of preset target object names corresponding to the plurality of case types respectively.
Optionally, the candidate case type determining unit is specifically configured to convert a plurality of target object names corresponding to the plurality of target object images into a first word vector, and convert a plurality of preset target object names corresponding to the plurality of case types into second word vectors, respectively; aiming at each case type in the multiple case types, calculating a vector distance for a second word vector and the first word vector corresponding to the case type, and determining the confidence coefficient of the case type according to the vector distance, wherein the greater the vector distance, the smaller the confidence coefficient; and determining a plurality of candidate case types from the plurality of case types according to the respective confidence degrees of the plurality of case types.
Optionally, the confidence determination submodule includes:
the keyword extraction unit is used for carrying out voice recognition on the audio data to obtain a text field corresponding to the audio data, and extracting a plurality of keywords corresponding to the audio data from the text field;
and the confidence determining unit is used for acquiring a plurality of preset keywords corresponding to the candidate case type aiming at each candidate case type in the candidate case types, and determining the confidence of the candidate case type according to the plurality of keywords corresponding to the audio data and the plurality of preset keywords corresponding to the candidate case type.
Optionally, the confidence determining unit is specifically configured to convert a plurality of keywords corresponding to the audio data into a third word vector, and convert a plurality of preset keywords corresponding to the candidate case type into a fourth word vector; calculating the vector distance between the third word vector and the fourth word vector, and determining the confidence coefficient of the candidate case type according to the vector distance, wherein the greater the vector distance, the smaller the confidence coefficient.
Based on the same inventive concept, another embodiment of the present application provides a readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps in the method of assisting law enforcement as described in any of the above embodiments of the present application.
Based on the same inventive concept, another embodiment of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, and when the processor executes the computer program, the electronic device implements the steps of the method for assisting law enforcement according to any of the above embodiments of the present application.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method, the device, the server and the readable storage medium for assisting law enforcement provided by the present application are introduced in detail above, and a specific example is applied in the present application to explain the principle and the implementation of the present application, and the description of the above embodiment is only used to help understanding the method and the core idea of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. A method of assisting law enforcement, the method being applied to a server in a system for assisting law enforcement, the system further comprising a law enforcement terminal, the law enforcement terminal being in communication with the server, the method comprising:
the server receives media data sent by the law enforcement terminal and receives a pre-judged case type sent by the law enforcement terminal, wherein the media data is the media data collected by the law enforcement terminal aiming at a case field, and the pre-judged case type is the case type input by law enforcement personnel to the law enforcement terminal;
the server side determines at least one reference case type according to the media data;
the server side judges whether the type of the pre-judged case is the same as that of a reference case or not;
and under the condition that the pre-judged case type is the same as one reference case type, the server side determines the pre-judged case type as a final case type, establishes a workflow related to the final case type and sends the information of the workflow to the law enforcement terminal.
2. The method of claim 1, further comprising:
and under the condition that the type of the pre-judging case is different from the types of a plurality of reference cases, the server side sends prompt information to the law enforcement terminal, wherein the prompt information is used for prompting the law enforcement terminal to shoot media data again.
3. The method of claim 1, wherein the media data comprises video data and audio data; the step that the server side determines at least one reference case type according to the media data comprises the following steps:
the server side determines a plurality of candidate case types according to the video data;
the server side acquires a plurality of preset keywords corresponding to each candidate case type in the candidate case types, and determines the confidence of the candidate case type according to the audio data and the preset keywords corresponding to the candidate case type;
and the server side determines at least one reference case type from the candidate case types according to the respective confidence degrees of the candidate case types.
4. The method according to claim 3, wherein the step of the server determining a plurality of candidate case types according to the video data comprises:
the server acquires a plurality of key frames from the video data, wherein the plurality of key frames comprise at least one of the following: a first frame image of the video data, and one frame image of a video segment shot at a same shooting angle in the video data, wherein a duration for which the video segment is shot at the same shooting angle exceeds a preset duration;
the server side carries out target detection on each key frame to obtain a plurality of target object images, and each target object image corresponds to one target object name;
and the server side determines a plurality of candidate case types from the plurality of case types according to a plurality of target object names corresponding to the plurality of target object images and a plurality of preset target object names corresponding to the plurality of case types respectively.
5. The method according to claim 4, wherein the step of the server determining a plurality of candidate case types from the plurality of case types according to a plurality of target object names corresponding to the plurality of target object images and a plurality of preset target object names corresponding to each of the plurality of case types comprises:
the server side converts a plurality of target object names corresponding to the target object images into first word vectors, and respectively converts a plurality of preset target object names corresponding to the case types into second word vectors;
the server calculates a vector distance for a second word vector and the first word vector corresponding to each case type in the case types, and determines the confidence coefficient of the case type according to the vector distance, wherein the larger the vector distance is, the smaller the confidence coefficient is;
and the server side determines a plurality of candidate case types from the plurality of case types according to the respective confidence degrees of the plurality of case types.
6. The method according to claim 3, wherein the step of the server obtaining a plurality of preset keywords corresponding to each candidate case type among the plurality of candidate case types, and determining the confidence level of the candidate case type according to the audio data and the plurality of preset keywords corresponding to the candidate case type includes:
the server performs voice recognition on the audio data to obtain a text field corresponding to the audio data, and extracts a plurality of keywords corresponding to the audio data from the text field;
the server side obtains a plurality of preset keywords corresponding to each candidate case type in the candidate case types, and determines the confidence degree of the candidate case type according to the keywords corresponding to the audio data and the preset keywords corresponding to the candidate case types.
7. The method according to claim 6, wherein the step of determining the confidence level of the candidate case type according to the plurality of keywords corresponding to the audio data and the plurality of preset keywords corresponding to the candidate case type comprises:
the server side converts a plurality of keywords corresponding to the audio data into a third word vector, and converts a plurality of preset keywords corresponding to the candidate case type into a fourth word vector;
and the server calculates the vector distance between the third word vector and the fourth word vector, and determines the confidence coefficient of the candidate case type according to the vector distance, wherein the greater the vector distance, the smaller the confidence coefficient.
8. An apparatus for assisting law enforcement, the apparatus comprising a server disposed in an auxiliary law enforcement system, the auxiliary law enforcement system further comprising a law enforcement terminal, the law enforcement terminal and the server being communicatively coupled, the apparatus comprising:
a receiving module, configured to receive media data sent by the law enforcement terminal and to receive a pre-judged case type sent by the law enforcement terminal, wherein the media data is media data collected by the law enforcement terminal at the case scene, and the pre-judged case type is a case type input into the law enforcement terminal by law enforcement personnel;
a reference case type determining module, configured to determine at least one reference case type according to the media data;
a judging module, configured to judge whether the pre-judged case type is the same as a reference case type;
and a workflow establishing module, configured to determine the pre-judged case type as the final case type when the pre-judged case type is the same as a reference case type, establish a workflow related to the final case type, and send information of the workflow to the law enforcement terminal.
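The judging and workflow-establishing modules of claim 8 reduce to a simple confirmation step. The sketch below is illustrative only: the claim does not state what happens when the pre-judged type matches no reference type (returning `None` is an assumption), and the `establish_workflow` fields are hypothetical, not taken from the patent:

```python
def confirm_case_type(prejudged_type, reference_types):
    """If the officer's pre-judged case type matches any reference case
    type derived from the media data, it becomes the final case type.
    Mismatch behaviour is unspecified in the claim; None is assumed."""
    if prejudged_type in reference_types:
        return prejudged_type
    return None

def establish_workflow(final_case_type):
    """Hypothetical workflow record to send back to the law enforcement
    terminal; the field names and steps are illustrative assumptions."""
    return {
        "case_type": final_case_type,
        "steps": ["collect evidence", "record statements", "file report"],
    }

final = confirm_case_type("traffic accident", ["traffic accident", "theft"])
if final is not None:
    workflow = establish_workflow(final)
```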
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. A server comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, carries out the steps of the method according to any one of claims 1 to 7.
CN202010312809.3A 2020-04-20 2020-04-20 Method, device, server and readable storage medium for assisting law enforcement Active CN111209902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010312809.3A CN111209902B (en) 2020-04-20 2020-04-20 Method, device, server and readable storage medium for assisting law enforcement


Publications (2)

Publication Number Publication Date
CN111209902A CN111209902A (en) 2020-05-29
CN111209902B true CN111209902B (en) 2020-08-21

Family

ID=70787286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010312809.3A Active CN111209902B (en) 2020-04-20 2020-04-20 Method, device, server and readable storage medium for assisting law enforcement

Country Status (1)

Country Link
CN (1) CN111209902B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112613396B (en) * 2020-12-19 2022-10-25 河北志晟信息技术股份有限公司 Task emergency degree processing method and system
CN113435798B (en) * 2021-08-26 2021-12-07 北京交研智慧科技有限公司 Law enforcement training method, device and readable storage medium
CN113824922B (en) * 2021-11-02 2022-02-25 共道网络科技有限公司 Audio and video stream control method and device based on internet court trial

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101799893A (en) * 2009-12-29 2010-08-11 江苏神彩科技发展有限公司 Environment-friendly mobile law enforcement system
CN106649443A (en) * 2016-09-18 2017-05-10 江苏智通交通科技有限公司 Method and system for archival management of video data of law enforcement recorder
CN109657025A (en) * 2018-12-17 2019-04-19 武汉星视源科技有限公司 Inspection of the scene of a crime Information Collection System and inspection of the scene of a crime management system
CN110400245A (en) * 2019-08-02 2019-11-01 盐城正邦环保科技有限公司 A kind of system and data correlation method of auto-associating law-enforcing recorder data
CN110459060A (en) * 2019-08-19 2019-11-15 公安部交通管理科学研究所 A classification and push method for suspected vehicle information
CN110472427A (en) * 2019-08-02 2019-11-19 重庆康格瑞物联网技术有限公司 A kind of whole with no paper agriculture site inspection or enforcement approach and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10535145B2 (en) * 2017-07-14 2020-01-14 Motorola Solutions, Inc. Context-based, partial edge intelligence facial and vocal characteristic recognition
US10521704B2 (en) * 2017-11-28 2019-12-31 Motorola Solutions, Inc. Method and apparatus for distributed edge learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Traffic Police Law Enforcement Record Information Management System; Guo Congrong; China Master's Theses Full-text Database, Information Science and Technology Series; 20180415 (No. 04); I138-1182 *
Application of Body-Worn Law Enforcement Recorders in Crime Scene Investigation; Shen Minghui et al.; Police Station Work; 20160731 (No. 7); 68-70 *


Similar Documents

Publication Publication Date Title
CN111209902B (en) Method, device, server and readable storage medium for assisting law enforcement
CN109902575B (en) Anti-walking method and device based on unmanned vehicle and related equipment
CN107977776B (en) Information processing method, device, server and computer readable storage medium
CN110704682B (en) Method and system for intelligently recommending background music based on video multidimensional characteristics
US20200007638A1 (en) Method and apparatus for pushing information
CN106815574B (en) Method and device for establishing detection model and detecting behavior of connecting and calling mobile phone
CN109800633A (en) A kind of illegal judgment method of Manpower Transportation, device and electronic equipment
CN109993044B (en) Telecommunications fraud identification system, method, apparatus, electronic device, and storage medium
CN111612104B (en) Vehicle loss assessment image acquisition method, device, medium and electronic equipment
CN108323209B (en) Information processing method, system, cloud processing device and computer storage medium
US11961339B2 (en) Collision analysis platform using machine learning to reduce generation of false collision outputs
CN108109348A (en) Hierarchical alarm method and device
CN111160697A (en) Method for judging on duty of bus driver based on pattern recognition and related equipment
WO2017005071A1 (en) Communication monitoring method and device
CN116561668A (en) Chat session risk classification method, device, equipment and storage medium
US20230078210A1 (en) Method and system for asynchronous reporting of emergency incidents
CN111078927A (en) Method, device and storage medium for identifying driver identity based on family tree data
CN111192150B (en) Method, device, equipment and storage medium for processing vehicle danger-giving agent service
CN113342978A (en) City event processing method and device
CN113393643B (en) Abnormal behavior early warning method and device, vehicle-mounted terminal and medium
CN116208802A (en) Video data multi-mode compliance detection method, storage medium and compliance detection device
CN117993988A (en) Extraction order processing
CN111860090B (en) Vehicle verification method and device
CN113989888A (en) Method for judging driver's own driving, computer-readable storage medium and photographing apparatus
CN112668895A (en) Digital resource quality supervision system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: The invention relates to a method, a device, a server and a readable storage medium for assisting law enforcement

Effective date of registration: 20201230

Granted publication date: 20200821

Pledgee: China Minsheng Banking Corp Chengdu branch

Pledgor: CHENGDU PVIRTECH Co.,Ltd.

Registration number: Y2020980010152

PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20220628

Granted publication date: 20200821

Pledgee: China Minsheng Banking Corp Chengdu branch

Pledgor: CHENGDU PVIRTECH Co.,Ltd.

Registration number: Y2020980010152