Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The following describes a vehicle track recognition method, apparatus, electronic device, and storage medium of the embodiments of the present disclosure with reference to the accompanying drawings.
Fig. 1 is a flow chart of a vehicle track recognition method according to an embodiment of the disclosure.
As shown in fig. 1, the method comprises the steps of:
Step 101, obtaining a plurality of vehicle track segments monitored by each camera in two adjacent cameras.
In this embodiment, two cameras with an adjacent relationship are two cameras such that a vehicle can travel from the road segment monitored by one of them to the road segment monitored by the other. That is, cameras disposed on two road segments having a road topology relationship, or communication relationship, are referred to as two cameras with an adjacent relationship.
In this embodiment, the track segments of the vehicles may be generated in advance based on position information of the vehicles determined by a tracking algorithm. As one implementation manner, for the video image sequence acquired by each camera, a target detection algorithm, for example the YOLOv3 target detection algorithm, is used to detect each vehicle in the video image sequence, so as to obtain position information of a detection frame of each vehicle in each frame image of the video image sequence. The position information of each vehicle is determined based on the position information of its detection frame, and a track segment of each vehicle under the corresponding camera is generated from the position information of the vehicle in each frame image.
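The per-camera generation of track segments from per-frame detection frames can be sketched as follows. This is a minimal illustration, not the embodiment's exact pipeline: it assumes the detector emits axis-aligned boxes per frame, and a simple greedy IoU association stands in for a full tracking algorithm; the function names and threshold are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def build_track_segments(frames, iou_threshold=0.3):
    """frames: list of per-frame lists of detection boxes.
    Returns track segments, each a list of (frame_index, box)."""
    segments = []   # all segments, finished and active
    active = []     # indices into `segments` still being extended
    for t, boxes in enumerate(frames):
        next_active = []
        for box in boxes:
            # Greedily extend the active segment whose last box overlaps most.
            best, best_iou = None, iou_threshold
            for idx in active:
                cand = iou(segments[idx][-1][1], box)
                if cand > best_iou:
                    best, best_iou = idx, cand
            if best is None:
                segments.append([(t, box)])          # start a new segment
                next_active.append(len(segments) - 1)
            else:
                segments[best].append((t, box))       # extend matched segment
                next_active.append(best)
                active.remove(best)                   # one box per segment per frame
        active = next_active                          # unmatched segments end here
    return segments
```

A vehicle that appears in consecutive frames with overlapping detection frames thus yields a single segment of per-frame positions under that camera.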
As another implementation manner, for the video image sequence acquired by each camera, a target detection algorithm, for example YOLOv3, is used to detect each vehicle in the video image sequence, so as to obtain position information of a detection frame of each vehicle in the first frame image of the video image sequence. Then, based on multiple-object tracking with tracklet-plane matching (TPM), a track segment corresponding to each vehicle under each camera is determined.
The vehicle track segment and the camera have a corresponding relationship. For example, the vehicle track segment 1 is a vehicle track segment monitored by the camera M, and the vehicle track segment 2 is a vehicle track segment monitored by the camera N.
Step 102, querying a plurality of intersection setting areas set in the intersections monitored by each camera.
In this embodiment, each camera monitors a corresponding intersection, and predefined areas, that is, setting areas, are preset in each monitored intersection; the setting areas are used for identifying each intersection.
As shown in FIG. 2, taking the monitoring of intersections by the A camera as an example, the areas marked by the frames with the numbers of L1-1, L1-2, L1-3, L1-4, L1-5 and L1-6 are respectively set areas of the intersections monitored by the A camera.
The intersection setting area may be a circular area, an elliptical area, or the like, and the shape of the setting area is not limited in this embodiment.
Step 103, determining a target setting area through which each vehicle track segment passes from a plurality of intersection setting areas, wherein the target setting area comprises an entrance setting area and an exit setting area.
In this embodiment, according to the position information carried in the track segments of each vehicle and the position information corresponding to the intersection setting areas of the intersection monitored by each camera, the setting areas through which each vehicle track segment passes are determined and taken as the target setting areas. The target setting areas include an entrance setting area and an exit setting area; that is, the entrance setting area and the exit setting area through which the track segment of the vehicle passes under the corresponding camera can be determined, so that it can be determined from which setting area the vehicle enters and from which setting area the vehicle exits.
Step 104, screening the plurality of vehicle track segments monitored by the two adjacent cameras according to the communication relationship among the plurality of intersection setting areas monitored by the two adjacent cameras and the entrance setting areas and exit setting areas through which the plurality of vehicle track segments monitored by the two adjacent cameras pass, so as to obtain reserved target track segments.
In this embodiment, according to the communication relationship between the plurality of intersection setting areas monitored by the two cameras having an adjacent relationship, that is, the communication relationship between the intersection setting areas monitored by one camera and those monitored by the other camera, it is determined whether each vehicle monitored by one of the cameras can travel from its corresponding intersection setting area to an intersection setting area monitored by the other camera; for example, whether a vehicle monitored under camera A can travel from intersection 1 to one of the intersections monitored by camera B. Since the track segments of a vehicle record the position information of the vehicle in each video frame, the entrance setting area and the exit setting area of each track segment monitored under a single camera indicate whether the vehicle can reach the monitoring area of the other camera from the monitoring area of the current camera. Track segments of vehicles that cannot travel from an intersection setting area monitored by one camera to an intersection setting area monitored by the other camera are therefore screened out and deleted according to the communication relationship between the entrance setting areas and exit setting areas monitored by the two cameras. The reserved target track segments belong to the plurality of track segments monitored by the two cameras with an adjacent relationship, and only track segments of vehicles that travel between the intersections monitored by the two cameras are reserved, so that the efficiency of matching and splicing the track segments of the vehicles can be improved.
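The screening step described above can be sketched as follows, for one direction of travel (camera A to camera B). This is a minimal illustration: the mapping from segment id to (entrance, exit) areas and the connectivity table mirror the Fig. 2 example but are assumptions of this sketch, not a mandated data layout.

```python
def screen_segments(segments_a, segments_b, connected):
    """segments_*: {segment_id: (entrance_area, exit_area)} per camera.
    connected: set of (exit_area_of_A, entrance_area_of_B) pairs with a
    communication relationship between the two cameras.
    Returns the ids of the reserved target track segments per camera."""
    keep_a, keep_b = set(), set()
    for sid_a, (_, exit_a) in segments_a.items():
        for sid_b, (ent_b, _) in segments_b.items():
            # A segment pair is reserved only when the exit area under one
            # camera communicates with the entrance area under the other.
            if (exit_a, ent_b) in connected:
                keep_a.add(sid_a)
                keep_b.add(sid_b)
    return keep_a, keep_b
```

Segments whose areas have no communication relationship (e.g. a segment exiting through an area from which the other camera's monitoring area is unreachable) are simply never added to the reserved sets, i.e. they are screened out.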
Step 105, matching the target track segments to splice the matched target track segments to obtain the target track of each vehicle.
In this embodiment, matching is performed on the target track segments monitored by the two cameras. As one implementation manner, vehicle features of the vehicles to which the target track segments monitored by the two cameras belong are obtained, the matched target track segments are determined through the Hungarian matching algorithm based on the vehicle features, and the matched target track segments are spliced to obtain the target track of each vehicle.
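The feature-based matching can be sketched as follows. As a simplification, a brute-force minimum-cost assignment over permutations stands in for the Hungarian algorithm (which solves the same assignment problem far more efficiently); the cosine-distance cost and the toy feature vectors are assumptions for illustration.

```python
from itertools import permutations
from math import sqrt

def cosine_distance(u, v):
    """1 - cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def match_segments(feats_a, feats_b):
    """feats_*: {segment_id: feature_vector} for each camera.
    Returns one-to-one pairs (id_a, id_b) minimizing total cosine distance.
    Assumes len(feats_b) >= len(feats_a); brute force, so small inputs only."""
    ids_a, ids_b = list(feats_a), list(feats_b)
    best_pairs, best_cost = [], float("inf")
    for perm in permutations(ids_b, len(ids_a)):
        cost = sum(cosine_distance(feats_a[a], feats_b[b])
                   for a, b in zip(ids_a, perm))
        if cost < best_cost:
            best_cost, best_pairs = cost, list(zip(ids_a, perm))
    return best_pairs
```

In practice the Hungarian algorithm (e.g. a library implementation of linear sum assignment) would replace the permutation loop, since brute force is factorial in the number of segments.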
The matched target track segments belong to track segments monitored by two cameras, so that multi-target tracking under multiple cameras is realized, and the tracking efficiency is improved.
In the vehicle track recognition method of this embodiment, a plurality of vehicle track segments monitored by each of two adjacent cameras are obtained, a plurality of intersection setting areas set in the intersections monitored by each camera are queried, and the entrance setting area and exit setting area through which each vehicle track segment passes are determined from the plurality of intersection setting areas. According to the communication relationship between the plurality of setting areas monitored by the two adjacent cameras and the entrance setting areas and exit setting areas through which the plurality of vehicle track segments monitored by the two adjacent cameras pass, the plurality of vehicle track segments monitored by the two adjacent cameras are screened to obtain reserved target track segments, and the target track segments are matched so as to splice the matched target track segments to obtain the target track of each vehicle. In this way, the track segments monitored under the two cameras are screened according to the communication relationship between the entrance setting areas and exit setting areas monitored by the two adjacent cameras respectively, which improves screening efficiency.
Based on the above embodiments, this embodiment provides another possible implementation manner of the vehicle track recognition method. As shown in Fig. 3, the method includes the following steps:
Step 301, obtaining a plurality of vehicle track segments monitored by each of two cameras having an adjacent relationship.
Step 302, querying a plurality of intersection setting areas set in the intersections monitored by each camera.
As shown in FIG. 2, the areas marked by the rectangular frames numbered L1-1, L1-2, L1-3, L1-4, L1-5 and L1-6 are the intersection setting areas monitored by the camera A, and the areas marked by the rectangular frames numbered L2-1, L2-2, L2-3, L2-4, L2-5 and L2-6 are the intersection setting areas monitored by the camera B.
The principle of step 301 and step 302 may be the same as that of the explanation of step 101 and step 102, and will not be repeated in this embodiment.
Step 303, for each camera, matching each position information of the vehicle included in each vehicle track segment monitored by the camera with the position information of a plurality of intersection setting areas in the intersections monitored by the camera, so as to determine an entrance setting area and an exit setting area where each vehicle track segment passes.
In this embodiment, for each camera, the entrance setting area and the exit setting area through which each vehicle track segment monitored by the camera passes are determined based on the position information. This realizes determination of the entrance intersection and exit intersection of the vehicle within a single camera's monitoring area, facilitates subsequent screening according to these entrance and exit intersections, and improves screening efficiency and accuracy.
Specifically, the entrance setting area and the exit setting area through which a vehicle track segment passes are identified according to the position information of the vehicle's detection frame contained in the track segment. For each camera, each vehicle track segment monitored by the camera contains the position information of the vehicle in each frame image, and this position information is matched against the position information of the plurality of intersection setting areas in the intersection monitored by the same camera. When determining the entrance setting area, the initial position information of the vehicle's detection frame contained in the track segment is matched against the position information of the plurality of intersection setting areas, and the first intersection setting area matched with the initial position information of the detection frame is taken as the entrance setting area through which the track segment passes. When determining the exit setting area, the intersection setting area matched with the end position information of the detection frame is taken as the exit setting area through which the track segment passes; that is, when the vehicle disappears from the video acquired by the camera, the intersection setting area matched with the end position information corresponding to the moment at which the vehicle's detection frame disappears is taken as the exit setting area through which the track segment passes.
In this embodiment, the entrance setting area and the exit setting area through which a vehicle track segment passes are identified based on the relationship between the changing position information of the vehicle's detection frame contained in the track segment and the position information of each setting area, so that identification efficiency and accuracy are high.
It should be understood that the end position information of the vehicle's detection frame is the position information of the detection frame in the last frame image of the video image sequence corresponding to the vehicle's track segment.
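The determination of entrance and exit setting areas from a track segment's first and last detection-frame positions can be sketched as follows. Rectangular setting areas and box centers as the matched position information are illustrative choices of this sketch, not mandated by the embodiment.

```python
def center(box):
    """Center point of a detection frame (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def in_region(point, region):
    """Whether a point lies inside a rectangular setting area."""
    rx1, ry1, rx2, ry2 = region
    return rx1 <= point[0] <= rx2 and ry1 <= point[1] <= ry2

def entrance_and_exit(segment, regions):
    """segment: time-ordered list of detection boxes for one vehicle;
    regions: {area_name: rectangle}.
    Entrance: the first setting area matched while scanning from the
    initial frame. Exit: the setting area matched by the box in the last
    frame (where the vehicle disappears from the video), or None."""
    entrance = next(
        (name for box in segment
         for name, r in regions.items() if in_region(center(box), r)),
        None)
    exit_ = next(
        (name for name, r in regions.items()
         if in_region(center(segment[-1]), r)),
        None)
    return entrance, exit_
```

With the Fig. 2 naming, a segment that starts inside L1-1 and ends inside L1-4 yields the pair ("L1-1", "L1-4").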
Further, in one implementation manner of this embodiment, in order to improve the matching accuracy of the entrance setting area, the intersection setting area that completely contains the initial position information of the vehicle's detection frame contained in the track segment is taken as the entrance setting area, so that the false recognition rate is reduced and recognition efficiency is improved.
The position information of a detection frame comprises the position of the detection frame and the size of the detection frame, and is used for indicating the area information of the vehicle in the image, the area information including the size of the area and the position of the area.
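The stricter entrance test of this implementation manner, in which the candidate setting area must completely contain the initial detection frame, can be sketched as follows. Encoding the position-plus-size information as (x1, y1, x2, y2) rectangles is an assumption of this sketch.

```python
def fully_contains(region, box):
    """Whether a rectangular setting area completely contains a
    detection frame, both given as (x1, y1, x2, y2)."""
    rx1, ry1, rx2, ry2 = region
    bx1, by1, bx2, by2 = box
    return rx1 <= bx1 and ry1 <= by1 and bx2 <= rx2 and by2 <= ry2

def entrance_area(initial_box, regions):
    """regions: {area_name: rectangle}. Returns the first setting area
    that fully contains the initial detection frame, or None when the
    frame is only partially inside every area."""
    return next((name for name, r in regions.items()
                 if fully_contains(r, initial_box)), None)
```

A frame straddling an area boundary thus matches no entrance area, which is exactly what lowers the false recognition rate compared with a mere overlap test.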
For example, in the a camera, for a monitored vehicle track segment of the vehicle 1, it is recognized that the entrance set area of the vehicle 1 is L1-1, and the exit set area is L1-4.
Step 304, screening the plurality of vehicle track segments monitored by the two cameras according to the communication relationship among the plurality of intersection setting areas monitored by the two cameras and the entrance setting areas and exit setting areas through which the plurality of vehicle track segments monitored by the two cameras pass, so as to obtain reserved target track segments.
In one implementation manner of this embodiment, for any two vehicle track segments belonging to two cameras having an adjacent relationship, if there is a communication relationship between an exit setting area through which one vehicle track segment passes and an entrance setting area through which the other vehicle track segment passes, or if there is a communication relationship between an entrance setting area through which one vehicle track segment passes and an exit setting area through which the other vehicle track segment passes, the two vehicle track segments are regarded as reserved target track segments.
For example, as shown in fig. 2, the camera a and the camera B are two cameras having an adjacent relationship, that is, the vehicle may travel from the monitoring area of the camera a to the monitoring area of the camera B, or the vehicle may travel from the monitoring area of the camera B to the monitoring area of the camera a.
The entrance setting area and the exit setting area of each vehicle track segment monitored by the camera A are respectively:
The entrance setting area of vehicle track segment 1 is L1-1, and the exit setting area is L1-4; the entrance setting area of vehicle track segment 2 is L1-5, and the exit setting area is L1-4; the entrance setting area of vehicle track segment 3 is L1-6, and the exit setting area is L1-4; the entrance setting area of vehicle track segment 4 is L1-1, and the exit setting area is L1-5.
The entrance setting area and exit setting area of each vehicle track segment monitored by camera B are respectively:
The entrance setting area of vehicle track segment 5 is L2-1, and the exit setting area is L2-4; the entrance setting area of vehicle track segment 6 is L2-1, and the exit setting area is L2-5; the entrance setting area of vehicle track segment 7 is L2-1, and the exit setting area is L2-6; the entrance setting area of vehicle track segment 8 is L2-6, and the exit setting area is L2-5.
In Fig. 2, there is a communication relationship between setting area L1-4 monitored by camera A and setting area L2-1 monitored by camera B; that is, when the vehicle exits from setting area L1-4 monitored by camera A, it enters setting area L2-1 monitored by camera B. Setting area L1-5 monitored by camera A has no communication relationship with any setting area monitored by camera B; that is, after the vehicle exits from setting area L1-5 monitored by camera A, it cannot enter any setting area monitored by camera B.
In one scenario of this embodiment, a communication relationship exists between the exit setting area through which one vehicle track segment passes and the entrance setting area through which another vehicle track segment passes. For example, in Fig. 2, a vehicle travels from the monitoring area of camera A to the monitoring area of camera B. Because there is a communication relationship between setting area L1-4 monitored by camera A and setting area L2-1 monitored by camera B, there is a communication relationship between the exit setting area L1-4 of vehicle track segment 1 and the entrance setting area L2-1 of vehicle track segment 5; between the exit setting area L1-4 of vehicle track segment 2 and the entrance setting area L2-1 of vehicle track segment 6; and between the exit setting area L1-4 of vehicle track segment 3 and the entrance setting area L2-1 of vehicle track segment 7. Therefore, vehicle track segments 1, 2, and 3, whose exit setting area monitored in camera A is L1-4, are reserved target track segments, and vehicle track segment 4 is a deleted vehicle track segment. Meanwhile, among the vehicle track segments monitored by camera B, vehicle track segments 5, 6, and 7 are reserved target track segments, and vehicle track segment 8 is a deleted vehicle track segment.
Note that, in Fig. 2, the determination method for track segments passing through the other setting areas is the same, and these are not listed one by one in this embodiment.
It should be understood that in another scenario, a communication relationship exists between the entrance setting area through which one vehicle track segment passes and the exit setting area through which another vehicle track segment passes; for example, in Fig. 2, the vehicle drives from the monitoring area of camera B to the monitoring area of camera A. The principle is the same as in the previous scenario, in which the vehicle drives from the monitoring area of camera A to that of camera B, and is not described in detail in this embodiment.
In this embodiment, for the vehicle track segments belonging to the two cameras, the track segments are screened according to the communication relationship between the exit setting area and the entrance setting area through which any two vehicle track segments pass, so that track segments of vehicles that cannot travel between the monitoring areas of the two cameras can be accurately removed, with high operation speed and high accuracy.
Step 305, matching the target track segments to splice the matched target track segments to obtain the target track of each vehicle.
In one implementation manner of this embodiment, vehicle features of the vehicles to which each target track segment monitored by each of the two cameras belongs are obtained, where the vehicle features may be vehicle re-identification (ReID) features. The target track segments are matched according to the vehicle features, and the matched target track segments belonging to the same vehicle are spliced to obtain the target track of each vehicle. The ReID features of the vehicles are used to identify the target track segments belonging to the same vehicle, so that these segments are spliced to obtain the target track of the corresponding vehicle across the two cameras with an adjacent relationship, realizing track identification and tracking of vehicles across multiple cameras.
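The final splicing of matched target track segments can be sketched as follows, assuming each segment is a time-stamped sequence of positions (an illustrative format): segments matched as the same vehicle are merged in time order to form the cross-camera target track.

```python
def splice_tracks(matches, segments_a, segments_b):
    """matches: list of (id_a, id_b) pairs of target track segments judged
    to belong to the same vehicle (e.g. via ReID feature matching).
    segments_*: {segment_id: list of (timestamp, position)} per camera.
    Returns one time-ordered target track per matched vehicle."""
    return {
        (id_a, id_b): sorted(segments_a[id_a] + segments_b[id_b],
                             key=lambda point: point[0])
        for id_a, id_b in matches
    }
```

Since a vehicle leaves camera A's monitoring area before entering camera B's, sorting by timestamp simply concatenates the two segments into one continuous target track.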
According to the method of this embodiment, a plurality of vehicle track segments monitored by each of two cameras having an adjacent relationship are obtained, a plurality of intersection setting areas set in the intersections monitored by each camera are queried, and the entrance setting area and exit setting area through which each vehicle track segment passes are determined from the plurality of intersection setting areas. According to the communication relationship between the plurality of setting areas monitored by the two adjacent cameras and the entrance setting areas and exit setting areas through which the plurality of vehicle track segments monitored by the two adjacent cameras pass, the plurality of vehicle track segments monitored by the two adjacent cameras are screened to obtain reserved target track segments, and the target track segments are matched so as to splice the matched target track segments and obtain the target track of each vehicle. By identifying the entrance setting area and exit setting area through which the track segments of a vehicle pass under a single camera, and the communication relationship between the setting areas monitored by the two cameras, the target track segments of the vehicles monitored by the two cameras are screened and spliced.
Based on the above embodiments, the present embodiment provides a vehicle track recognition device.
Fig. 4 is a vehicle track recognition device according to an embodiment of the present application, as shown in fig. 4, the device includes:
The acquiring module 41 is configured to acquire a plurality of vehicle track segments monitored by each of two cameras in an adjacent relationship.
The query module 42 is configured to query a plurality of intersection setting areas set in intersections monitored by each of the cameras.
A determining module 43, configured to determine, from the plurality of intersection setting areas, a target setting area through which each of the vehicle track segments passes; wherein the target setting area includes an entrance setting area and an exit setting area.
And the screening module 44 is configured to screen the plurality of vehicle track segments monitored by the two cameras according to the communication relationship between the plurality of intersection setting areas monitored by the two cameras, and the entrance setting areas and exit setting areas through which the plurality of vehicle track segments monitored by the two cameras pass, so as to obtain reserved target track segments.
And the processing module 45 is configured to match the target track segments, so as to splice the matched target track segments to obtain target tracks of the vehicles.
Further, in one implementation of the embodiment of the present application, the screening module 44 is specifically configured to:
For any two vehicle track segments belonging to the two adjacent cameras, take the two vehicle track segments as reserved target track segments when a communication relationship exists between the exit setting area through which one vehicle track segment passes and the entrance setting area through which the other vehicle track segment passes, or when a communication relationship exists between the entrance setting area through which one vehicle track segment passes and the exit setting area through which the other vehicle track segment passes.
In one implementation manner of the embodiment of the present application, the determining module 43 is specifically configured to:
And matching the position information of the vehicles contained in each vehicle track segment monitored by the cameras with the position information of a plurality of intersection setting areas in the intersection monitored by the cameras so as to determine an entrance setting area and an exit setting area through which each vehicle track segment passes.
In one implementation of the embodiment of the present application, the determining module 43 is specifically further configured to:
for each vehicle track segment, matching initial position information of a detection frame of a vehicle contained in the vehicle track segment with position information of a plurality of intersection setting areas in the camera monitoring intersection, and taking the first intersection setting area matched with the initial position information of the detection frame as an entrance setting area through which the vehicle track segment passes; and
And taking the intersection setting area matched with the end position information of the detection frame as an exit setting area through which the vehicle track segment passes.
In one implementation of the embodiment of the present application, the processing module 45 is specifically configured to:
Acquiring vehicle characteristics of vehicles to which each target track segment monitored by each of the two cameras belongs;
and matching the target track segments according to the characteristics of each vehicle, and splicing the matched target track segments belonging to the same vehicle to obtain the target track of each vehicle.
It should be noted that, the explanation of the foregoing method embodiment is also applicable to the apparatus of this embodiment, and the principle is the same, and will not be repeated in this embodiment.
In the vehicle track recognition device of this embodiment, a plurality of vehicle track segments monitored by each of two cameras having an adjacent relationship are obtained, a plurality of intersection setting areas set in the intersections monitored by each camera are queried, and the entrance setting area and exit setting area through which each vehicle track segment passes are determined from the plurality of intersection setting areas. According to the communication relationship between the plurality of setting areas monitored by the two adjacent cameras and the entrance setting areas and exit setting areas through which the plurality of vehicle track segments monitored by the two adjacent cameras pass, the plurality of vehicle track segments monitored by the two adjacent cameras are screened to obtain reserved target track segments, and the target track segments are matched so as to splice the matched target track segments and obtain the target track of each vehicle. The track segments of the vehicles monitored under the two cameras are screened by recognizing the entrance setting areas and exit setting areas through which they pass under each single camera, so that the screening efficiency is improved, and the target track recognition efficiency of the vehicles is improved by matching and splicing the target track segments.
In order to achieve the above embodiments, the present embodiment provides an electronic device, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the preceding method embodiments.
In order to implement the above-described embodiments, the present embodiment provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the method described in the foregoing method embodiments.
In order to implement the above-described embodiments, the present embodiments provide a computer program product comprising a computer program which, when executed by a processor, implements a method as described in the method embodiments described above.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 5 is a schematic block diagram of an example electronic device 800 of an embodiment of the disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 5, the device 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a ROM (Read-Only Memory) 802 or a computer program loaded from a storage unit 808 into a RAM (Random Access Memory) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An I/O (Input/Output) interface 805 is also connected to the bus 804.
Various components in device 800 are connected to I/O interface 805, including: an input unit 806 such as a keyboard, mouse, etc.; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, etc.; and a communication unit 809, such as a network card, modem, wireless communication transceiver, or the like. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), various dedicated AI (Artificial Intelligence) computing chips, various computing units running machine learning model algorithms, DSPs (Digital Signal Processors), and any appropriate processor, controller, microcontroller, etc. The computing unit 801 performs the respective methods and processes described above, such as the vehicle track recognition method. For example, in some embodiments, the vehicle track recognition method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the vehicle track recognition method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the vehicle track recognition method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above can be implemented in digital electronic circuitry, integrated circuit systems, an FPGA (Field Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), an ASSP (Application-Specific Standard Product), an SOC (System on Chip), a CPLD (Complex Programmable Logic Device), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a RAM, a ROM, an EPROM (Erasable Programmable Read-Only Memory) or flash memory, an optical fiber, a CD-ROM (Compact Disc Read-Only Memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (Cathode-Ray Tube) or an LCD (Liquid Crystal Display) monitor) for displaying information to the user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a LAN (Local Area Network), a WAN (Wide Area Network), the Internet, and blockchain networks.
The computer system may include a client and a server. The client and the server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of difficult management and weak service scalability of traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be noted that artificial intelligence is the discipline of studying how to make a computer simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and involves technologies at both the hardware and software levels. Artificial intelligence hardware technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning technology, big data processing technology, knowledge graph technology, and the like.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed herein can be achieved, and no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.