
CN111611414B - Vehicle searching method, device and storage medium - Google Patents

Vehicle searching method, device and storage medium

Info

Publication number
CN111611414B
CN111611414B (application number CN201910134010.7A)
Authority
CN
China
Prior art keywords
vehicle
matching
image
features
preset number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910134010.7A
Other languages
Chinese (zh)
Other versions
CN111611414A (en)
Inventor
隋煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN201910134010.7A priority Critical patent/CN111611414B/en
Publication of CN111611414A publication Critical patent/CN111611414A/en
Application granted granted Critical
Publication of CN111611414B publication Critical patent/CN111611414B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a vehicle retrieval method, a vehicle retrieval device, and a storage medium, belonging to the technical field of image retrieval. The method comprises the following steps: acquiring a captured image of a vehicle to be retrieved; invoking a target network model, inputting the captured image into the target network model, and outputting vehicle image features, where the vehicle image features describe global information of the vehicle and include a specific dimension segment describing a specific local region of the vehicle, and the target network model determines the vehicle image features of any vehicle based on a captured image of that vehicle; and retrieving data associated with the vehicle from a database based on the vehicle image features, where the database stores a plurality of matching image features, each of which includes a matching local region feature corresponding to the specific local region. The application avoids the need to perform feature extraction multiple times and improves retrieval efficiency.

Description

Vehicle searching method, device and storage medium
Technical Field
The embodiment of the application relates to the technical field of image retrieval, in particular to a vehicle retrieval method, a vehicle retrieval device and a storage medium.
Background
At present, the image retrieval technology is widely applied in the field of intelligent transportation. For example, in some application scenarios, there may be a need to retrieve a vehicle, where the retrieval may be implemented by means of image retrieval based on a captured image of the vehicle.
In the related art, a search is generally performed not only on the entire vehicle based on its captured image, but a local region image may also be cropped from the captured image for a secondary search based on that local region image. In an implementation, a complete-vehicle feature of the vehicle is extracted, and a plurality of matching images matching the complete-vehicle feature are retrieved from a database. Local region images are then cropped from the captured image and from each of the acquired matching images, the region features of these local region images are extracted, the matching image whose local region best matches that of the captured image is determined according to the extracted region features, and the relevant information of the vehicle is retrieved from the database based on the determined matching image.
However, in the above implementation, feature extraction needs to be performed multiple times; the operation is therefore complicated, resulting in low retrieval efficiency.
Disclosure of Invention
The embodiment of the application provides a vehicle retrieval method, a vehicle retrieval device and a storage medium, which can solve the problem of low retrieval efficiency. The technical scheme is as follows:
in a first aspect, a vehicle retrieval method is provided, the method comprising:
Acquiring a shooting image of a vehicle to be retrieved;
invoking a target network model, inputting the photographed image into the target network model, and outputting vehicle image features, wherein the vehicle image features are used for describing global information of a vehicle and comprise specific dimension segments for describing specific local areas of the vehicle, and the target network model is used for determining the vehicle image features of any vehicle based on the photographed image of the any vehicle;
based on the vehicle image features, data associated with the vehicle is retrieved from a database storing a plurality of matching image features, each matching image feature including a matching local region feature corresponding to the particular local region.
Optionally, the retrieving data associated with the vehicle from a database based on the vehicle image features includes:
determining cosine similarity between the vehicle image features and each matching image feature in the database, and obtaining a first similarity score corresponding to each matching image feature;
obtaining, from the database, the matching image features corresponding to a preset number of first similarity scores, taken in descending order of the first similarity scores;
Determining the matching local area characteristics corresponding to the specific local area from each acquired matching image characteristic to obtain the preset number of matching local area characteristics;
determining cosine similarity between the features in the specific dimension section and each of the preset number of matched local region features to obtain the preset number of second similarity scores;
and retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
Optionally, each matching image feature has the same data structure as the vehicle image feature, and the determining, from each obtained matching image feature, a matching local region feature corresponding to the specific local region includes:
determining a location of a feature within the particular dimension segment in the vehicle image feature;
and obtaining the matching characteristic corresponding to the position from each obtained matching image characteristic, and obtaining the matching local area characteristic corresponding to the specific local area in each matching image characteristic.
Optionally, when the database stores correspondence between a plurality of matching image features and vehicle information, the retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores includes:
respectively carrying out weighted summation on each first similarity score in the preset number of first similarity scores and the corresponding second similarity score in the preset number of second similarity scores, to obtain a preset number of third similarity scores;
determining a maximum third similarity score from the preset number of third similarity scores;
determining the matching image feature corresponding to the maximum third similarity score from the preset number of matching image features;
and acquiring vehicle data corresponding to the determined matching image features from the corresponding relations between the plurality of matching image features of the database and the vehicle information, and obtaining data associated with the vehicle.
Optionally, the target network model is obtained by training a network model to be trained based on a plurality of image samples, a vehicle category label in each image sample and position information of a specific local area.
In a second aspect, there is provided a vehicle retrieval device, the device comprising:
the acquisition module is used for acquiring a shooting image of the vehicle to be retrieved;
the calling module is used for calling a target network model, inputting the shot image into the target network model, outputting vehicle image characteristics, wherein the vehicle image characteristics are used for describing global information of a vehicle, specific dimension sections included in the vehicle image characteristics are used for describing specific local areas of the vehicle, and the target network model is used for determining the vehicle image characteristics of any vehicle based on the shot image of the any vehicle;
And the retrieval module is used for retrieving data associated with the vehicle from a database based on the vehicle image features, wherein the database stores a plurality of matching image features, and each matching image feature comprises matching local area features corresponding to the specific local area.
Optionally, the retrieving module is configured to:
determining cosine similarity between the vehicle image features and each matching image feature in the database, and obtaining a first similarity score corresponding to each matching image feature;
obtaining, from the database, the matching image features corresponding to a preset number of first similarity scores, taken in descending order of the first similarity scores;
determining the matching local area characteristics corresponding to the specific local area from each acquired matching image characteristic to obtain the preset number of matching local area characteristics;
determining cosine similarity between the features in the specific dimension section and each of the preset number of matched local region features to obtain the preset number of second similarity scores;
and retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
Optionally, the retrieving module is configured to:
each matching image feature has the same data structure as the vehicle image feature, and the position of the feature within the specific dimension segment in the vehicle image feature is determined;
and obtaining the matching characteristic corresponding to the position from each obtained matching image characteristic, and obtaining the matching local area characteristic corresponding to the specific local area in each matching image characteristic.
Optionally, the retrieving module is configured to:
respectively carrying out weighted summation on each first similarity score in the preset number of first similarity scores and the corresponding second similarity score in the preset number of second similarity scores, to obtain a preset number of third similarity scores;
determining a maximum third similarity score from the preset number of third similarity scores;
determining the matching image feature corresponding to the maximum third similarity score from the preset number of matching image features;
and acquiring vehicle data corresponding to the determined matching image features from the corresponding relations between the plurality of matching image features of the database and the vehicle information, and obtaining data associated with the vehicle.
Optionally, the target network model is obtained by training a network model to be trained based on a plurality of image samples, a vehicle category label of each image sample and position information of a specific local area.
In a third aspect, there is provided a computer-readable storage medium having stored thereon instructions that, when executed by a processor, implement the vehicle retrieval method of the first aspect described above.
In a fourth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the vehicle retrieval method of the first aspect described above.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
and acquiring a shooting image of the vehicle to be searched, calling a target network model, inputting the shooting image into the target network model, and outputting the vehicle image characteristics of the vehicle. The vehicle image features are used for describing global information of the vehicle in whole, and the specific dimension segments included in the vehicle image features are used for describing specific local areas of the vehicle, namely, the features capable of describing the global area and the specific local areas of the vehicle can be extracted at one time through the target network model. Then, based on the extracted vehicle image features, the data associated with the vehicle can be retrieved from the database, so that the need of multiple feature extraction can be avoided, and the retrieval efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart illustrating a method of vehicle retrieval according to an exemplary embodiment;
FIG. 2 is a schematic illustration of a vehicle, according to an exemplary embodiment;
FIG. 3 is a schematic diagram of one feature shown in accordance with an exemplary embodiment;
fig. 4 is a schematic structural view of a vehicle retrieval device according to an exemplary embodiment;
fig. 5 is a schematic structural view of a terminal according to an exemplary embodiment.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Before describing the vehicle searching method provided by the embodiment of the application in detail, the application scene and the implementation environment related to the embodiment of the application are briefly described.
First, an application scenario according to an embodiment of the present application is briefly described.
In the field of intelligent transportation, there is a general need to retrieve vehicles. For example, when a traffic police officer wants to trace the escape route of a hit-and-run vehicle, it must be determined at which traffic checkpoints the vehicle has appeared. During retrieval, an accurate search can further be performed according to certain specific local regions of the vehicle that carry distinctive features, for example a pendant region. At present, a vehicle is generally retrieved by image search: a captured image of the vehicle is obtained, complete-vehicle features are extracted from the captured image, all complete-vehicle matching features matching these features are queried from a database, and all matching images of the vehicle are then retrieved according to the queried matching features. Specific local region features are subsequently extracted from the captured image and from all the matching images, and local region feature matching is performed so that the matching vehicle closest to the query vehicle can be retrieved. However, in the current implementation, the retrieval efficiency is low because feature extraction must be performed multiple times. The embodiment of the application therefore provides a vehicle retrieval method that avoids multiple rounds of feature extraction and improves retrieval efficiency; specific implementations are shown in the following embodiments.
Next, an implementation environment related to the embodiment of the present application will be briefly described.
The vehicle retrieval method provided by the embodiment of the application may be executed by a smart device that is configured with, or connected to, a camera, so that vehicles can be photographed through the camera. In practice, the smart device may be installed in scenes such as a traffic checkpoint or an electronic toll collection station. In one possible implementation, the smart device may also be connected to a server configured with a database that stores relevant data of vehicles, so that the smart device can retrieve a vehicle in the database based on a captured image of the vehicle.
In some embodiments, the smart device may be a smart camera device, or a terminal such as a tablet computer or a portable computer, which is not limited in the embodiment of the present application.
After describing application scenarios and implementation environments related to the embodiments of the present application, a vehicle searching method provided by the embodiments of the present application will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart showing a vehicle search method according to an exemplary embodiment, and the vehicle search method is described by taking an example in which the vehicle search method is executed by an intelligent device, and the vehicle search method may include the following steps:
Step 101: a captured image of a vehicle to be retrieved is acquired.
In daily life, a camera is usually installed in scenes such as a traffic checkpoint, an electronic toll collection station, or a speed-limited area, and its shooting range is adjusted so that passing vehicles can be photographed through the camera, thereby obtaining captured images of the vehicles.
In some embodiments, the smart device may store the captured images captured by the camera. Further, the smart device may acquire the captured image of the vehicle to be retrieved after receiving a retrieval instruction. The retrieval instruction may be triggered by a user through a preset operation. That is, the smart device may provide a search option and an image selection option; when a user wants to search for a certain vehicle, the user may select the captured image of that vehicle based on the image selection option and click the search option to trigger the retrieval instruction, at which point the smart device performs the operation of acquiring the captured image.
The preset operation may be a clicking operation, a sliding operation, a panning operation, or the like, which is not limited in the embodiment of the present application.
In one possible implementation manner, after the intelligent device acquires the captured image, denoising and other processing may be performed on the captured image, which is not limited by the embodiment of the present application.
Step 102: invoking a target network model, inputting the captured image into the target network model, and outputting vehicle image features, wherein the vehicle image features are used for describing global information of the vehicle, the specific dimension segment included therein is used for describing a specific local region of the vehicle, and the target network model is used for determining the vehicle image features of any vehicle based on a captured image of that vehicle.
The target network model is obtained through deep learning training. In one possible implementation manner, the target network model may include an input layer, a convolution layer, a pooling layer and an output layer, and after the intelligent device inputs the captured image to the target network model, the target network model processes the captured image sequentially through the input layer, the convolution layer, the pooling layer and the output layer, and outputs the vehicle image feature.
It should be noted that, the foregoing is merely an example in which the target network model includes an input layer, a convolution layer, a pooling layer, and an output layer, and in another embodiment, the target network model may further include other network layers, for example, may further include a sampling layer, etc., which is not limited in this embodiment of the present application.
The number of the specific local areas of the vehicle may be one or a plurality of. In addition, the vehicle image features also include global associated features, which may include other features in the vehicle other than a particular local region. Further, when the number of the specific local areas is plural, the global association feature further includes an information feature for describing an association relationship between the plural specific local areas.
In some embodiments, referring to fig. 2, the specific local area may include a roof area 1, an annual inspection mark area 2, a left ornament area 3, a right ornament area 4, a pendant area 5, a body area 6, a left lamp area 7, and a right lamp area 8.
In addition, when the number of the specific local areas is plural, the features of the plural specific local areas and the global associated features may be arranged in the vehicle image feature according to a preset rule, and the data length of the features within each specific dimension section and the global associated features may be a preset data length. The preset rule can be set according to actual requirements, the preset data length can be set by a user according to actual requirements, and can also be set by default by the intelligent device, which is not limited by the embodiment of the application.
For example, referring to fig. 3, fig. 3 is a schematic diagram illustrating the structure of a vehicle image feature according to an exemplary embodiment, where the feature of each specific local area is described with a data length of 128 and the global association feature is described with a data length of 512, so that the data length of the vehicle image feature is 8 × 128 + 512 = 1536.
It should be noted that, in implementation, different data lengths may also be used to describe the characteristics of each specific local region of the plurality of specific local regions. And, the arrangement order of the features of the plurality of specific local areas and the global related features in the vehicle image feature may be set according to actual requirements, which is not limited by the embodiment of the present application.
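As a concrete illustration of the layout described above, the following sketch shows how a 1536-dimensional vehicle image feature can be partitioned into eight 128-dimensional specific-region segments plus a 512-dimensional global associated feature. The region names, ordering, and segment lengths are taken from figs. 2 and 3 as assumptions; the embodiment allows other lengths and orderings.

```python
# Hypothetical layout of the 1536-dim vehicle image feature: eight specific
# local regions of 128 dims each (fig. 2), followed by a 512-dim global
# associated feature. Names and order are illustrative, not fixed by the text.
REGION_NAMES = [
    "roof", "annual_inspection_mark", "left_ornament", "right_ornament",
    "pendant", "body", "left_lamp", "right_lamp",
]
REGION_LEN = 128   # preset data length of each specific dimension segment
GLOBAL_LEN = 512   # preset data length of the global associated feature

# Map each region name to its [start, end) position in the feature vector.
SEGMENTS = {name: (i * REGION_LEN, (i + 1) * REGION_LEN)
            for i, name in enumerate(REGION_NAMES)}
SEGMENTS["global"] = (len(REGION_NAMES) * REGION_LEN,
                      len(REGION_NAMES) * REGION_LEN + GLOBAL_LEN)

def segment(feature, name):
    """Return the slice of `feature` holding the named segment."""
    start, end = SEGMENTS[name]
    return feature[start:end]
```

Because every matching image feature in the database shares this structure, the same position table can later be reused to pull the matching local region feature out of each candidate.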
It should be noted that the global associated features and the features of the specific local areas of the vehicle are output at one time through the target network model, so that the need of multiple feature extraction is avoided, and the vehicle retrieval efficiency is improved.
Further, the target network model is obtained by training a network model to be trained based on a plurality of image samples, a vehicle type label of each image sample and position information of a specific local area.
In implementation, a plurality of image samples can be obtained, vehicles are divided into areas on each of the plurality of image samples according to an area division rule, and the types of the vehicles in each image sample are marked to obtain the vehicle type label of each image sample and the position information of a specific local area. And then, inputting the plurality of image samples, the vehicle type label of each image sample and the position information of the specific local area into a network model to be trained for deep training to obtain the target network model, so that the target network model can determine the vehicle image characteristics of any vehicle based on the photographed image of the any vehicle.
In one possible implementation manner, the network model to be trained may be a deep convolutional neural network; further, the network model to be trained may be a GoogLeNet Inception network, a residual network (ResNet), or the like, which is not limited in the embodiment of the present application.
Further, during training, the training sample may include other information besides the location information of the specific local area, which is not limited in the embodiment of the present application.
Step 103: based on the vehicle image features, data associated with the vehicle is retrieved from a database storing a plurality of matching image features, each matching image feature including a matching local region feature corresponding to the particular local region.
Specifically, each of the matching image features has the same data structure as the vehicle image feature. In some embodiments, retrieving data associated with the vehicle from a database based on the vehicle image features may include the following steps:
1031: and determining cosine similarity between the vehicle image features and each matching image feature in the database, and obtaining a first similarity score corresponding to each matching image feature.
In this embodiment, the vehicle may be subjected to vehicle matching based on the vehicle image features, that is, the matching image features of the matching vehicle that is similar to the vehicle as a whole in the database are determined. In an implementation, the intelligent device determines cosine similarity between the vehicle image feature and each matching image feature in the database to determine a degree of matching between each matching image feature in the database and the vehicle image feature, and obtains a first similarity score corresponding to each matching image feature.
For ease of description, the first similarity score corresponding to each matching image feature determined by the smart device is denoted herein as s_i, where i takes values in [1, N] and N is the number of matching image features in the database.
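The first similarity scores s_i are plain cosine similarities between the query feature and every database feature. A minimal pure-Python sketch (function names are illustrative, not from the patent):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def first_similarity_scores(vehicle_feature, db_features):
    """s_i for every matching image feature in the database, i in [1, N]."""
    return [cosine_similarity(vehicle_feature, f) for f in db_features]
```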
1032: and acquiring the matching image features corresponding to the first similarity scores of the preset number from the database according to the sequence of the first similarity scores from large to small.
The larger the first similarity score, the more similar the corresponding matching image feature is to the vehicle image feature, and thus the greater the overall similarity between the corresponding matching vehicle and the vehicle to be retrieved. Therefore, based on the obtained first similarity scores, the smart device can acquire from the database, according to a certain proportion or number, a preset number of matching image features with high similarity to the vehicle image features. In implementation, the first similarity scores corresponding to the matching image features may be sorted in descending order; the smart device determines a preset number of first similarity scores from the sorted scores and then obtains the corresponding matching image features from the database.
The preset number may be set by a user in a user-defined manner according to actual needs, or may be set by default by the intelligent device, which is not limited in the embodiment of the present application.
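Selecting the preset number of candidates described in step 1032 reduces to a top-K selection over the first similarity scores; a sketch (the function name is illustrative):

```python
def top_k_matches(scores, k):
    """Indices of the k largest first similarity scores, in descending
    order of score. `scores` holds s_1..s_N; k is the preset number."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return order[:k]
```

The returned indices identify which matching image features to fetch from the database for the subsequent local-region comparison.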
1033: and determining the matching local area characteristics corresponding to the specific local area from each acquired matching image characteristic to obtain the preset number of matching local area characteristics.
After the intelligent device obtains a preset number of matching image features with larger similarity from the database, the similarity between the vehicle to be retrieved and a specific local area of the matching vehicle can be further determined based on the preset number of matching image features. For this purpose, the intelligent device determines, from each of the acquired matching image features, a matching local region feature corresponding to the specific local region.
In one possible implementation manner, each matching image feature is identical to the data structure of the vehicle image feature, and accordingly, determining, from each obtained matching image feature, a specific implementation of the matching local area feature corresponding to the specific local area may include: and determining the position of the feature in the specific dimension section in the vehicle image feature, and acquiring the matching feature corresponding to the position from each acquired matching image feature to obtain the matching local region feature corresponding to the specific local region in each matching image feature.
Each matching image feature having the same data structure as the vehicle image feature means that the position of the matching local region feature of the specific local region in each matching image feature is identical to the position of the feature within the specific dimension segment of that specific local region in the vehicle image feature, and the data lengths are the same; for example, the data structure of each matching image feature is as shown in fig. 3.
When each matching image feature is identical to the data structure of the vehicle image feature, please refer to fig. 3, assuming that the position of the feature in the specific dimension section in the vehicle image feature is [0,127], the intelligent device obtains the matching local region feature corresponding to the [0,127] position from each matching image feature obtained in the step 1032, and obtains the matching local region feature corresponding to the specific local region in each matching image feature.
When the number of the specific local areas is plural, the user may specify the specific local area to be matched according to the actual requirement, and at this time, the intelligent device determines the matching local area feature corresponding to the specific local area from each obtained matching image feature, and then searches the vehicle according to the following implementation manner.
1034: and determining cosine similarity between the features in the specific dimension section and each matched local region feature in the preset number of matched local region features to obtain the preset number of second similarity scores.
That is, for each of the preset number of matched vehicles determined above, the intelligent device determines the degree of matching between the specific local area of that matched vehicle and the specific local area of the vehicle to be retrieved, thereby obtaining the preset number of second similarity scores.
For ease of description, the preset number of second similarity scores determined by the intelligent device are denoted herein as q_i, where i takes values in [1, K] and K represents the preset number.
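The computation of the second similarity scores q_i can be sketched as follows; the cosine_similarity helper and the toy 4-dim local features are illustrative stand-ins (real segments would be e.g. 128-dim), not the application's actual implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between vector a and each row of matrix b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

# Illustrative local features for K = 3 matched vehicles.
query_local = np.array([1.0, 0.0, 0.0, 0.0])
matched_local = np.array([
    [1.0, 0.0, 0.0, 0.0],   # identical local area  -> q_1 = 1.0
    [0.0, 1.0, 0.0, 0.0],   # orthogonal local area -> q_2 = 0.0
    [1.0, 1.0, 0.0, 0.0],   # partially similar     -> q_3 ~ 0.707
])

q = cosine_similarity(query_local, matched_local)  # q_i, i in [1, K]
```

Normalizing both operands first makes the matrix product equal to the cosine of the angle between each pair of vectors, so a single vectorized call yields all K scores at once.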
1035: retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
Since the first similarity score represents the overall matching degree between a matched vehicle and the vehicle to be retrieved, and the second similarity score represents the matching degree of the specific local area, data associated with the vehicle to be retrieved can be accurately retrieved from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
In one possible implementation, when the database stores correspondences between a plurality of matching image features and vehicle information, retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores may include: performing a weighted summation of each of the preset number of first similarity scores with the corresponding one of the preset number of second similarity scores to obtain the preset number of third similarity scores; determining the maximum third similarity score from the preset number of third similarity scores; determining, from the preset number of matching image features, the matching image feature corresponding to the maximum third similarity score; and obtaining, from the correspondences between the plurality of matching image features and vehicle information in the database, the vehicle data corresponding to the determined matching image feature, thereby obtaining the data associated with the vehicle.
Continuing with the above example, the intelligent device performs a weighted summation of s_i and q_i to obtain the preset number of third similarity scores. The larger the third similarity score, the higher the matching degree between the corresponding matching image feature and the vehicle image feature. The intelligent device therefore determines the maximum score among the obtained preset number of third similarity scores and then determines the matching image feature corresponding to that score, that is, the matching image feature of the matched vehicle that is similar to the vehicle both overall and in the specific local area. The intelligent device can then query the database for the vehicle data corresponding to that matching image feature, obtaining the data associated with the vehicle to be retrieved.
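The weighted summation and maximum selection can be sketched as follows; the score values and the equal 0.5/0.5 weights are hypothetical, since the embodiment does not fix the weighting coefficients:

```python
import numpy as np

# Hypothetical scores for K = 4 matched vehicles and illustrative weights.
s = np.array([0.90, 0.85, 0.80, 0.75])   # first (global) similarity scores s_i
q = np.array([0.60, 0.95, 0.70, 0.50])   # second (local) similarity scores q_i
w_global, w_local = 0.5, 0.5             # weighting is a design choice

third = w_global * s + w_local * q       # third similarity scores
best = int(np.argmax(third))             # index of the maximum third score

# The matching image feature at index `best` keys the lookup of vehicle data
# in the database's feature-to-vehicle-information correspondence.
assert best == 1  # 0.5*0.85 + 0.5*0.95 = 0.90 is the maximum
```

Note how the local score re-ranks the candidates: vehicle 0 has the best global score, but vehicle 1 wins after the specific local area is taken into account.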
It should be noted that, in this embodiment, only the matching image features of vehicles need to be stored in the database; there is no need to store a large number of vehicle pictures or feature maps obtained by deep learning, which saves storage space.
It should be noted that the foregoing takes the intelligent device executing steps 1031 to 1035 as an example. In another embodiment, these steps may also be executed by a server, which sends the determined result to the intelligent device, thereby reducing the computational burden on the intelligent device.
In the embodiment of the application, a captured image of the vehicle to be retrieved is acquired, a target network model is invoked, the captured image is input into the target network model, and the vehicle image features of the vehicle are output. The vehicle image features describe the global information of the vehicle as a whole, and the specific dimension segment included in the vehicle image features describes a specific local area of the vehicle; that is, features describing both the global area and the specific local area of the vehicle can be extracted in a single pass through the target network model. Data associated with the vehicle can then be retrieved from the database based on the extracted vehicle image features, avoiding the need for multiple feature extractions and improving retrieval efficiency.
Fig. 4 is a schematic structural diagram of a vehicle retrieval device according to an exemplary embodiment; the device may be implemented in software, hardware, or a combination of both. The vehicle retrieval device may include:
an acquisition module 410, configured to acquire a captured image of a vehicle to be retrieved;
the invoking module 420 is configured to invoke a target network model, input the captured image into the target network model, and output vehicle image features, where the vehicle image features are used to describe global information of the vehicle and include a specific dimension segment used to describe a specific local area of the vehicle, and the target network model is used to determine the vehicle image features of any vehicle based on a captured image of that vehicle;
The retrieving module 430 is configured to retrieve data associated with the vehicle from a database based on the vehicle image feature, where the database stores a plurality of matching image features, and each matching image feature includes a matching local region feature corresponding to the specific local region.
Optionally, the retrieving module 430 is configured to:
determining cosine similarity between the vehicle image features and each matching image feature in the database, and obtaining a first similarity score corresponding to each matching image feature;
acquiring, from the database, the matching image features corresponding to the preset number of largest first similarity scores, in descending order of the first similarity scores;
determining the matching local area characteristics corresponding to the specific local area from each acquired matching image characteristic to obtain the preset number of matching local area characteristics;
determining cosine similarity between the features in the specific dimension segment and each of the preset number of matching local region features to obtain the preset number of second similarity scores;
and retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
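The five steps above can be sketched end to end as follows; the retrieve function, its parameter names, the equal weights, and the toy database are illustrative assumptions, not the device's actual interface:

```python
import numpy as np

def retrieve(vehicle_feat, db_feats, db_info, segment, top_k=3, w=(0.5, 0.5)):
    """Sketch of the retrieving module: global top-K selection,
    then re-ranking by the specific dimension segment."""
    def cos(a, b):
        # cosine similarity between vector a and each row of matrix b
        return (b / np.linalg.norm(b, axis=1, keepdims=True)) @ (a / np.linalg.norm(a))

    first = cos(vehicle_feat, db_feats)            # first similarity scores
    top = np.argsort(first)[::-1][:top_k]          # preset number of candidates
    second = cos(vehicle_feat[segment], db_feats[top][:, segment])
    third = w[0] * first[top] + w[1] * second      # weighted summation
    return db_info[top[int(np.argmax(third))]]     # vehicle data of best match

# Toy database: "vehicle A" matches the query both globally and locally.
query = np.array([1.0, 0.0, 1.0, 0.0])
feats = np.array([[1.0, 0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 1.0]])
info = np.array(["vehicle A", "vehicle B", "vehicle C"])
print(retrieve(query, feats, info, segment=slice(0, 2), top_k=2))
```

Restricting the second cosine computation to the top-K candidates keeps the local re-ranking cost independent of the database size.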
Optionally, the retrieving module 430 is configured to:
determining, where each matching image feature has the same data structure as the vehicle image feature, the position of the features in the specific dimension segment within the vehicle image feature;
and obtaining the matching characteristic corresponding to the position from each obtained matching image characteristic, and obtaining the matching local area characteristic corresponding to the specific local area in each matching image characteristic.
Optionally, the retrieving module 430 is configured to:
performing a weighted summation of each first similarity score in the preset number of first similarity scores with the corresponding second similarity score in the preset number of second similarity scores to obtain the preset number of third similarity scores;

determining the maximum third similarity score from the preset number of third similarity scores;

determining, from the preset number of matching image features, the matching image feature corresponding to the maximum third similarity score;

and acquiring vehicle data corresponding to the determined matching image feature from the correspondences between the plurality of matching image features in the database and vehicle information, to obtain data associated with the vehicle.
Optionally, the target network model is obtained by training a network model to be trained based on a plurality of image samples, and the vehicle class label and the position information of a specific local area of each image sample.
In the embodiment of the application, a captured image of the vehicle to be retrieved is acquired, a target network model is invoked, the captured image is input into the target network model, and the vehicle image features of the vehicle are output. The vehicle image features describe the global information of the vehicle as a whole, and the specific dimension segment included in the vehicle image features describes a specific local area of the vehicle; that is, features describing both the global area and the specific local area of the vehicle can be extracted in a single pass through the target network model. Data associated with the vehicle can then be retrieved from the database based on the extracted vehicle image features, avoiding the need for multiple feature extractions and improving retrieval efficiency.
It should be noted that: in the vehicle searching device provided in the above embodiment, when the vehicle searching method is implemented, only the division of the above functional modules is used for illustration, in practical application, the above functional allocation may be implemented by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to implement all or part of the functions described above. In addition, the vehicle searching device and the vehicle searching method provided in the foregoing embodiments belong to the same concept, and specific implementation processes thereof are detailed in the method embodiments, which are not described herein again.
Fig. 5 shows a block diagram of a terminal 500 according to an exemplary embodiment of the present application. The terminal 500 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. The terminal 500 may also be called a user device, a portable terminal, a laptop terminal, a desktop terminal, or another name.
In general, the terminal 500 includes: a processor 501 and a memory 502.
Processor 501 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 501 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 501 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit); the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 501 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 501 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 502 may include one or more computer-readable storage media, which may be non-transitory. Memory 502 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 502 is used to store at least one instruction for execution by processor 501 to implement the vehicle retrieval method provided by the method embodiments of the present application.
In some embodiments, the terminal 500 may further optionally include: a peripheral interface 503 and at least one peripheral. The processor 501, memory 502, and peripheral interface 503 may be connected by buses or signal lines. The individual peripheral devices may be connected to the peripheral device interface 503 by buses, signal lines or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 504, touch display 505, camera 506, audio circuitry 507, positioning component 508, and power supply 509.
Peripheral interface 503 may be used to connect at least one Input/Output (I/O) related peripheral to processor 501 and memory 502. In some embodiments, processor 501, memory 502, and peripheral interface 503 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 501, memory 502, and peripheral interface 503 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 504 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuit 504 communicates with communication networks and other communication devices via electromagnetic signals. The radio frequency circuit 504 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 504 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 504 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the World Wide Web, metropolitan area networks, intranets, the generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 504 may also include NFC (Near Field Communication) related circuitry, which is not limited in the present application.
The display 505 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display 505 is a touch display, the display 505 also has the ability to collect touch signals on or above its surface. The touch signal may be input to the processor 501 as a control signal for processing. At this time, the display 505 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 505, disposed on the front panel of the terminal 500; in other embodiments, there may be at least two displays 505, disposed on different surfaces of the terminal 500 or in a folded design; in still other embodiments, the display 505 may be a flexible display disposed on a curved or folded surface of the terminal 500. The display 505 may even be arranged in a non-rectangular irregular pattern, that is, an irregularly-shaped screen. The display 505 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or other materials.
The camera assembly 506 is used to capture images or video. Optionally, the camera assembly 506 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth-of-field camera can be fused to realize a background blurring function, or the main camera and the wide-angle camera can be fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fused shooting functions. In some embodiments, the camera assembly 506 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash refers to a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuitry 507 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 501 for processing, or inputting the electric signals to the radio frequency circuit 504 for voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 500. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 501 or the radio frequency circuit 504 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuitry 507 may also include a headphone jack.
The positioning component 508 is used to locate the current geographic location of the terminal 500 to enable navigation or LBS (Location Based Service). The positioning component 508 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
A power supply 509 is used to power the various components in the terminal 500. The power supply 509 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power supply 509 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 500 further includes one or more sensors 510. The one or more sensors 510 include, but are not limited to: an acceleration sensor 511, a gyro sensor 512, a pressure sensor 513, a fingerprint sensor 514, an optical sensor 515, and a proximity sensor 516.
The acceleration sensor 511 can detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 500. For example, the acceleration sensor 511 may be used to detect components of gravitational acceleration on three coordinate axes. The processor 501 may control the touch display 505 to display a user interface in a landscape view or a portrait view according to a gravitational acceleration signal acquired by the acceleration sensor 511. The acceleration sensor 511 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 512 may detect a body direction and a rotation angle of the terminal 500, and the gyro sensor 512 may collect a 3D motion of the user to the terminal 500 in cooperation with the acceleration sensor 511. The processor 501 may implement the following functions based on the data collected by the gyro sensor 512: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 513 may be disposed at a side frame of the terminal 500 and/or at a lower layer of the touch display 505. When the pressure sensor 513 is disposed at a side frame of the terminal 500, a grip signal of the user to the terminal 500 may be detected, and the processor 501 performs left-right hand recognition or quick operation according to the grip signal collected by the pressure sensor 513. When the pressure sensor 513 is disposed at the lower layer of the touch display screen 505, the processor 501 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 505. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 514 is used to collect the user's fingerprint, and the processor 501 identifies the user's identity based on the fingerprint collected by the fingerprint sensor 514, or the fingerprint sensor 514 identifies the user's identity based on the collected fingerprint. Upon recognizing the user's identity as a trusted identity, the processor 501 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, changing settings, and the like. The fingerprint sensor 514 may be provided on the front, back, or side of the terminal 500. When a physical key or a vendor logo is provided on the terminal 500, the fingerprint sensor 514 may be integrated with the physical key or the vendor logo.
The optical sensor 515 is used to collect the ambient light intensity. In one embodiment, the processor 501 may control the display brightness of the touch screen 505 based on the ambient light intensity collected by the optical sensor 515. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display screen 505 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 505 is turned down. In another embodiment, the processor 501 may also dynamically adjust the shooting parameters of the camera assembly 506 based on the ambient light intensity collected by the optical sensor 515.
A proximity sensor 516, also referred to as a distance sensor, is typically provided on the front panel of the terminal 500. The proximity sensor 516 serves to collect a distance between the user and the front surface of the terminal 500. In one embodiment, when the proximity sensor 516 detects that the distance between the user and the front of the terminal 500 gradually decreases, the processor 501 controls the touch display 505 to switch from the bright screen state to the off screen state; when the proximity sensor 516 detects that the distance between the user and the front surface of the terminal 500 gradually increases, the processor 501 controls the touch display 505 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 5 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
The embodiment of the application also provides a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of a mobile terminal, enable the mobile terminal to execute the vehicle retrieval method provided by the above embodiment.
The embodiment of the application also provides a computer program product containing instructions, which when run on a computer, cause the computer to execute the vehicle searching method provided by the embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description covers only preferred embodiments of the application and is not intended to limit the application to the precise form disclosed; any modifications, equivalents, and alternatives made within the spirit and scope of the application shall fall within the protection scope of the application.

Claims (11)

1. A vehicle retrieval method, the method comprising:
acquiring a shooting image of a vehicle to be retrieved;
Invoking a target network model, inputting the shot image into the target network model, extracting features of the shot image once by the target network model, and outputting vehicle image features, wherein the vehicle image features are used for describing global information of a vehicle, the vehicle image features comprise global association features and specific dimension segments, the global association features and the specific dimension segments are arranged in the vehicle image features according to a preset rule, the specific dimension segments are used for describing specific local areas of the vehicle, the global association features are used for describing features except the specific local areas in the vehicle, and the target network model is used for determining the vehicle image features of any vehicle based on the shot image of any vehicle;
based on the vehicle image features, data associated with the vehicle is retrieved from a database storing a plurality of matching image features, each matching image feature including a matching local region feature corresponding to the particular local region.
2. The method of claim 1, wherein the retrieving data associated with the vehicle from a database based on the vehicle image features comprises:
Determining cosine similarity between the vehicle image features and each matching image feature in the database, and obtaining a first similarity score corresponding to each matching image feature;
according to the sequence of the first similarity scores from large to small, obtaining the matching image features corresponding to the first similarity scores of the preset quantity from the database;
determining the matching local area characteristics corresponding to the specific local area from each acquired matching image characteristic to obtain the preset number of matching local area characteristics;
determining cosine similarity between the features in the specific dimension section and each of the preset number of matched local region features to obtain the preset number of second similarity scores;
and retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
3. The method of claim 2, wherein each of the matching image features is identical to the data structure of the vehicle image feature, and wherein determining the matching local region feature corresponding to the particular local region from each of the acquired matching image features comprises:
Determining a location of a feature within the particular dimension segment in the vehicle image feature;
and obtaining the matching characteristic corresponding to the position from each obtained matching image characteristic, and obtaining the matching local area characteristic corresponding to the specific local area in each matching image characteristic.
4. The method of claim 2, wherein when the database stores correspondence between a plurality of matching image features and vehicle information, the retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores comprises:
respectively carrying out weighted summation on each first similar score in the preset number of first similar scores and a corresponding second similar score in the preset number of second similar scores to obtain a preset number of third similar scores;
determining a maximum third similar score from the preset number of third similar scores;
determining the matching image feature corresponding to the maximum third similarity score from the preset number of matching image features;
and acquiring vehicle data corresponding to the determined matching image features from the corresponding relations between the plurality of matching image features of the database and the vehicle information, and obtaining data associated with the vehicle.
5. The method of claim 1, wherein the target network model is trained based on a plurality of image samples, a vehicle class label for each image sample, and location information for a particular local area.
6. A vehicle retrieval device, the device comprising:
the acquisition module is used for acquiring a shooting image of the vehicle to be retrieved;
the system comprises a calling module, a target network model and a target network model, wherein the calling module is used for calling the target network model, inputting the shot image into the target network model, extracting features of the shot image once by the target network model, outputting vehicle image features, wherein the vehicle image features are used for describing global information of a vehicle, the vehicle image features comprise global association features and specific dimension segments, the global association features and the specific dimension segments are arranged in the vehicle image features according to a preset rule, the specific dimension segments are used for describing specific local areas of the vehicle, the global association features are used for describing features except the specific local areas in the vehicle, and the target network model is used for determining the vehicle image features of any vehicle based on the shot image of any vehicle;
And the retrieval module is used for retrieving data associated with the vehicle from a database based on the vehicle image features, wherein the database stores a plurality of matching image features, and each matching image feature comprises matching local area features corresponding to the specific local area.
7. The apparatus of claim 6, wherein the retrieval module is to:
determining cosine similarity between the vehicle image features and each matching image feature in the database, and obtaining a first similarity score corresponding to each matching image feature;
according to the sequence of the first similarity scores from large to small, obtaining the matching image features corresponding to the first similarity scores of the preset quantity from the database;
determining the matching local area characteristics corresponding to the specific local area from each acquired matching image characteristic to obtain the preset number of matching local area characteristics;
determining cosine similarity between the features in the specific dimension section and each of the preset number of matched local region features to obtain the preset number of second similarity scores;
and retrieving data associated with the vehicle from the database based on the preset number of first similarity scores and the preset number of second similarity scores.
8. The apparatus of claim 7, wherein the retrieval module is to:
each matching image feature is identical to the data structure of the vehicle image feature, and the position of the feature in the specific dimension section in the vehicle image feature is determined;
and obtaining the matching characteristic corresponding to the position from each obtained matching image characteristic, and obtaining the matching local area characteristic corresponding to the specific local area in each matching image characteristic.
9. The apparatus of claim 7, wherein the retrieval module is configured to:
perform a weighted summation of each first similarity score in the preset number of first similarity scores and the corresponding second similarity score in the preset number of second similarity scores to obtain a preset number of third similarity scores;
determine a maximum third similarity score from the preset number of third similarity scores;
determine the matching image feature corresponding to the maximum third similarity score from the preset number of matching image features;
and acquire, from correspondences stored in the database between the plurality of matching image features and vehicle information, the vehicle data corresponding to the determined matching image feature, thereby obtaining the data associated with the vehicle.
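The fusion in claim 9 reduces to a per-candidate weighted sum, an argmax, and a lookup of the associated vehicle record. A minimal sketch, where the weights and the record structure are assumptions (the patent does not fix particular weight values):

```python
import numpy as np

def fuse_and_pick(first_scores, second_scores, records, w1=0.5, w2=0.5):
    # Third similarity score: weighted sum of each candidate's paired
    # first (whole-image) and second (local-area) similarity scores.
    third = w1 * np.asarray(first_scores) + w2 * np.asarray(second_scores)
    best = int(np.argmax(third))  # index of the maximum third similarity score
    # Look up the vehicle data associated with the winning matching feature.
    return records[best], float(third[best])
```

Tuning `w1`/`w2` trades off global appearance against the discriminative local area (e.g. weighting the local score higher to separate visually identical vehicle models).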
10. The apparatus of claim 6, wherein the target network model is trained based on a plurality of image samples, a vehicle class label of each image sample, and position information of the specific local area.
11. A computer readable storage medium having instructions stored thereon, which when executed by a processor, implement the steps of the method of any of claims 1-5.
CN201910134010.7A 2019-02-22 2019-02-22 Vehicle searching method, device and storage medium Active CN111611414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910134010.7A CN111611414B (en) 2019-02-22 2019-02-22 Vehicle searching method, device and storage medium


Publications (2)

Publication Number Publication Date
CN111611414A CN111611414A (en) 2020-09-01
CN111611414B CN111611414B (en) 2023-10-24

Family

ID=72202973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910134010.7A Active CN111611414B (en) 2019-02-22 2019-02-22 Vehicle searching method, device and storage medium

Country Status (1)

Country Link
CN (1) CN111611414B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113569911A (en) * 2021-06-28 2021-10-29 北京百度网讯科技有限公司 Vehicle identification method and device, electronic equipment and storage medium
CN115222896B (en) * 2022-09-20 2023-05-23 荣耀终端有限公司 Three-dimensional reconstruction method, device, electronic device and computer-readable storage medium


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108229468A (en) * 2017-06-28 2018-06-29 北京市商汤科技开发有限公司 Vehicle appearance feature recognition and vehicle retrieval method, apparatus, storage medium, and electronic device
WO2019001481A1 (en) * 2017-06-28 2019-01-03 北京市商汤科技开发有限公司 Vehicle appearance feature identification and vehicle search method and apparatus, storage medium, and electronic device
CN108197538A (en) * 2017-12-21 2018-06-22 浙江银江研究院有限公司 Checkpoint vehicle searching system and method based on local features and deep learning
CN108197326A (en) * 2018-02-06 2018-06-22 腾讯科技(深圳)有限公司 Vehicle retrieval method and apparatus, electronic device, and storage medium
CN108596277A (en) * 2018-05-10 2018-09-28 腾讯科技(深圳)有限公司 License plate detection and recognition method, apparatus, and storage medium
CN109063768A (en) * 2018-08-01 2018-12-21 北京旷视科技有限公司 Vehicle re-identification method, apparatus, and system
CN109359696A (en) * 2018-10-29 2019-02-19 重庆中科云丛科技有限公司 Vehicle model recognition method, system, and storage medium

Also Published As

Publication number Publication date
CN111611414A (en) 2020-09-01

Similar Documents

Publication Publication Date Title
CN110807361B (en) Human body identification method, device, computer equipment and storage medium
CN110222789B (en) Image recognition method and storage medium
CN111753784B (en) Video special effect processing method, device, terminal and storage medium
CN112084811B (en) Identity information determining method, device and storage medium
CN111127509B (en) Target tracking method, apparatus and computer readable storage medium
CN111782950B (en) Sample data set acquisition method, device, equipment and storage medium
CN111857793B (en) Training method, device, equipment and storage medium of network model
CN111754386B (en) Image area shielding method, device, equipment and storage medium
CN112148899A (en) Multimedia recommendation method, device, equipment and storage medium
CN112052897A (en) Multimedia data shooting method, device, terminal, server and storage medium
CN112261491B (en) Video time sequence marking method and device, electronic equipment and storage medium
CN108831423B (en) Method, device, terminal and storage medium for extracting main melody tracks from audio data
CN111611414B (en) Vehicle searching method, device and storage medium
CN109547847B (en) Method and device for adding video information and computer readable storage medium
CN110471614B (en) Method for storing data, method and device for detecting terminal
CN112100528B (en) Method, device, equipment and medium for training search result scoring model
CN112990424B (en) Neural network model training method and device
CN113592874B (en) Image display method, device and computer equipment
CN110737692A (en) data retrieval method, index database establishment method and device
CN111860064B (en) Video-based target detection method, device, equipment and storage medium
CN111310526B (en) Parameter determination method and device for target tracking model and storage medium
CN111563201A (en) Content pushing method, device, server and storage medium
CN112135256A (en) Method, device and equipment for determining movement track and readable storage medium
CN113724739B (en) Method, terminal and storage medium for retrieving audio and training acoustic model
CN114299997B (en) Audio data processing method, device, electronic device, storage medium and product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant