CN114359216A - Image data processing method, device, equipment and storage medium - Google Patents

Publication number
CN114359216A
Authority
CN
China
Prior art keywords
image data
score
target
quality score
processing
Prior art date
Legal status
Withdrawn
Application number
CN202111666092.3A
Other languages
Chinese (zh)
Inventor
张辉
李建
黄伟锋
张恒
Current Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202111666092.3A
Publication of CN114359216A
Legal status: Withdrawn

Landscapes

  • Studio Devices (AREA)

Abstract

The application discloses an image data processing method, an image data processing device, image data processing equipment and a storage medium. The image data processing method includes: acquiring image data to be processed; determining a quality score of the image data; and determining whether to perform preset processing on the image data based on the quality scores, wherein the preset processing comprises storing to a first image library and/or distributing to a third-party platform. According to the scheme, whether the image data needs to be subjected to preset processing or not can be determined according to the quality of the image data, and the flexibility of image data processing can be improved.

Description

Image data processing method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image data processing method, apparatus, device, and storage medium.
Background
With the deep development of public security information construction of smart cities, massive video and image data serve as important basic resources for smart city perception and play more and more important roles in the field of public security.
Generally, images captured by various shooting devices serve as basic resources of the smart city. However, different image data often play different roles, and processing all image data in the same way often causes inconvenience in subsequent use.
Disclosure of Invention
The present application provides at least an image data processing method, apparatus, device, and storage medium.
The application provides an image data processing method, which comprises the following steps: acquiring image data to be processed; determining a quality score of the image data; and determining whether to perform preset processing on the image data based on the quality scores, wherein the preset processing comprises storing to a first image library and/or distributing to a third-party platform.
Therefore, by performing the corresponding preset processing according to the quality score of the image data, rather than performing the same processing for all the image data, the flexibility of image data processing is improved.
Wherein, after acquiring the image data to be processed, the method further comprises: acquiring the data processing mode corresponding to the shooting device of the image data; and in response to the data processing mode corresponding to the shooting device being a data cleaning mode, executing the step of determining the quality score of the image data.
Therefore, by setting a corresponding data processing mode for the photographing apparatus instead of forcing data processing for all photographing apparatuses, hardware resource consumption due to data processing can be reduced.
Wherein, before acquiring the data processing mode corresponding to the shooting device of the image data, the method comprises: receiving a first instruction from a user setting the data processing mode of the shooting device to the data cleaning mode; and in response to the first instruction, configuring the data processing mode of the shooting device to the data cleaning mode corresponding to the first instruction.
Therefore, by receiving an instruction of a user to set a data processing mode for the photographing apparatus, the data processing mode of the photographing apparatus is configured so that image data processing can be performed according to the user's needs.
Wherein determining a quality score of the image data comprises: determining at least one target score acquisition model corresponding to the photographing device; the target score acquisition model is used for outputting the quality score of a target object in the image data; determining a quality score of the image data using the at least one target score acquisition model.
Therefore, by using at least one target score acquisition model, the manner in which quality scores are determined for image data is made more diverse.
Wherein, before determining the at least one target score acquisition model corresponding to the photographing apparatus, the method comprises: receiving a second instruction from the user setting at least one target score acquisition model for the shooting device; and in response to the second instruction, configuring the target score acquisition model corresponding to the second instruction for the shooting device.
Therefore, by receiving a second instruction of setting at least one target score acquisition model for the shooting device by the user and configuring the corresponding target score acquisition model for the shooting device according to the instruction, image data processing can be carried out according to the requirements of the user.
Wherein the number of target score acquisition models is at least two; and determining the quality score of the image data using the at least one target score acquisition model comprises: obtaining a candidate quality score of the image data using each target score acquisition model respectively; and taking the candidate quality score satisfying a preset condition as the final quality score of the image data.
Therefore, the determined final quality score is more accurate by obtaining a plurality of candidate quality scores related to the image data by using each target score acquisition model and taking the candidate quality score satisfying the preset condition as the final quality score of the image data.
Wherein determining a quality score of the image data using at least one target score acquisition model comprises: performing feature extraction on a target object in the image data by using at least one target score acquisition model to obtain a plurality of feature points related to the target object, wherein the target objects corresponding to different target score acquisition models are the same or different; and determining the quality score of the image data based on the number of the characteristic points, wherein the quality score is positively correlated with the number of the characteristic points.
Therefore, the quality score of the image data can be determined by extracting the features of the image data and then according to the number of the feature points obtained by feature extraction. In addition, because the feature points can be used for processing the image such as subsequent detection and identification, the more the number of the feature points is, the better the processing can be realized, and the image quality can be well reflected by using the number of the feature points.
Wherein determining at least one target score acquisition model corresponding to the photographing apparatus includes: acquiring a processing scene category corresponding to the image data as a target scene category; and determining at least one target score acquisition model corresponding to the shooting equipment according to the target scene category.
Therefore, by using a target score acquisition model matching the processing scene category of the image data, the applicability of determining the quality score of the image data can be improved.
Wherein acquiring the processing scene category corresponding to the image data comprises: acquiring preset configuration information, wherein the configuration information comprises identifiers of a plurality of shooting devices and the processing scene categories corresponding to the shooting devices; and querying the configuration information, using the identifier of the shooting device, for the processing scene category corresponding to the image data to be processed.
Therefore, by setting corresponding processing scene types for the respective shooting devices, targeted quality score determination can be performed for image data shot by different shooting devices, thereby improving flexibility in determining image data quality scores.
Wherein determining whether to perform preset processing on the image data based on the quality score includes: responding to the quality score meeting the preset requirement, and storing the image data to a first image library; in response to the quality score not meeting the preset requirement, not storing the image data or storing the image data to a second image library; and/or, in response to the quality score meeting a preset requirement, distributing the image data to a third party platform; and in response to the quality score not meeting the preset requirement, not distributing the image data to the third party platform.
Therefore, the image data is stored in the first image library only when the quality score of the image data meets the preset requirement, and the storage resource of the first image library can be saved. In addition, under the condition that the quality score of the image data meets the preset requirement, the image data is distributed to a third-party platform, and network transmission resources can be saved.
Wherein, on the basis of the quality score, determining whether to perform preset processing on the image data further comprises: responding to the quality score meeting the preset requirement, and acquiring image information of the image data; constructing structured data using the image information; storing the structured data to a first image library; and/or, distributing the structured data to a third party platform.
Therefore, by acquiring the image information of the image data with the quality score meeting the preset requirement and constructing the corresponding structured data, rather than acquiring the structured data for all the image data, the resource consumption in the process of acquiring the structured data can be saved. And, subsequent retrieval from the structured data is facilitated by storing or distributing the structured data.
The application provides an image data processing apparatus, including: the data acquisition module is used for acquiring image data to be processed; a score determination module for determining a quality score of the image data; and the processing module is used for determining whether to perform preset processing on the image data based on the quality scores, wherein the preset processing comprises storage in the first image library and/or distribution to a third-party platform.
The application provides an electronic device comprising a memory and a processor, wherein the processor is used for executing program instructions stored in the memory so as to realize the image data processing method.
The present application provides a computer-readable storage medium having stored thereon program instructions that, when executed by a processor, implement the above-described image data processing method.
According to the scheme, by executing the corresponding preset processing according to the quality score of the image data, rather than performing the same processing on all image data, the flexibility of image data processing is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of an image data processing method according to the present application;
FIG. 2 is a partial flow chart of an embodiment of an image data processing method according to the present application;
FIG. 3 is a partial sub-flowchart diagram illustrating step S12 according to an embodiment of the image data processing method of the present application;
FIG. 4 is another schematic flow chart diagram illustrating an embodiment of an image data processing method of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of an image data processing apparatus according to the present application;
FIG. 6 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of an image data processing method according to the present application. Specifically, the image data processing method may include the steps of:
step S11: image data to be processed is acquired.
The image data to be processed may be in various categories, for example, the image data to be processed may be a security monitoring image, a general photographic image, a medical image, or the like.
The image data to be processed may be captured by a camera assembly carried by the execution device that executes the image data processing method, or may be transmitted to the execution device by other devices through various communication modes. Other devices refer to devices that do not share the same processor as the execution device.
In other disclosed embodiments, the image data to be processed may also be a plurality of frames of image data extracted from a segment of video data. Wherein, shooting equipment can be the surveillance camera head in the security protection system.
Step S12: a quality score of the image data is determined.
The quality score of the image data may be determined comprehensively based on image parameters such as the sharpness and brightness of the image data. Alternatively, the quality score of the image data may be determined by the number of target objects contained in the image data and/or the sharpness of the target objects. The target object may be a human face, a human body, an animal body, a motor vehicle, a non-motor vehicle, or the like. In some application scenarios, the sharper the image data is, the higher its quality score is.
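As a rough illustration of the first option above (scoring from image parameters), the following Python sketch combines a Laplacian-variance sharpness measure with a brightness term. The weights, reference values and 0-10 scale are assumptions made for illustration and are not specified by this application.

```python
import cv2
import numpy as np

def image_parameter_score(image_bgr: np.ndarray) -> float:
    """Hypothetical quality score from sharpness and brightness (0-10 scale)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # higher variance = sharper image
    brightness = float(gray.mean())                    # mean grey level, 0-255
    sharpness_term = min(sharpness / 500.0, 1.0)       # 500 is an assumed reference value
    brightness_term = 1.0 - abs(brightness - 128.0) / 128.0
    return 10.0 * (0.7 * sharpness_term + 0.3 * brightness_term)
```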
Step S13: and determining whether to perform preset processing on the image data based on the quality scores, wherein the preset processing comprises storing to a first image library and/or distributing to a third-party platform.
In some application scenarios, whether to store the image data to the first image library is determined based on the quality score. For example, the first image library may be a high-quality image library: image data with a quality score not lower than a first quality score threshold is stored in the first image library, while image data with a quality score lower than the first quality score threshold is not. In that case, image data whose quality score is below the threshold may simply be discarded, which reduces the storage space occupied by image data and saves storage resources.
In some application scenarios, whether to distribute the image data to a third-party platform is determined based on the quality score. In order to reduce network resource consumption during data transmission, only image data with higher quality scores may be distributed. For example, image data with a quality score not lower than a second quality score threshold may be distributed to the third-party platform, while image data with a quality score lower than the second quality score threshold is not distributed; such image data may be directly discarded or stored in a second image library. The second image library may be a low-quality image library. The first quality score threshold and the second quality score threshold may be the same or different. The third-party platform refers to another platform that establishes a connection with the device executing the image data processing method, for example an intelligent transportation platform or a security platform connected with the execution device.
In some application scenarios, it is determined based on the quality score whether to both store the image data to the first image library and distribute it to the third-party platform. Specifically, image data with a quality score not less than the first quality score threshold is stored to the first image library, and image data with a quality score not less than the second quality score threshold is distributed to the third-party platform. Here, the first quality score threshold and the second quality score threshold may be equal or unequal. Additionally, image data with a quality score below the first quality score threshold and/or the second quality score threshold may be discarded or saved to the second image library.
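A minimal sketch of the routing just described, assuming numeric thresholds on a 0-10 scale; the function names, threshold values and print-statement stubs are placeholders and not part of this application.

```python
FIRST_THRESHOLD = 6.0   # assumed gate for the first (high-quality) image library
SECOND_THRESHOLD = 6.0  # assumed gate for distribution to the third-party platform

def store_to_first_library(image):
    print("stored to first image library")

def store_to_second_library(image):
    print("stored to second image library")

def distribute_to_third_party(image):
    print("distributed to third-party platform")

def route(image, quality_score: float) -> None:
    # Store high-scoring images; demote (or simply discard) the rest.
    if quality_score >= FIRST_THRESHOLD:
        store_to_first_library(image)
    else:
        store_to_second_library(image)
    # Distribute only high-scoring images to save network transmission resources.
    if quality_score >= SECOND_THRESHOLD:
        distribute_to_third_party(image)
```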
According to the scheme, by executing the corresponding preset processing according to the quality score of the image data, rather than performing the same processing on all image data, the flexibility of image data processing is improved.
Referring to fig. 2, fig. 2 is a partial schematic flow chart of an embodiment of an image data processing method according to the present application. As shown in fig. 2, after acquiring the image data to be processed, the image data processing method may further perform the steps of:
step S21: and acquiring a data processing mode corresponding to the shooting equipment of the image data.
The shooting device of the image data is the device that captured the image data. The data processing mode includes a data cleaning mode and a non-data cleaning mode. The data cleaning mode means that image data processing needs to be performed on the image data captured by the shooting device; the non-data cleaning mode means that the image data captured by the shooting device is directly stored in the first image library and/or distributed to the third-party platform without performing image data processing on it.
In some disclosed embodiments, the data processing mode corresponding to each shooting device can be set by the user. Before executing a data processing mode corresponding to the shooting device for acquiring the image data, the image data processing method may further include: and receiving a setting instruction of a user for setting a data processing mode for the shooting equipment. And in response to the setting instruction, configuring the data processing mode of the photographing apparatus to the data processing mode corresponding to the setting instruction. Illustratively, a first instruction of a user to set a data processing mode to a data cleaning mode for the shooting device is received. And responding to the first instruction, and configuring the data processing mode of the shooting device into a data cleaning mode corresponding to the first instruction. Or receiving a third instruction of setting the data processing mode to be the non-data cleaning mode for the shooting device by the user. And responding to a third instruction, and configuring the data processing mode of the shooting device into a non-data cleaning mode corresponding to the third instruction. The data processing mode of the shooting device is configured by receiving an instruction of a user to set the data processing mode of the shooting device, so that image data processing can be performed according to the requirements of the user.
Specifically, the client corresponding to the execution device is used to receive instructions from the user. For example, a setting instruction from the user configuring the data cleaning mode of shooting device A is received, and the data processing mode of shooting device A is set to the data cleaning mode in response. An instruction from the user setting the data processing modes of a plurality of shooting devices in batch may be received, or an instruction setting one shooting device at a time may be received. In some application scenarios, batch setting may be performed according to the installation positions of the shooting devices. For example, a setting instruction from the user for the data processing mode of a certain street is received, and in response, the data processing modes of all shooting devices installed on that street are set correspondingly. Then, the configuration information of the shooting devices regarding the data processing mode is sent to the picture stream access service. The picture stream access service acquires the data processing mode corresponding to the shooting device of the image data according to the configuration information of the shooting device, and selects different subsequent operation flows according to that data processing mode.
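The per-device mode configuration above could look like the following sketch; the device identifiers, mode names and batch-by-street grouping are hypothetical.

```python
device_modes: dict = {}  # device identifier -> "data_cleaning" or "no_data_cleaning"

def set_data_processing_mode(device_ids: list, mode: str) -> None:
    """Apply one user instruction to a single device or to a batch of devices."""
    for device_id in device_ids:
        device_modes[device_id] = mode

# A first instruction for one device, and a batch setting by installation street.
set_data_processing_mode(["camera_A"], "data_cleaning")
set_data_processing_mode(["street_7_cam_%02d" % i for i in range(8)], "no_data_cleaning")
```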
Step S22: and in response to the data processing mode corresponding to the shooting equipment being the data cleaning mode, executing the step of determining the quality score of the image data.
Alternatively, in response to the data processing mode corresponding to the shooting device being the non-data cleaning mode, the preset processing is performed on the image data directly. That is, when the data processing mode corresponding to the shooting device is the non-data cleaning mode, the image data captured by the device is directly stored in the first image library and/or distributed to the third-party platform.
By setting the corresponding data processing mode for the shooting device instead of forcing data processing for all the shooting devices, hardware resource consumption caused by data processing can be reduced.
In some application scenarios, additional hardware resources, such as GPU resources, need to be added for image data processing. Therefore, image data processing can be performed only on image data captured by shooting devices located at specific or core positions, according to user needs, so as to reduce excessive consumption of hardware resources.
Referring to fig. 3, fig. 3 is a partial sub-flowchart diagram illustrating step S12 according to an embodiment of the image data processing method of the present application. As shown in fig. 3, the step S12 may include the following steps:
step S121: at least one target score acquisition model corresponding to the photographing apparatus is determined.
The target score acquisition model is used for outputting the quality score of the target object in the image data. Optionally, the target objects corresponding to different target score obtaining models may be the same or different.
Before step S121 is executed, the image data processing method may further execute the following steps:
and receiving a second instruction of setting at least one target score acquisition model for the shooting device by the user. Then, in response to the second instruction, a target score acquisition model corresponding to the second instruction is configured for the shooting device. The target score obtaining model configured for the same shooting device may be a model corresponding to the same target object but having different ways of obtaining the quality scores of the target objects, for example, the model may be provided by different manufacturers, the target score obtaining model configured for the same shooting device may also be a model for obtaining the quality scores of different target objects, and for example, the target score obtaining model configured for the shooting device a may include a model for outputting the quality scores of human faces in the image data and a model for outputting the quality scores of motor vehicles in the image data.
Specifically, the user may set the target score obtaining model of each shooting device in a human-computer interaction interface on the client corresponding to the execution device. And then, issuing the configuration information of the shooting equipment about the target score acquisition model to the picture cleaning service. The picture cleaning service acquires a target score acquisition model corresponding to the shooting equipment of the image data according to the configuration information of the shooting equipment, and acquires a quality score of the image data according to the target score acquisition model corresponding to the shooting equipment of the image data.
By receiving a second instruction of setting at least one target score acquisition model for the shooting equipment by the user and configuring the corresponding target score acquisition model for the shooting equipment according to the instruction, image data processing can be carried out according to the requirements of the user.
Optionally, step S121 may further include the steps of:
and acquiring a processing scene type corresponding to the image data as a target scene type. The type of the processing scene type may be set by the user, or may be a default type. For example, the processing scenario category may be a traffic processing scenario category, a punch-card processing scenario category, a prison regulatory processing scenario category, and so on. Regarding the processing scene category, it may be determined according to the main function of the image data captured by the capturing device. For example, if the photographing apparatus is located in an office building, the main function of the image data photographed by the photographing apparatus may be to record the behavior of a person. For example, whether the person is late or early is recorded, and thus, the processing scene category corresponding to the image data captured by the capturing device may be a card punching processing scene category.
The method for acquiring the processing scene category corresponding to the image data may be: and acquiring preset configuration information. The configuration information includes the identifiers of the plurality of shooting devices and the processing scene categories corresponding to the shooting devices. And then, by using the identification of the shooting equipment for shooting the image data to be processed, the processing scene type corresponding to the image data to be processed is obtained by inquiring the configuration information. By setting corresponding processing scene types for the shooting devices, the quality scores of the image data shot by different shooting devices can be determined in a targeted manner, and therefore flexibility in determining the quality scores of the image data is improved.
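A sketch of the configuration lookup described above; the device identifiers and category names below are illustrative placeholders.

```python
# Preset configuration: shooting-device identifier -> processing scene category.
scene_configuration = {
    "camera_A": "traffic",
    "camera_B": "card_punching",
    "camera_C": "prison_supervision",
}

def processing_scene_category(device_id: str, default: str = "traffic") -> str:
    """Query the configuration using the identifier of the device that shot the image data."""
    return scene_configuration.get(device_id, default)
```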
Before the preset configuration information is acquired, the method may further include the following steps:
and receiving a processing scene type setting instruction of each shooting device from a user. And then, based on the processing scene category setting instruction, constructing configuration information. The user can set the processing scene type of each shooting device in the human-computer interaction interface on the client corresponding to the execution device. The method can receive an instruction of a user for batch setting of processing scene types of a plurality of shooting devices, and can also receive an instruction of setting of one shooting device at a time. In some application scenarios, batch setting may be performed according to the installation position of the image pickup apparatus. For example, a setting instruction of a data processing mode of a certain street by a user is received, and in response to the setting instruction, processing scene categories of all shooting devices installed on the certain street are set correspondingly. By setting corresponding processing scene types for the shooting devices, the quality scores of the image data shot by different shooting devices can be determined in a targeted manner, and therefore flexibility in determining the quality scores of the image data is improved.
In some application scenarios, an instruction for processing a scenario category imported by a user may be received. The processing scene category imported by the user may be a category that the execution device does not originally have. By the method, the image data processing method provided by the embodiment of the disclosure can adapt to more scenes.
In some application scenarios, a score obtaining model imported by a user is received, and an association relation between the score obtaining model and at least one processing scenario category is established for subsequent use.
In some disclosed embodiments, after the target scene category is acquired, at least one target score acquisition model corresponding to the target scene category is determined according to the target scene category, so that the target score acquisition model for processing the image data can meet the requirement of the shooting device in an actual scene. The different target score acquisition models can be used for feature extraction of target objects matched with the target scene categories.
By acquiring the processing scene types of the shooting devices and determining the target score acquisition model, the quality scores of the image data can be determined according to the specific requirements of the user, so that the method is suitable for processing the data under different processing scenes.
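Continuing the same illustrative assumptions, the target scene category could select the target score acquisition models roughly as follows; the model names are placeholders, not models defined by this application.

```python
# Target scene category -> target score acquisition models used for that category.
category_to_models = {
    "traffic": ["motor_vehicle_score_model", "non_motor_vehicle_score_model"],
    "card_punching": ["face_score_model"],
    "prison_supervision": ["face_score_model", "human_body_score_model"],
}

def target_score_models(target_category: str) -> list:
    """Return the at least one target score acquisition model for the target scene category."""
    return category_to_models.get(target_category, [])
```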
Step S122: determining a quality score of the image data using the at least one target score acquisition model.
In some disclosed embodiments, there are at least two target score acquisition models. Step S122 may include the following steps: obtaining a candidate quality score of the image data using each target score acquisition model respectively. If the number of target objects in the image data is greater than 1, a target score acquisition model may determine a quality score for each target object and then take the highest quality score as its candidate quality score. Illustratively, if the image data input to a target score acquisition model whose target object is a human face contains three human faces, and the quality scores of the first, second and third faces are 3, 5 and 7 respectively, then the candidate quality score output by that target score acquisition model is 7.
Then, the candidate quality score satisfying a preset condition is taken as the final quality score of the image data. Illustratively, the image data is input into each target score acquisition model, each of which outputs the quality score of its corresponding target object, and the quality score of the target object output by each model is taken as a candidate quality score of the image data. The preset condition may be, for example, the maximum value, the minimum value, or the average value among the candidate quality scores. The embodiment of the present disclosure takes the maximum value among the candidate quality scores as the final quality score of the image data.
Illustratively, the target score acquisition models corresponding to the image data include a face score acquisition model, whose target object is a human face, and a motor vehicle score acquisition model, whose target object is a motor vehicle. If the image data contains exactly one face and one motor vehicle, the quality scores of the face and the motor vehicle are determined respectively; if the quality score of the face is 3 and the quality score of the motor vehicle is 6, the quality score of the image data is 6.
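The selection of candidate and final quality scores can be sketched as follows, reproducing the two worked examples above (three faces scoring 3/5/7, and a face/vehicle pair scoring 3 and 6); the preset score used when a model's target object is absent is assumed to be 0.

```python
def candidate_score(per_object_scores: list, missing_score: float = 0.0) -> float:
    """One model's candidate score: the highest score among its detected target objects."""
    return max(per_object_scores, default=missing_score)

def final_score(candidate_scores: list) -> float:
    """Preset condition used in this embodiment: the maximum candidate score."""
    return max(candidate_scores)

assert candidate_score([3, 5, 7]) == 7                                   # face model, three faces
assert final_score([candidate_score([3]), candidate_score([6])]) == 6    # face 3, motor vehicle 6
```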
By using at least one target score acquisition model, the manner in which quality scores are determined for image data is made more diverse.
A plurality of candidate quality scores of the image data are obtained by utilizing each target score acquisition model, and the candidate quality scores meeting preset conditions are used as the final quality scores of the image data, so that the determined final quality scores are more accurate.
In some disclosed embodiments, the at least one target score acquisition model is used separately, and the quality score of the image data may be determined by:
and performing feature extraction on the target object in the image data by using at least one target score acquisition model to obtain a plurality of feature points related to the target object. And the target objects corresponding to the different score acquisition models are the same or different.
Feature extraction is performed on the image data to obtain a plurality of feature points in the following manner: target recognition is performed on the image data using the target score acquisition model to determine whether the target object corresponding to that model exists in the image data. In response to the target object corresponding to the target score acquisition model existing in the image data, feature extraction is performed on the image region where the target object is located to obtain a plurality of feature points. In response to the target object corresponding to the target score acquisition model not existing in the image data, the target score acquisition model outputs a preset score as the candidate quality score of the image data. The preset score may be 0 or another preset value. In other application scenarios, in response to the image data not containing the target object corresponding to the target score acquisition model, the quality score is not determined, and the image data is directly discarded or stored in the second image library.
Then, based on the number of feature points, a quality score of the image data is determined.
The quality score is positively correlated with the number of feature points. Specifically, the quality score of the target object is positively correlated with the number of feature points corresponding to the target object; that is, the greater the number of feature points corresponding to the target object, the higher its quality score. The quality score of the image data can thus be determined by performing feature extraction on the image data and counting the feature points obtained. In addition, because the feature points can be used for subsequent processing of the image such as detection and recognition, the more feature points there are, the better such processing can be performed, so the number of feature points reflects the image quality well.
The quality score of the image data is determined by using at least one target score acquisition model, and the whole process is convenient and quick. In some application scenarios, one target score obtaining model can perform feature extraction on one target object, and also can perform feature extraction on multiple target objects. Wherein, a quality score is correspondingly output by one target score acquisition model. If one target score acquisition model performs feature extraction on a plurality of target objects, the target score acquisition model outputs a quality score meeting a preset condition in the quality scores of the feature points of the plurality of target objects as an output result of the target score acquisition model.
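The feature-point-based scoring described here could be sketched as follows. The application does not name a feature extractor, so ORB keypoints stand in for the feature points of a detected target object, and the saturation cap and 0-10 scale are assumptions.

```python
import cv2
import numpy as np

def feature_point_score(object_region_bgr: np.ndarray, cap: int = 200) -> float:
    """Score grows with the number of feature points, saturating at an assumed cap."""
    gray = cv2.cvtColor(object_region_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.ORB_create().detect(gray, None)  # ORB is a stand-in feature extractor
    return 10.0 * min(len(keypoints), cap) / cap
```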
In some disclosed embodiments, the step S13 may include the following steps:
and responding to the quality score meeting the preset requirement, and storing the image data to a first image library. And in response to the quality score not meeting the preset requirement, not storing the image data or storing the image data to a second image library. Wherein, the quality score meeting the preset requirement may be that the quality score is greater than or equal to a preset quality score threshold. Only the image data with higher quality scores is stored, so that the storage capacity of the low-quality image data in the first image library can be reduced, and the storage resources are saved to a certain extent.
Or, in response to the quality score meeting a preset requirement, distributing the image data to a third party platform. And in response to the quality score not meeting the preset requirement, not distributing the image data to the third party platform. Distributing the image data to the third party platform specifically includes uploading the image data to the third party platform. By only distributing the image data with higher quality scores to the third-party platform, the network bandwidth transmission efficiency can be improved.
In further disclosed embodiments, storing the image data to the first image repository or to the second image repository or to a third party platform comprises the steps of:
and responding to the quality score meeting the preset requirement, and storing the image data to a first image library. And distributing the image data to a third party platform in response to the quality score meeting a preset requirement.
And in response to the quality score not meeting the preset requirement, not storing the image data or storing the image data to a second image library. And in response to the quality score not meeting the preset requirement, not distributing the image data to the third party platform.
In some disclosed embodiments, a setting instruction from the user for the shooting device that captured the image data may be received, and in response to the setting instruction, the preset processing that may be executed is configured for the shooting device. For example, the user may set the executable preset processing of shooting device A to storing to the first image library; image data captured by shooting device A is then stored in the first image library when its quality score is determined to meet the preset requirement. Alternatively, a setting instruction specifying an operation that must be executed may be received from the user for the shooting device, and in response, that operation is executed after the image data of the shooting device is acquired. For example, the user may specify that all image data captured by shooting device B must be distributed to the third-party platform; after image data captured by shooting device B is acquired, it is distributed to the third-party platform regardless of whether its quality score meets the preset requirement. Setting instructions for both the executable preset processing and the operations that must be executed may be received for the same shooting device. If the executable preset processing and the mandatory operation of the same device contradict each other, one of them is selected for execution; further, when such a contradiction occurs, the mandatory operation may prevail. For example, suppose the user sets the executable preset processing of shooting device C to distribution to the third-party platform, and also sets distribution to the third-party platform as a mandatory operation of shooting device C. If image data captured by shooting device C is then found not to meet the preset requirement, the executable preset processing would not distribute it, while the mandatory operation would still distribute it; in this case the mandatory operation prevails and the image data is still distributed to the third-party platform. Of course, this is merely an example; in other embodiments, the executable preset processing may prevail instead.
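The precedence rule in the example above (the mandatory operation wins over the quality gate) reduces to a one-line check; the names and threshold are placeholders.

```python
def should_distribute(quality_score: float, threshold: float, mandatory: bool) -> bool:
    """Distribute if the device has a mandatory distribution setting,
    otherwise only when the quality score meets the preset requirement."""
    return mandatory or quality_score >= threshold
```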
According to the scheme, the image data are stored in the first image library only when the quality scores of the image data meet the preset requirements, and the storage resources of the first image library can be saved. In addition, under the condition that the quality score of the image data meets the preset requirement, the image data is distributed to a third-party platform, and network transmission resources can be saved.
In some disclosed embodiments, the step S13 includes the following steps:
and responding to the quality score meeting the preset requirement, and acquiring the image information of the image data. And uses the image information to construct structured data. The structured data may include various types of data that have been parsed for the image data, such as an image quality score, a gender of a target object included in the image, an age of the target object, accessories worn by the target object, and so forth. The structured data is stored to a first image repository and/or distributed to a third party platform. That is, the structured data is stored with the image data to a first image repository and/or distributed to a third party platform. The first image database and the second image database are image databases, and comprise a sub-database for storing image data and a sub-database for storing structured data. Wherein the structured data is available for retrieval. The manner in which the corresponding structured data is generated from the image data can be found in the general art, and will not be described in great detail herein.
By acquiring the image information of the image data to be stored or distributed and constructing the corresponding structured data, rather than acquiring the structured data for all the image data, the resource consumption in the process of acquiring the structured data can be saved. And, subsequent retrieval from the structured data is facilitated by storing or distributing the structured data.
In other disclosed embodiments, step S13 may further include the steps of:
and responding to the quality score not meeting the preset requirement, and acquiring the image information of the image data. And uses the image information to construct structured data. The structured data is then stored to a second image repository. That is, the structured data is stored with the image data to a second image repository.
For better understanding of the image data processing method provided by the embodiments of the present disclosure, please refer to the following examples.
Referring to fig. 4, fig. 4 is another schematic flow chart of an embodiment of the image data processing method of the present application. As shown in fig. 4, an image data processing method provided by an embodiment of the present disclosure may include the following steps:
step S31: an image data stream is acquired.
The specific image data stream may include a plurality of images to be processed. For a specific manner of acquiring the image data stream, please refer to the above manner of acquiring the image to be processed, which is not described herein again.
Step S32: and judging whether the image data stream belongs to a data cleaning mode.
Specifically, it is determined whether the data processing mode corresponding to the shooting device of the current image data stream is the data cleaning mode. The specific determination method is not described here again. If the current image data stream is determined to belong to the data cleaning mode, step S33 is executed. If the current image data stream is determined to belong to the non-data cleaning mode, steps S36 and S37 are executed.
Step S33: and acquiring a target score acquisition model corresponding to the image data stream.
The manner of obtaining the target score obtaining model corresponding to the current image data stream may refer to the manner of obtaining the target score obtaining model corresponding to the image data, and is not described herein again. By acquiring the target score acquisition model of the plurality of pieces of image data at one time, the operation of acquiring the target score acquisition model is not required to be executed after each piece of image data is acquired, and the execution efficiency of the equipment is improved.
The target score obtaining model may include a human face score obtaining model, a human body score obtaining model, a motor vehicle score obtaining model, a non-motor vehicle score obtaining model, and the like.
Step S34: and inputting a target score acquisition model.
Specifically, the image data in the image data stream is respectively input into at least one target score obtaining model shown in fig. 4, so as to obtain the quality scores of the image data. The manner of obtaining the quality score of each image data in the image data stream by using the target score obtaining model is as described above, and is not described herein again.
Step S35: and filtering the image data with the quality score not meeting the preset requirement.
The image data with the quality score not meeting the preset requirement may be filtered by deleting or saving the image data not meeting the preset requirement to the second image library.
Step S36: and storing the image data and the structured data.
The image data and the structured data are stored in a first image library.
Step S37: the image data and the structured data are distributed to a third party platform.
The manner of distributing the image data and the structured data to the third-party platform is described above, and will not be described herein again.
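Putting steps S31-S37 together, the flow of fig. 4 can be sketched as below; the helper functions, mode names and threshold are illustrative stand-ins for the services described above, not an implementation defined by this application.

```python
def run_model(model_name: str, image) -> float:
    """Placeholder for invoking one target score acquisition model (step S34)."""
    return 7.0

def store_and_distribute(image) -> None:
    """Placeholder for storing to the first image library and distributing
    to the third-party platform (steps S36 and S37)."""
    pass

def process_image_stream(image_stream, mode: str, models: list, threshold: float = 6.0) -> None:
    if mode != "data_cleaning":                      # step S32: non-data cleaning mode
        for image in image_stream:
            store_and_distribute(image)
        return
    for image in image_stream:                       # models come from step S33
        score = max((run_model(m, image) for m in models), default=0.0)  # step S34
        if score >= threshold:                       # step S35: filter low-scoring images
            store_and_distribute(image)
```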
According to the scheme, by executing the corresponding preset processing according to the quality score of the image data, rather than performing the same processing on all image data, the flexibility of image data processing is improved.
The main execution body of the image data processing method may be an image data processing apparatus, and the image processing apparatus may be any terminal device or server or other processing device capable of executing the method of the present application, where the terminal device may be a device for monitoring image analysis, a User Equipment (UE), a mobile device, a User terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an on-board device, a wearable device, an autonomous driving automobile, a robot, a security system, a product such as glasses and a helmet for augmented reality or virtual reality, and the like. In some possible implementations, the image data processing method may be implemented by a processor calling computer readable instructions stored in a memory.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of an image data processing apparatus according to the present application. The image data processing apparatus 40 includes a data acquisition module 41, a score determination module 42, and a processing module 43. The data acquisition module 41 is configured to acquire image data to be processed; the score determination module 42 is configured to determine a quality score of the image data; and the processing module 43 is configured to determine, based on the quality score, whether to perform preset processing on the image data, wherein the preset processing includes storing to the first image library and/or distributing to a third-party platform.
According to the scheme, by executing the corresponding preset processing according to the quality score of the image data, rather than performing the same processing on all image data, the flexibility of image data processing is improved.
In some disclosed embodiments, after the image data to be processed is acquired, the score determination module 42 is further configured to: acquire the data processing mode corresponding to the shooting device of the image data; and, in response to the data processing mode corresponding to the shooting device being the data cleaning mode, execute the step of determining the quality score of the image data.
According to the scheme, the corresponding data processing mode is set for the shooting equipment instead of performing data processing on all the shooting equipment, so that the hardware resource consumption caused by the data processing can be reduced.
In some disclosed embodiments, the image data processing device module 40 further includes a configuration module (not shown). Before the data processing mode corresponding to the shooting device for acquiring the image data, the configuration module is used for: receiving a first instruction of setting a data processing mode as a data cleaning mode for shooting equipment by a user; and responding to the first instruction, and configuring the data processing mode of the shooting device to be a data cleaning mode corresponding to the first instruction.
According to the scheme, the data processing mode of the shooting equipment is configured by receiving the first instruction of setting the data processing mode of the shooting equipment by the user, so that image data processing can be carried out according to the requirements of the user.
In some disclosed embodiments, score determination module 42 determines a quality score for the image data, including: determining at least one target score acquisition model corresponding to the photographing device; the target score acquisition model is used for outputting the quality score of a target object in the image data; determining a quality score of the image data using the at least one target score acquisition model.
According to the scheme, the mode of determining the quality score for the image data is more diversified by using at least one target score acquisition model.
In some disclosed embodiments, prior to determining the at least one target score acquisition model corresponding to the capture device, the configuration module is to: receiving a second instruction of setting at least one target score acquisition model for the shooting equipment by the user; and responding to the second instruction, and configuring a target score acquisition model corresponding to the second instruction for the shooting device.
According to the scheme, the second instruction of setting at least one target score acquisition model for the shooting equipment by the user is received, and the corresponding target score acquisition model is configured for the shooting equipment according to the second instruction, so that image data processing can be carried out according to the requirements of the user.
In some disclosed embodiments, the number of target score acquisition models is at least two; the score determination module 42 determines a quality score of the image data using at least one target score acquisition model, including: respectively obtaining a model by using each target score to obtain candidate quality scores of the image data; and taking the candidate quality scores meeting the preset conditions as the final quality scores of the image data.
According to the scheme, the candidate quality scores of the image data are obtained by utilizing the target score acquisition models, and the candidate quality scores meeting the preset conditions are used as the final quality scores of the image data, so that the determined final quality scores are more accurate.
In some disclosed embodiments, the score determination module 42 determines the quality score of the image data using at least one target score acquisition model, including: performing feature extraction on a target object in the image data by using at least one target score acquisition model to obtain a plurality of feature points related to the target object, wherein the target objects corresponding to different target score acquisition models are the same or different; and determining the quality score of the image data based on the number of the characteristic points, wherein the quality score is positively correlated with the number of the characteristic points.
According to the scheme, the quality score of the image data can be determined by extracting the features of the image data and then according to the number of the feature points obtained by feature extraction. In addition, because the feature points can be used for processing the image such as subsequent detection and identification, the more the number of the feature points is, the better the processing can be realized, and the image quality can be well reflected by using the number of the feature points.
In some disclosed embodiments, the score determining module 42 determines at least one target score acquisition model corresponding to the capture device, including: acquiring a processing scene category corresponding to the image data as a target scene category; and determining at least one target score acquisition model corresponding to the shooting equipment according to the target scene category.
According to the scheme, the target score acquisition model matched with the processing scene category of the image data is used, so that the applicability of determining the quality score of the image data can be improved.
In some disclosed embodiments, the score determining module 42 obtains a processing scene category corresponding to the image data, including: acquiring preset configuration information, wherein the configuration information comprises identifiers of a plurality of shooting devices and processing scene categories corresponding to the shooting devices; and inquiring the processing scene type corresponding to the image data to be processed from the configuration information by using the identification of the shooting equipment.
In this scheme, a processing scene category is set for each capture device, so the quality score of image data captured by different devices can be determined in a targeted manner, which improves the flexibility of quality score determination.
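Purely for illustration, the lookup described above could be expressed as below; the configuration tables, device identifiers, scene names, and model names are hypothetical.

```python
# Hypothetical preset configuration: device identifier -> processing scene
# category, and scene category -> target score acquisition models.
DEVICE_SCENES = {"cam-entrance-01": "face", "cam-road-07": "vehicle"}
SCENE_MODELS = {
    "face": ["face_quality_model"],
    "vehicle": ["plate_quality_model", "vehicle_quality_model"],
}

def models_for_device(device_id):
    """Resolve the target scene category for a device, then its score models."""
    scene = DEVICE_SCENES.get(device_id)
    return SCENE_MODELS.get(scene, []) if scene else []
```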
In some disclosed embodiments, the processing module 43 determining whether to perform the preset processing on the image data based on the quality score includes: in response to the quality score meeting the preset requirement, storing the image data in the first image library; in response to the quality score not meeting the preset requirement, not storing the image data or storing it in a second image library; and/or, in response to the quality score meeting the preset requirement, distributing the image data to the third-party platform; in response to the quality score not meeting the preset requirement, not distributing the image data to the third-party platform.
In this scheme, the image data is stored in the first image library only when its quality score meets the preset requirement, which saves storage resources of the first image library. Likewise, the image data is distributed to the third-party platform only when its quality score meets the preset requirement, which saves network transmission resources.
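As a sketch of this gating step, assuming the preset requirement is a simple numeric threshold and that the storage and distribution backends are passed in as callables (none of these details come from the disclosure):

```python
def handle_image(image, quality_score, store_first, store_second, distribute,
                 threshold=0.6):
    """Perform the preset processing only when the score meets the threshold."""
    if quality_score >= threshold:
        store_first(image)    # save to the first image library
        distribute(image)     # and/or push to the third-party platform
    else:
        store_second(image)   # or skip storage entirely
```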
In some disclosed embodiments, the processing module 43 determining whether to perform the preset processing on the image data based on the quality score further includes: in response to the quality score meeting the preset requirement, acquiring image information of the image data; constructing structured data from the image information; and storing the structured data in the first image library and/or distributing the structured data to the third-party platform.
In this scheme, image information is acquired only for image data whose quality score meets the preset requirement, and the corresponding structured data is constructed from that information rather than for all image data, which saves the resources consumed in building structured data. Storing or distributing the structured data also facilitates subsequent retrieval.
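The disclosure does not define the fields of the structured data; the record below is only an example of what such a structure might look like, with every field name chosen for illustration.

```python
from datetime import datetime, timezone

def build_structured_record(device_id, quality_score, image_info):
    """Assemble a retrieval-friendly record from the extracted image information."""
    return {
        "device_id": device_id,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "quality_score": quality_score,
        "objects": image_info.get("objects", []),  # e.g. detected faces or vehicles
    }
```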
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of an electronic device according to the present application. The electronic device 50 comprises a memory 51 and a processor 52; the processor 52 is configured to execute program instructions stored in the memory 51 to implement the steps of the image data processing method embodiments described above. In a particular implementation scenario, the electronic device 50 may include, but is not limited to, a microcomputer or a server, and may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
Specifically, the processor 52 is configured to control itself and the memory 51 to implement the steps of the image data processing method embodiments described above. The processor 52 may also be referred to as a CPU (Central Processing Unit). The processor 52 may be an integrated circuit chip with signal processing capability. The processor 52 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or it may be any conventional processor. In addition, the processor 52 may be implemented jointly by integrated circuit chips.
In this scheme, the corresponding preset processing is executed according to the quality score of the image data, instead of applying the same processing to all image data, which improves the flexibility of image data processing.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a computer-readable storage medium according to the present application. The computer-readable storage medium 60 stores program instructions 601 which, when executed by a processor, implement the steps of the image data processing method embodiments described above.
In this scheme, the corresponding preset processing is executed according to the quality score of the image data, instead of applying the same processing to all image data, which improves the flexibility of image data processing.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part of it that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Claims (14)

1. An image data processing method characterized by comprising:
acquiring image data to be processed;
determining a quality score for the image data;
and determining whether to perform preset processing on the image data based on the quality score, wherein the preset processing comprises storing to a first image library and/or distributing to a third-party platform.
2. The method of claim 1, wherein after the acquiring of the image data to be processed, the method comprises:
acquiring a data processing mode corresponding to the shooting equipment of the image data;
and in response to the data processing mode corresponding to the shooting equipment being a data cleaning mode, executing the step of determining the quality score of the image data.
3. The method of claim 2, wherein the obtaining of the data processing mode corresponding to the capturing device of the image data is preceded by:
receiving a first instruction of setting a data processing mode as a data cleaning mode for the shooting equipment by a user;
and responding to the first instruction, and configuring a data processing mode of the shooting device to a data cleaning mode corresponding to the first instruction.
4. The method of claim 2, wherein the determining the quality score of the image data comprises:
determining at least one target score acquisition model corresponding to the photographing apparatus; the target score acquisition model is used for outputting the quality score of a target object in the image data;
determining a quality score of the image data using the at least one target score acquisition model.
5. The method of claim 4, wherein before the determining of the at least one target score acquisition model corresponding to the capture device, the method comprises:
receiving a second instruction of setting at least one target score acquisition model for the shooting equipment by a user;
and responding to the second instruction, and configuring a target score acquisition model corresponding to the second instruction for the shooting equipment.
6. The method according to claim 4, wherein the number of the target score obtaining models is at least two;
the determining a quality score of the image data using the at least one target score acquisition model comprises:
obtaining a candidate quality score of the image data using each of the target score acquisition models respectively;
and taking the candidate quality scores meeting the preset conditions as the final quality scores of the image data.
7. The method of claim 4, wherein determining the quality score of the image data using the at least one target score acquisition model comprises:
performing feature extraction on a target object in the image data by using the at least one target score acquisition model to obtain a plurality of feature points related to the target object, wherein the target objects corresponding to different target score acquisition models are the same or different;
determining a quality score of the image data based on the number of feature points, wherein the quality score is positively correlated with the number of feature points.
8. The method of claim 4, wherein determining at least one target score acquisition model corresponding to the capture device comprises:
acquiring a processing scene category corresponding to the image data as a target scene category;
and determining at least one target score acquisition model corresponding to the shooting equipment according to the target scene category.
9. The method of claim 8, wherein obtaining the processing scene category corresponding to the image data comprises:
acquiring preset configuration information, wherein the configuration information comprises identifiers of a plurality of shooting devices and processing scene categories corresponding to the shooting devices;
and inquiring the processing scene type corresponding to the image data to be processed from the configuration information by using the identification of the shooting equipment.
10. The method according to any one of claims 1-9, wherein the determining whether to perform the preset processing on the image data based on the quality score comprises:
responding to the quality score meeting a preset requirement, and storing the image data to a first image library; in response to the quality score not meeting a preset requirement, not storing the image data or storing the image data to a second image library;
and/or,
in response to the quality score meeting the preset requirement, distributing the image data to a third party platform; in response to the quality score not meeting a preset requirement, not distributing the image data to a third party platform.
11. The method of claim 10, wherein the determining whether to perform the preset processing on the image data based on the quality score further comprises:
responding to the quality score meeting the preset requirement, and acquiring image information of the image data;
constructing structured data using the image information;
storing the structured data to a first image repository; and/or, distributing the structured data to a third party platform.
12. An image data processing apparatus characterized by comprising:
the data acquisition module is used for acquiring image data to be processed;
a score determination module to determine a quality score of the image data;
and the processing module is used for determining whether to perform preset processing on the image data based on the quality score, wherein the preset processing comprises storage in a first image library and/or distribution to a third-party platform.
13. An electronic device comprising a memory and a processor for executing program instructions stored in the memory to implement the method of any of claims 1 to 11.
14. A computer readable storage medium having stored thereon program instructions, which when executed by a processor implement the method of any of claims 1 to 11.
CN202111666092.3A 2021-12-31 2021-12-31 Image data processing method, device, equipment and storage medium Withdrawn CN114359216A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111666092.3A CN114359216A (en) 2021-12-31 2021-12-31 Image data processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111666092.3A CN114359216A (en) 2021-12-31 2021-12-31 Image data processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114359216A true CN114359216A (en) 2022-04-15

Family

ID=81105730

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111666092.3A Withdrawn CN114359216A (en) 2021-12-31 2021-12-31 Image data processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114359216A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114937017A (en) * 2022-05-27 2022-08-23 玖壹叁陆零医学科技南京有限公司 Image processing method, image processing device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20210166040A1 (en) Method and system for detecting companions, electronic device and storage medium
US11445150B2 (en) Multi-camera collaboration-based image processing method and video surveillance system
US20230260313A1 (en) Method for identifying potential associates of at least one target person, and an identification device
WO2020111776A1 (en) Electronic device for focus tracking photographing and method thereof
CN112052251B (en) Target data updating method and related device, equipment and storage medium
US11948280B2 (en) System and method for multi-frame contextual attention for multi-frame image and video processing using deep neural networks
CN113378616A (en) Video analysis method, video analysis management method and related equipment
CN113283319B (en) Method, device, medium and electronic device for evaluating face blur
CN114359216A (en) Image data processing method, device, equipment and storage medium
CN110677580B (en) Shooting method, shooting device, storage medium and terminal
CN113868457A (en) Image processing method based on image gathering and related device
CN113128278B (en) Image recognition method and device
CN116168045B (en) Method and system for dividing sweeping lens, storage medium and electronic equipment
CN110933314A (en) Focus-following shooting method and related product
CN112487082A (en) Biological feature recognition method and related equipment
CN113011497B (en) Image comparison method and system
CN110458171B (en) License plate recognition method and related device
CN114143429A (en) Image shooting method, image shooting device, electronic equipment and computer readable storage medium
EP4116878A1 (en) Target recognition method and device
CN113920751B (en) High-definition digital photo frame dynamic tracking control system and method
CN117177004B (en) Content frame extraction method, device, equipment and storage medium
JP6443144B2 (en) Information output device, information output program, information output method, and information output system
CN112785487B (en) Image processing method and device, storage medium and electronic equipment
JP7207586B2 (en) Imaging control device and imaging system
CN114448952B (en) Streaming media data transmission method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220415