Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a diagram illustrating an application scenario of the data transmission method according to an embodiment. As shown in fig. 1, the mobile terminal 10 may establish a communication connection with the server 20 through a network, where the server 20 may be a single server, a server cluster composed of a plurality of servers, or a server in the server cluster. The mobile terminal 10 acquires images to be clustered, identifies them, and extracts their image features. The mobile terminal 10 uploads the image features of the images to be clustered to the server 20, and the server 20 clusters the images to be clustered according to the uploaded image features and returns a clustering result to the mobile terminal 10. The mobile terminal 10 receives the clustering result returned by the server 20, where the clustering result may include an identifier of the image to be clustered corresponding to each image feature and a label assigned after clustering the image feature. The mobile terminal 10 may assign the images to be clustered to the groups corresponding to the labels according to the identifiers of the images to be clustered and the assigned labels.
Fig. 2 is a timing diagram illustrating interaction between a mobile terminal and a server in one embodiment. As shown in fig. 2, the main interaction process of the mobile terminal and the server may include the following steps:
1. The mobile terminal 10 identifies the images to be clustered and extracts the image features of the images to be clustered.
The mobile terminal 10 may obtain one or more images to be clustered, identify each obtained image to be clustered, and extract image features of the images to be clustered. In one embodiment, the mobile terminal 10 may analyze the images to be clustered through a preset face recognition model and determine whether each image to be clustered contains a face. When an image to be clustered is detected to contain a face, face feature points of the image can be extracted, where the face feature points may be used to describe the shape of the face and the shapes and positions of the facial features.
2. The mobile terminal 10 uploads the image features to the server 20.
The mobile terminal 10 may upload the extracted image features of the images to be clustered to the server 20, where the image features may include corresponding image information, where the image information may include identifiers of the images to be clustered corresponding to the image features, and the identifiers of the images to be clustered may be names or numbers of the images to be clustered.
In one embodiment, the mobile terminal 10 may extract current image grouping information and image features of grouped images in various groups, wherein the image grouping information may include group information of each group, image information included under each group, and the like. The mobile terminal may pack the image grouping information, the image features of the grouped images in each group, and the image features of the images to be clustered into an uplink data packet, and send the uplink data packet to the server 20.
3. The server 20 clusters the images to be clustered and assigns corresponding labels.
The mobile terminal 10 may send a clustering request to the server 20, and the server 20 may cluster the image features of the images to be clustered according to the clustering request of the mobile terminal 10. In one embodiment, the clustering request may include information such as the identifier and account of the corresponding mobile terminal 10 and the sending time, and the server 20 may add the clustering request to a queue service after receiving it. When clustering requests are distributed, requests in the queue service that belong to the same mobile terminal but were sent at different times can be merged, and requests in the queue service that belong to the same account can also be merged.
In one embodiment, when the server 20 receives the upstream data packet and the clustering request, the upstream data packet may be parsed to obtain information such as image grouping information, image features of grouped images in each group, and image features of images to be clustered. The server 20 may calculate similarity between the image features of the images to be clustered and the image features of the grouped images in each group through a preset clustering model, determine the group to which the images to be clustered corresponding to the image features belong, and assign corresponding labels.
4. The server 20 returns the clustering result to the mobile terminal 10.
5. The mobile terminal 10 allocates the images to be clustered to the groups corresponding to the labels according to the identifiers of the images to be clustered corresponding to the image features and the allocated labels in the clustering result.
The server 20 may return the clustering result to the mobile terminal 10, where the clustering result may include the identifier of the image to be clustered corresponding to each image feature and the label assigned after clustering the image feature. The mobile terminal 10 may add each image to be clustered to the corresponding group according to the clustering result: it may obtain, from the clustering result, the identifier of the image to be clustered corresponding to each image feature and the assigned label, assign the image to be clustered to the group corresponding to the label, and assign the corresponding group identifier.
As shown in fig. 3, in one embodiment, a data transmission method is provided, which includes the following steps:
Step 310, acquiring an image to be clustered of the mobile terminal.
Specifically, the mobile terminal may obtain one or more images to be clustered from storage such as an internal memory, where the images to be clustered may be images shot by a user on the mobile terminal, or images obtained from other computer devices, for example, images sent by other mobile terminals, or images saved when the user browses a web page through the mobile terminal. In this embodiment, the images to be clustered may be photos, and the mobile terminal may cluster the images to be clustered through the server, so as to generate a corresponding album.
In one embodiment, the images to be clustered may be images stored on the mobile terminal without grouping, that is, images that have not been clustered, or images that already belong to a group but need to be clustered again, and the like. In this embodiment, the images to be clustered may be ungrouped images stored on the mobile terminal; after the images are clustered, corresponding group identifiers are assigned, and the mobile terminal may take the stored images without a corresponding group identifier as the images to be clustered.
In an embodiment, if there are a plurality of acquired images to be clustered, the mobile terminal may detect whether the acquired images to be clustered include repeated images, where repeated images refer to a plurality of images whose similarity is greater than a first threshold. If so, the mobile terminal may select the image with the highest quality from the repeated images for identification, and extract the image features of that image for uploading. The mobile terminal can determine the image quality according to the saturation, sharpness, brightness and the like of each repeated image, and select the highest-quality image among them for identification.
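As an illustration of the duplicate check described above, the following Python sketch approximates image similarity with a tiny average hash and image quality with a weighted combination of sharpness (Laplacian variance), saturation, and brightness. The embodiment does not specify these measures or the OpenCV library; the threshold, weights, and helper names (average_hash, quality_score, deduplicate) are assumptions for illustration only.

```python
import cv2
import numpy as np

FIRST_THRESHOLD = 0.9  # assumed duplicate-similarity threshold

def average_hash(path, size=8):
    """Tiny perceptual hash: resize to 8x8 grayscale and threshold at the mean."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (size, size))
    return (small > small.mean()).flatten()

def similarity(path_a, path_b):
    """Fraction of matching hash bits; values near 1.0 indicate near-duplicates."""
    return float(np.mean(average_hash(path_a) == average_hash(path_b)))

def quality_score(path):
    """Approximate image quality from sharpness, saturation and brightness."""
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    saturation = hsv[:, :, 1].mean()
    brightness = hsv[:, :, 2].mean()
    return sharpness + 0.5 * saturation + 0.25 * brightness  # assumed weights

def deduplicate(paths):
    """Keep only the highest-quality image of each group of near-duplicates."""
    kept = []
    for path in sorted(paths, key=quality_score, reverse=True):
        if all(similarity(path, k) <= FIRST_THRESHOLD for k in kept):
            kept.append(path)
    return kept
```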
Step 320, identifying the image to be clustered, and extracting the image features of the image to be clustered.
Specifically, the mobile terminal can identify each acquired image to be clustered and extract the image features of the images to be clustered. In one embodiment, the server may cluster the images according to faces. The mobile terminal can first perform face recognition on each image to be clustered and divide the images to be clustered into images without faces and face images. Further, the mobile terminal can analyze the images to be clustered through a preset face recognition model and determine whether each image to be clustered contains a face. In one embodiment, the face recognition model may be a decision model constructed in advance through machine learning. When the face recognition model is constructed, a large number of sample images may be obtained, including both face images and images without faces; the sample images may be labeled according to whether each sample image contains a face, the labeled sample images are used as the input of the face recognition model, and the face recognition model is obtained through machine learning training.
After the mobile terminal divides the images to be clustered into images without faces and face images, the images without faces can be assigned to a corresponding no-face group and given a corresponding group identifier. In one embodiment, the mobile terminal may extract only the image features of the face images among the images to be clustered, and perform clustering according to the image features of the face images. The mobile terminal may extract the image features of each face image according to a preset feature model, where the image features may include shape features, spatial features, edge features, and the like. The shape features refer to local shapes in the images to be clustered, the spatial features refer to the mutual spatial positions or relative directional relationships between a plurality of regions segmented from the images to be clustered, and the edge features refer to the boundary pixels forming two regions in the images to be clustered; the image features are not limited thereto and may also include color features, texture features, and the like. Further, the mobile terminal performs face recognition on the images to be clustered, and when an image to be clustered is detected to contain a face, the face feature points of the image can be extracted, where the face feature points may be used to describe the shape of the face and the shapes and positions of the facial features.
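The embodiment does not name a particular face recognition model or feature model; purely as a hedged sketch, the following Python snippet uses the open-source face_recognition package as a stand-in to split images into no-face images and face images, and to extract landmark points (describing the face and facial-feature shapes and positions) together with 128-dimensional encodings that could serve as the uploaded image features.

```python
import face_recognition

def extract_face_features(path):
    """Return (has_face, landmarks, encodings) for one image to be clustered.

    Images with no detected face would go to the no-face group; for face
    images, the landmark points (eyes, nose, mouth, chin contour) describe the
    face and facial-feature shapes and positions, and the 128-d encodings are
    the image features that would be uploaded for clustering.
    """
    image = face_recognition.load_image_file(path)
    locations = face_recognition.face_locations(image)
    if not locations:
        return False, [], []   # no-face image
    landmarks = face_recognition.face_landmarks(image, face_locations=locations)
    encodings = face_recognition.face_encodings(image, known_face_locations=locations)
    return True, landmarks, encodings
```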
Step 330, uploading the image features to a server.
Specifically, the mobile terminal may upload the extracted image features of the images to be clustered to the server, where the image features may include corresponding image information, and the image information may include identifiers of the images to be clustered corresponding to the image features; the identifiers of the images to be clustered may be names or numbers of the images to be clustered. The server can cluster the images to be clustered according to the image features, classifying images to be clustered with similar image features into one category. In one embodiment, the server can cluster the images to be clustered according to faces: according to the feature points uploaded by the mobile terminal that describe the face shape and the shapes and positions of the facial features in the images to be clustered, the server can classify the images to be clustered with similar face feature points into one category. The server can add a corresponding label to the image features of each image to be clustered according to the clustering result, and the label can be used to represent the group to which the image to be clustered corresponding to the image features belongs.
Step 340, receiving a clustering result returned by the server, where the clustering result includes an identifier of the image to be clustered corresponding to the image features and a label assigned after the image features are clustered.
Step 350, assigning the images to be clustered to the groups corresponding to the labels according to the corresponding identifiers of the images to be clustered and the assigned labels.
Specifically, the server may return the clustering result to the mobile terminal, where the clustering result may include the identifier of the image to be clustered corresponding to each image feature and the label assigned after clustering the image feature. The mobile terminal can add each image to be clustered to the corresponding group according to the clustering result: it can obtain, from the clustering result, the identifier of the image to be clustered corresponding to each image feature and the assigned label, assign the image to be clustered to the group corresponding to the label, and assign the corresponding group identifier. In one embodiment, the mobile terminal may establish one or more albums, each group may correspond to one album, and images belonging to the same group may be displayed in the same album.
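A minimal sketch of how the terminal might apply the clustering result, assuming the result is a list of {image identifier, label} entries; the exact field names and format are not specified by the embodiment.

```python
from collections import defaultdict

def apply_clustering_result(clustering_result, albums=None):
    """Assign each image to the album (group) named by its label.

    `clustering_result` is assumed to be a list of entries like
    {"image_id": "IMG_0012.jpg", "label": "group_3"} returned by the server.
    """
    albums = albums if albums is not None else defaultdict(list)
    for entry in clustering_result:
        albums[entry["label"]].append(entry["image_id"])  # one album per label
    return albums

# Example: two photos clustered into the same group end up in the same album.
result = [{"image_id": "IMG_0012.jpg", "label": "group_3"},
          {"image_id": "IMG_0047.jpg", "label": "group_3"}]
print(dict(apply_clustering_result(result)))  # {'group_3': ['IMG_0012.jpg', 'IMG_0047.jpg']}
```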
According to the data transmission method, the images to be clustered of the mobile terminal are obtained, the images to be clustered are identified, the image features of the images to be clustered are extracted, and the image features are uploaded to the server for clustering. Since image clustering can be completed by uploading only the image features, without uploading the whole images, the amount of transmitted data can be reduced and the transmission speed improved. In addition, the images to be clustered can be assigned to corresponding categories according to the clustering result returned by the server and displayed by category, so that the images are easier to find, which is convenient and fast.
As shown in fig. 4, in an embodiment, the step 310 of acquiring the images to be clustered of the mobile terminal includes the following steps:
Step 402, comparing the image information stored in the first database and the second database, and generating a new image list and/or an updated image list according to the comparison result.
Specifically, in this embodiment, the first database refers to a media database, which may be used to store information of multimedia files such as images, videos, and audio, and may be used by a video player, an audio player, and an album gallery. The first database may contain fields such as the storage path, message digest, multimedia number, and modification time of an image, for storing information of the image. In one embodiment, the first database may include an SD card (Secure Digital Memory Card) media database and an internal-memory media database, where the SD card media database may be used to store the multimedia information on the SD card, and the internal-memory media database may be used to store the multimedia information in the internal memory. The second database refers to a face database, in which the face recognition scanning results, image features, group information, and the like of all images can be stored. The face database may include several types of fields such as picture attributes, face attributes, and group attributes, where the picture attributes may include fields such as the storage path, message digest, multimedia number, and modification time of an image, the face attributes may include fields such as face state, face size, and face features, and the group attributes may include fields such as group identifier, group name, and creation time, but are not limited thereto. When the mobile terminal collects a new image, for example through a camera or by receiving it from another computer device, the new image needs to be stored in the first database. After the image is subjected to face recognition scanning, image features are extracted and clustering is performed according to the image features, and then the information of the image, the corresponding image features, the group information, and the like can be stored in the face database.
In other embodiments, besides clustering images according to faces, clustering may also be performed according to other features, such as scene, place, time, etc., and the second database may be a database storing feature information for clustering, clustering results, etc., and is not limited to the face database.
The mobile terminal can compare the image information stored in the first database with the image information stored in the second database according to fields such as the storage path, multimedia number, modification time, or message digest, and generate a new image list and/or an updated image list. In one embodiment, the new image list may record the images in the mobile terminal that have not undergone face recognition, and the mobile terminal may add images that exist in the first database but not in the second database to the new image list. The updated image list may record images whose contents have changed after face recognition, and images that exist in both the first database and the second database but have changed may be added to the updated image list. The mobile terminal may generate only one of the new image list and the updated image list, or may generate both.
Step 404, determining the images to be clustered according to the new image list and/or the updated image list.
Specifically, the mobile terminal can directly take the images contained in the new image list and/or the updated image list as the images to be clustered, extract the image features of the images to be clustered, and upload the image features to the server for clustering. In one embodiment, when the mobile terminal generates the updated image list, it may determine whether the updated image list contains images that already belong to a group but need to be clustered again: it may extract the image features of each image in the updated image list and compare the extracted image features with the corresponding image features stored in the second database. If the similarity between the extracted image features and the corresponding image features stored in the second database is greater than or equal to a preset value, it can be determined that the images with similarity not less than the preset value do not need to be clustered again; if the similarity is smaller than the preset value, it can be determined that the images with similarity smaller than the preset value need to be clustered again. The mobile terminal can take the images in the new image list and the images in the updated image list that need to be clustered again as the images to be clustered.
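The re-clustering decision above can be sketched as follows, assuming image features are numeric vectors compared with cosine similarity and that the preset value is 0.95; both are illustrative assumptions, as is the injected extract callback.

```python
import numpy as np

PRESET_SIMILARITY = 0.95  # assumed preset value from the embodiment above

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_images_to_cluster(new_list, updated_list, stored_features, extract):
    """Images in the new list are always clustered; updated images only when
    their freshly extracted features drifted away from the stored ones."""
    to_cluster = list(new_list)
    for image_id in updated_list:
        if cosine_similarity(extract(image_id), stored_features[image_id]) < PRESET_SIMILARITY:
            to_cluster.append(image_id)  # content changed enough to re-cluster
    return to_cluster
```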
In this embodiment, the images to be clustered can be acquired, and only the images to be clustered can be clustered, so that the pressure of the server can be reduced, and the image clustering efficiency can be improved.
As shown in fig. 5, in an embodiment, the step 402 of comparing the image information stored in the first database and the second database and generating a new image list and/or an updated image list according to the comparison result includes the following steps:
Step 502, determining whether the corresponding image is found in the second database according to the path of the image in the first database; if so, executing step 506, and if not, executing step 504.
Specifically, the mobile terminal may search in the second database according to the path of the image in the first database, and determine whether the second database stores the face recognition result corresponding to the image. The mobile terminal can read the value of each image stored in the first database in the storage path field one by one, and search whether the second database has an image with the storage path field value consistent with the read value, if so, the image with the storage path field value consistent with the read value in the second database is the corresponding image in the second database. In an embodiment, the mobile terminal may also search for a corresponding image in the second database according to the multimedia number of each image in the first database, and if an image with a multimedia number that is consistent with that in the first database can be found in the second database, the image with the multimedia number that is consistent with that is the corresponding image in the second database.
Step 504, adding the images that are not found to the new image list.
Specifically, if the mobile terminal does not find the corresponding image in the second database according to the path of the image in the first database, the image exists only in the first database and not in the second database, which indicates that the image has not undergone face recognition scanning, and the image in the first database for which no corresponding image is found in the second database may be added to the new image list. Further, the new image list may record identification information of images that exist only in the first database and not in the second database, where the identification information may be a multimedia number, a storage path, and the like.
Step 506, determining whether the modification time of the image in the first database is consistent with that of the corresponding image in the second database, if yes, executing step 512, and if not, executing step 508.
Specifically, if the corresponding image can be found in the second database, the value of the modification time field of the image in the first database and the value of the modification time field of the corresponding image in the second database are extracted, and it is determined whether the two values are consistent. If the modification times are consistent, the image has not been modified since face recognition was performed and the result was stored in the second database, and no processing is performed.
Step 508, determining whether the message digests of the image in the first database and the corresponding image in the second database are consistent, if yes, executing step 512, otherwise executing step 510.
Specifically, if the modification time of the image in the first database is not consistent with that of the corresponding image in the second database, which indicates that the image has been modified since face recognition was performed and the result was stored in the second database, the value of the message digest field of the image stored in the first database and the value of the message digest field of the corresponding image in the second database may be further extracted and compared to determine whether the two values are consistent. A message digest, also called a digital digest, is a fixed-length value that uniquely corresponds to a message or a text. Whether the content of the image has changed can be determined by checking whether the message digest of the image in the first database is consistent with that of the corresponding image in the second database. If the message digests are not consistent, the content of the image has changed after the image was subjected to face recognition scanning and stored in the second database, and the image stored in the first database and the corresponding image in the second database no longer have the same content.
In one embodiment, the message digest of the image may be the MD5 (Message-Digest Algorithm 5) value of the image, or may be computed with other hash algorithms, without being limited thereto. When the mobile terminal stores a new image or modifies an image, the message digest of the image can be calculated according to an algorithm such as MD5, and the message digest can be stored in the first database in association with information such as the multimedia number and storage path of the image.
Step 510, adding the images whose message digests are inconsistent to the updated image list.
Specifically, the mobile terminal may add an image whose message digest is different from that of the corresponding image in the second database to the updated image list. The updated image list may record the images whose content has changed after face recognition; further, it may record the identification information of the images whose content has changed after face recognition scanning.
Step 512, no processing is performed.
Specifically, if the modification time of the image in the first database is different from that in the second database but the message digests are the same, the image has been modified after face recognition but its content has not changed, so no processing needs to be performed.
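A minimal sketch of steps 502 to 512, assuming the relevant records of both databases are available as dictionaries keyed by storage path with mtime and md5 fields; in practice the actual databases would be queried through the mobile platform's storage APIs.

```python
import hashlib

def file_md5(path):
    """Message digest stored alongside the image in the media (first) database."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def compare_databases(media_db, face_db):
    """media_db / face_db: {storage_path: {"mtime": ..., "md5": ...}} records.

    Returns (new_image_list, updated_image_list) following steps 502-512:
    missing from the face database -> new; same path but different mtime AND
    different digest -> updated; otherwise no processing.
    """
    new_images, updated_images = [], []
    for path, record in media_db.items():
        face_record = face_db.get(path)                  # step 502: look up by path
        if face_record is None:
            new_images.append(path)                      # step 504: never face-scanned
        elif record["mtime"] != face_record["mtime"]:    # step 506
            if record["md5"] != face_record["md5"]:      # step 508
                updated_images.append(path)              # step 510: content changed
            # step 512: modified but content unchanged -> no processing
        # step 512: mtime unchanged -> no processing
    return new_images, updated_images
```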
In this embodiment, the first database and the second database may be compared to generate a new image list and/or an updated image list, which facilitates determining images to be clustered, so that only the images to be clustered are clustered, thereby reducing the pressure of the server and improving the efficiency of image clustering.
As shown in fig. 6, in one embodiment, the step 330 of uploading the image features to the server includes the following steps:
Step 602, extracting the current image grouping information and the image features of the grouped images in each group.
Specifically, the mobile terminal may extract the current image grouping information, where the image grouping information may include group information of each group, such as group identifier, group name, and creation time, and may further include the image information contained in each group, such as identification information and storage paths of the contained images. In one embodiment, the image grouping information may be represented in the form group_id: pic_id, where group_id represents a group identifier and pic_id represents the multimedia number of a picture. Further, the mobile terminal may extract the current image grouping information from the second database and cache it in a third database. The third database refers to a backup database and can be used to store information exchanged with the server, such as information sent to the server and information sent by the server. The third database may also include several types of fields such as picture attributes, face attributes, and group attributes, where the number of fields under each attribute may be smaller than in the second database, keeping only the fields related to server interaction; for example, the picture attributes may include only fields such as storage path and multimedia number, the face attributes may include only fields such as face features, and the group attributes may include only fields such as group identifier and creation time, but are not limited thereto.
In one embodiment, the current image grouping information may include manual grouping information and automatic clustering information, where the manual grouping information refers to grouping information resulting from manual operations by the user, including groups created by the user, groups merged by the user, the groups to which manually adjusted photos belong, and the like, and the automatic clustering information refers to groups generated by the server or the mobile terminal through clustering according to the image features of each image.
In one embodiment, after the mobile terminal extracts the current image grouping information from the second database and caches it in the third database, the image features of the grouped images in each group can be extracted from the second database according to the image grouping information, and the image features of the images contained in each group can be stored correspondingly in the third database. By extracting the image features of the grouped images in each group, the image features corresponding to each group can be determined, for example the face features corresponding to each group, which helps the server cluster the image features of the images to be clustered.
Step 604, packing the image grouping information, the image features of the grouped images in each group, and the image features of the images to be clustered into an uplink data packet.
Specifically, the mobile terminal can package the image grouping information, the image characteristics of the grouped images in each group and the image characteristics of the images to be clustered into an uplink data packet according to a preset format, and upload the uplink data packet to the server for image clustering. In one embodiment, the mobile terminal may package image features of images belonging to the same group into the same uplink data packet according to the group, and carry group information such as a group identifier, a group name, and the like of the corresponding group.
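The preset packet format is not specified by the embodiment; as one possible sketch, the three pieces of data could be packed into a single JSON payload along the following lines (the field names are assumptions).

```python
import json

def build_uplink_packet(grouping_info, grouped_features, pending_features):
    """Pack grouping info plus features into one uplink payload.

    grouping_info:    {"group_3": ["pic_12", "pic_47"], ...}  (group_id -> pic_ids)
    grouped_features: {"pic_12": [0.13, ...], ...}            (already-grouped images)
    pending_features: {"pic_99": [0.05, ...], ...}            (images to be clustered)
    The JSON encoding is an assumption; the embodiment only requires that all
    three pieces travel in a single uplink data packet.
    """
    packet = {
        "grouping_info": grouping_info,
        "grouped_features": grouped_features,
        "features_to_cluster": pending_features,
    }
    return json.dumps(packet).encode("utf-8")  # bytes ready for upload
```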
Step 606, uploading the uplink data packet to the server.
Step 608, sending a clustering request to the server, where the clustering request instructs the server to calculate similarity between image features of the images to be clustered and image features of grouped images in each group, determine the groups of the image features, and allocate corresponding labels.
Specifically, the mobile terminal may upload the uplink data packet to the server and send a clustering request to the server. In one embodiment, the server may be a single server or a distributed server cluster composed of a plurality of hosts, where the server cluster may include a plurality of servers, each of which may provide the image clustering service to mobile terminals. After the mobile terminal sends the clustering request to the server cluster, the server cluster can add the clustering request to a queue service, distribute the clustering requests to the servers in the server cluster according to the queue service, and have the server to which a clustering request is distributed perform the image clustering. Each clustering request in the queue service may carry information such as the identifier and account of the corresponding mobile terminal and the sending time, where the identifier of the mobile terminal may be the Media Access Control (MAC) address of the mobile terminal, or the International Mobile Subscriber Identity (IMSI) of the mobile terminal.
In one embodiment, the server cluster may distribute the clustering requests to the servers of the server cluster in the order of the sending times of the clustering requests in the queue service. When a clustering request is distributed, it can be detected whether the queue service contains other clustering requests that belong to the same mobile terminal as the distributed request but were sent at different times; if so, the clustering requests belonging to the same mobile terminal but sent at different times can be merged, and the merged clustering request can be distributed to a server. For example, if the clustering request currently being distributed was sent by mobile terminal A at 6:00 on August 2, 2017, and it is detected that the queue service also contains a clustering request sent by mobile terminal A at 7:00 on August 2, 2017 and a clustering request sent by mobile terminal A at 8:00 on August 2, 2017, the three clustering requests belonging to mobile terminal A can be merged, and the merged clustering request can be distributed to a server and processed by that server as a whole. When a clustering request is distributed, it can also be detected whether the queue service contains clustering requests sent by the same account as the distributed request; if so, the clustering requests sent by the same account can be merged, and the merged clustering request can be distributed to a server. For example, if the clustering request currently being distributed was sent by account X through mobile terminal A at 6:00 on August 2, 2017, and it is detected that the queue service also contains a clustering request sent by account X through mobile terminal B at 7:00 on August 2, 2017, the two clustering requests belonging to account X can be merged, and the merged clustering request can be distributed to a server and processed by that server as a whole.
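A simplified sketch of the merging logic, assuming each queued request is a dictionary carrying the terminal identifier, account, sending time, and image features; merging on the account when it is known, and otherwise on the terminal identifier, is a simplification of the two merge rules described above.

```python
def merge_requests(queue):
    """Merge queued clustering requests that share an account or terminal id.

    Each request is assumed to look like
    {"terminal_id": "AA:BB:...", "account": "X",
     "sent_at": "2017-08-02T06:00", "image_features": {...}}.
    Requests are taken in sending order; later requests with the same key are
    folded into the first one so a single server handles them together.
    """
    merged, order = {}, []
    for req in sorted(queue, key=lambda r: r["sent_at"]):
        key = req.get("account") or req["terminal_id"]
        if key in merged:
            merged[key]["image_features"].update(req["image_features"])
        else:
            merged[key] = {**req, "image_features": dict(req["image_features"])}
            order.append(key)
    return [merged[k] for k in order]
```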
When the server receives the uplink data packet and the clustering request, the uplink data packet can be parsed to obtain information such as the image grouping information, the image features of the grouped images in each group, and the image features of the images to be clustered. The server clusters each image to be clustered through a preset clustering model. Further, through the clustering model, the server can calculate the similarity between the image features of each image to be clustered and the image features of the grouped images in each group. When the similarity between the image features of an image to be clustered and the image features of the images in a group is greater than a second threshold, they can be considered to belong to the same class; the server can assign the image features to the group whose similarity is greater than the second threshold, assign the label matching that group, and establish the correspondence between the label and the identifier of the corresponding image to be clustered. If there is no group whose image features have a similarity greater than the second threshold with the image features of an image to be clustered, that image does not belong to any existing group; the image features of the images to be clustered that do not belong to existing groups can be clustered again through the preset clustering model, the images to be clustered with similar image features can be divided into new groups, and corresponding labels can be assigned to the images to be clustered whose image features belong to the same new group. In one embodiment, if the server receives a merged clustering request, it directly obtains and clusters the image features of all the images to be clustered corresponding to the merged clustering request, which can improve the efficiency of image clustering.
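The preset clustering model is not specified by the embodiment; as a hedged illustration, the sketch below assigns each pending image feature to the most similar existing group when the cosine similarity exceeds an assumed second threshold, and greedily clusters the remainder into new groups with new labels.

```python
import numpy as np

SECOND_THRESHOLD = 0.8  # assumed similarity threshold for "same group"

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_on_server(pending, group_features):
    """pending: {image_id: feature}; group_features: {group_id: [features...]}.

    Returns {image_id: label}. Each image joins the existing group it is most
    similar to (if above the threshold); the rest are greedily clustered into
    new groups that receive new labels.
    """
    labels, leftovers = {}, {}
    for image_id, feat in pending.items():
        best_group, best_sim = None, SECOND_THRESHOLD
        for group_id, feats in group_features.items():
            sim = max(cosine(feat, f) for f in feats)
            if sim > best_sim:
                best_group, best_sim = group_id, sim
        if best_group is not None:
            labels[image_id] = best_group
        else:
            leftovers[image_id] = feat
    # Greedy clustering of images that match no existing group -> new labels.
    new_groups = []  # list of (label, representative feature)
    for image_id, feat in leftovers.items():
        for label, rep in new_groups:
            if cosine(feat, rep) > SECOND_THRESHOLD:
                labels[image_id] = label
                break
        else:
            label = f"new_group_{len(new_groups)}"
            new_groups.append((label, feat))
            labels[image_id] = label
    return labels
```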
In one embodiment, the server may formulate a clustering strategy according to actual requirements to determine whether to cluster only images to be clustered or cluster all images with image characteristics uploaded historically by the mobile terminal. For example, after the server updates the clustering model, a clustering strategy for clustering all images with image features uploaded historically by the mobile terminal can be formulated, wherein when all images with image features uploaded historically are clustered, a group with a manual grouping attribute can be reserved, and clustering is performed again for a group and an image which do not relate to manual operation of a user.
After the mobile terminal receives the clustering result returned by the server, it adds each image to be clustered to the group matching its label according to the identifier of the image to be clustered corresponding to the image features and the assigned label, and caches the clustering result in the third database. The mobile terminal can update the second database according to the clustering result returned by the server and cached in the third database.
In this embodiment, the images to be clustered can be grouped according to the existing grouping information and the image features of the grouped images in each group, so that the clustering result is more accurate and better matches the actual needs of the user, which improves the efficiency of image clustering and can improve user stickiness.
In one embodiment, before acquiring the image to be clustered of the mobile terminal in step 310, the method may include: acquiring a current power state, and if the power state meets a preset state, executing step 310 to acquire an image to be clustered of the mobile terminal.
Specifically, the mobile terminal may obtain the current power state before uploading the image features of the images to be clustered to the server, where the power state may include the available remaining power, whether the mobile terminal is in a charging state, the power consumption speed, and the like. When the power state meets a preset state, the images to be clustered are acquired and identified, the image features of the images to be clustered are extracted, and the image features are uploaded to the server. The preset state may be that the available remaining power is greater than a preset percentage, or that the terminal is in a charging state, or that the available remaining power is greater than a preset percentage and the power consumption speed is less than a set value, and the like; the preset state is not limited thereto and may be set according to actual requirements.
In one embodiment, the mobile terminal may preset an upload time period for uploading the image features of the images to be clustered, and may upload the image features of the images to be clustered to the server if the current time is within the preset upload time period, where the upload time period may be set to a period in which the mobile terminal is used less, for example, 2:00 a.m. to 4:00 a.m.
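Combining the power-state check and the upload time period, the upload decision might look like the following sketch, where the 30% remaining-power floor and the 2:00-4:00 a.m. window are illustrative values only.

```python
from datetime import datetime, time

def may_upload(battery_percent, is_charging, now=None,
               min_percent=30, window=(time(2, 0), time(4, 0))):
    """Decide whether to start feature extraction and upload.

    The 30% floor and the 2:00-4:00 a.m. window are assumed values; the
    embodiment only requires that the power state meet a preset state and,
    optionally, that the current time fall within a preset upload period.
    """
    now = (now or datetime.now()).time()
    power_ok = is_charging or battery_percent > min_percent
    in_window = window[0] <= now <= window[1]
    return power_ok and in_window
```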
In this embodiment, the image features of the images to be clustered are uploaded to the server only when the power state meets the preset state, which ensures that the mobile terminal has sufficient power while the image features are being uploaded and reduces the impact of the upload on the normal use of the mobile terminal.
In one embodiment, a data transmission method is provided, comprising the steps of:
and (1) acquiring the current power state.
And (2) when the power state meets a preset state, acquiring the image to be clustered of the mobile terminal.
And (3) identifying the image to be clustered, and extracting the image characteristics of the image to be clustered.
And (4) extracting the current image grouping information and the image characteristics of the grouped images in each group, and packaging the image grouping information, the image characteristics of the grouped images in each group and the image characteristics of the images to be clustered into an uplink data packet.
And (5) uploading the uplink data packet to a server.
And (6) sending a clustering request to a server, wherein the clustering request instructs the server to calculate the similarity between the image features of the images to be clustered and the image features of the grouped images in each group, determines each group of the images to be clustered and allocates a corresponding label, wherein the clustering request comprises a mobile terminal identifier and a sending moment, when the clustering request is allocated to the server, the server is instructed to obtain clustering requests sent by the same mobile terminal at different sending moments according to the mobile terminal identifier and merge the obtained clustering requests, the clustering requests comprise account information, and when the clustering request is allocated to the server, the server is instructed to obtain the clustering requests sent by the same account according to the account information and merge the obtained clustering requests.
And (7) receiving a clustering result returned by the server, wherein the clustering result comprises an image identifier to be clustered corresponding to the image characteristics and a label distributed after the image characteristics are clustered.
And (8) distributing the images to be clustered to the groups corresponding to the labels according to the corresponding identifiers of the images to be clustered and the distributed labels.
In the embodiment, the images to be clustered are obtained, the images to be clustered are identified, the image features of the images to be clustered are extracted, the image features are uploaded to the server for clustering, the image clustering can be completed only by uploading the image features, the whole image does not need to be uploaded, the data volume of transmission can be reduced, and the transmission speed is improved.
As shown in fig. 7, in one embodiment, a data transmission apparatus 700 is provided and includes an obtaining module 710, an extracting module 720, an uploading module 730, a receiving module 740, and an allocating module 750.
The obtaining module 710 is configured to obtain an image to be clustered of the mobile terminal.
And the extracting module 720 is configured to identify the image to be clustered, and extract the image features of the image to be clustered.
In one embodiment, the extracting module 720 is further configured to perform face recognition on the images to be clustered, and when an image to be clustered is detected to contain a face, extract the face feature points of the image, where the face feature points are used to describe the shape of the face and the shapes and positions of the facial features.
And an uploading module 730, configured to upload the image features to a server.
The receiving module 740 is configured to receive a clustering result returned by the server, where the clustering result includes an identifier of an image to be clustered corresponding to the image feature, and a label assigned after the image feature is clustered.
And the allocating module 750 is configured to allocate the images to be clustered to the groups corresponding to the labels according to the corresponding identifiers of the images to be clustered and the allocated labels.
The data transmission device acquires the images to be clustered of the mobile terminal, identifies the images to be clustered, extracts the image features of the images to be clustered, and uploads the image features to the server for clustering. Since image clustering can be completed by uploading only the image features, without uploading the whole images, the amount of transmitted data can be reduced and the transmission speed improved. In addition, the images to be clustered can be assigned to corresponding categories according to the clustering result returned by the server and displayed by category, so that the images are easier to find, which is convenient and fast.
As shown in fig. 8, in one embodiment, the obtaining module 710 includes a comparing unit 712 and a determining unit 714.
The comparing unit 712 is configured to compare the image information stored in the first database and the second database, and generate a new image list and/or an updated image list according to the comparison result.
The determining unit 714 is configured to determine an image to be clustered according to the new image list and/or the updated image list.
In this embodiment, the images to be clustered can be acquired, and only the images to be clustered can be clustered, so that the pressure of the server can be reduced, and the image clustering efficiency can be improved.
In one embodiment, the comparing unit 712 includes a searching subunit, an adding subunit and a determining subunit.
And the searching subunit is used for searching in the second database according to the path of the image in the first database.
And the adding subunit is used for adding the image which is not found into the new image list if the corresponding image is not found in the second database.
And the judging subunit is used for judging whether the modification time of the image in the first database is consistent with that of the corresponding image in the second database or not if the corresponding image is found in the second database.
And the judging subunit is further configured to judge whether the message digests of the image in the first database and the corresponding image in the second database are consistent or not if the modification times are inconsistent.
And the adding subunit is further used for adding the images with inconsistent message digests to the updated image list if the message digests are inconsistent.
In this embodiment, the first database and the second database may be compared to generate a new image list and an updated image list, which facilitates determining images to be clustered, so that only the images to be clustered are clustered, thereby reducing the pressure on the server and improving the efficiency of image clustering.
As shown in fig. 9, in one embodiment, the upload module 730 includes a grouping information extraction unit 732, a packing unit 734, and an uploading unit 736.
The grouping information extraction unit 732 is configured to extract the current image grouping information and the image features of the grouped images in each group.
The packing unit 734 is configured to pack the image grouping information, the image features of the grouped images in each group, and the image features of the images to be clustered into an uplink data packet.
An uploading unit 736 configured to upload the uplink data packet to the server.
In an embodiment, the uploading unit 736 is further configured to send a clustering request to the server, where the clustering request instructs the server to calculate similarity between image features of the images to be clustered and image features of grouped images in each group, determine the group of the image features, and assign corresponding tags.
In one embodiment, the clustering request comprises a mobile terminal identifier and a sending time; and when the clustering requests are distributed to the server, indicating the server to obtain the clustering requests which belong to the same mobile terminal and are sent at different sending moments according to the mobile terminal identification, and merging the obtained clustering requests.
In one embodiment, the clustering request includes account information; and when the clustering requests are distributed to the servers, indicating the servers to obtain the clustering requests sent by the same account according to the account information, and merging the obtained clustering requests.
In this embodiment, the images to be clustered can be grouped according to the existing grouping information and the image features of the grouped images in each group, so that the clustering result is more accurate and better matches the actual needs of the user, which improves the efficiency of image clustering and can improve user stickiness.
In one embodiment, the data transmission apparatus 700 further includes a power status acquiring module in addition to the acquiring module 710, the extracting module 720, the uploading module 730, the receiving module 740, and the allocating module 750.
And a power state obtaining module, configured to obtain a current power state, and if the power state meets a preset state, obtain the image to be clustered through the obtaining module 710.
In an embodiment, the obtaining module 710 is further configured to obtain an image to be clustered of the mobile terminal if the current time is within a preset uploading time period.
In this embodiment, the image features of the images to be clustered are uploaded to the server only when the power state meets the preset state, which ensures that the mobile terminal has sufficient power while the image features are being uploaded and reduces the impact of the upload on the normal use of the mobile terminal.
The embodiment of the application also provides a mobile terminal. As shown in fig. 10, for convenience of explanation, only the parts related to the embodiments of the present application are shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present application. The mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like. The following description takes a mobile phone as an example of the mobile terminal:
fig. 10 is a block diagram of a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present application. Referring to fig. 10, the cellular phone includes: radio Frequency (RF) circuitry 1010, memory 1020, input unit 1030, display unit 1040, sensor 1050, audio circuitry 1060, WiFi module 1070, processor 1080, and power supply 1090. Those skilled in the art will appreciate that the handset configuration shown in fig. 10 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The RF circuit 1010 may be configured to receive and transmit signals during information transmission and reception or during a call; it may receive downlink information from a base station and forward it to the processor 1080 for processing, and may also transmit uplink data to the base station. Typically, the RF circuitry includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 1010 may also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM, General Packet Radio Service (GPRS), CDMA, Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, Short Messaging Service (SMS), etc.
The memory 1020 can be used to store software programs and modules, and the processor 1080 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as an application program for a sound playing function or an application program for an image playing function), and the like, and the data storage area may store data (such as audio data or an address book) created according to the use of the mobile phone, and the like. Further, the memory 1020 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
The input unit 1030 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone 1000. Specifically, the input unit 1030 may include a touch panel 1032 and other input devices 1034. Touch panel 1032, which may also be referred to as a touch screen, may collect touch operations by a user (e.g., operations by a user on or near touch panel 1032 using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a predetermined program. In one embodiment, touch panel 1032 can include two portions, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 1080, and can receive and execute commands sent by the processor 1080. In addition, the touch panel 1032 may be implemented using various types such as resistive, capacitive, infrared, and surface acoustic wave. The input unit 1030 may include other input devices 1034 in addition to the touch panel 1032. In particular, other input devices 1034 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), and the like.
The display unit 1040 may be used to display information input by a user or information provided to the user and various menus of the cellular phone. The display unit 1040 may include a display panel 1042. In one embodiment, the Display panel 1042 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. In one embodiment, the touch panel 1032 can overlay the display panel 1042, and when the touch panel 1032 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 1080 to determine the type of the touch event, and then the processor 1080 provides a corresponding visual output on the display panel 1042 according to the type of the touch event. Although in fig. 10, the touch panel 1032 and the display panel 1042 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1032 and the display panel 1042 may be integrated to implement the input and output functions of the mobile phone.
The cell phone 1000 may also include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor may adjust the brightness of the display panel 1042 according to the brightness of ambient light, and the proximity sensor may turn off the display panel 1042 and/or the backlight when the mobile phone is moved to the ear. The motion sensor may include an acceleration sensor, which can detect the magnitude of acceleration in each direction and can detect the magnitude and direction of gravity when the mobile phone is stationary; it can be used for applications that recognize the attitude of the mobile phone (such as switching between landscape and portrait modes), vibration-recognition-related functions (such as a pedometer or tap detection), and the like. The mobile phone may also be provided with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor.
The audio circuit 1060, speaker 1062, and microphone 1064 may provide an audio interface between the user and the cell phone. The audio circuit 1060 can transmit the electrical signal converted from the received audio data to the speaker 1062, where it is converted into a sound signal and output; on the other hand, the microphone 1064 converts the collected sound signal into an electrical signal, which is received by the audio circuit 1060 and converted into audio data. The audio data is then output to the processor 1080 for processing and transmitted to another mobile phone through the RF circuit 1010, or output to the memory 1020 for further processing.
WiFi belongs to short-distance wireless transmission technology, and the mobile phone can help the user to send and receive e-mail, browse web pages, access streaming media, etc. through the WiFi module 1070, which provides wireless broadband internet access for the user.
The processor 1080 is a control center of the mobile phone, connects various parts of the whole mobile phone by using various interfaces and lines, and executes various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1020 and calling data stored in the memory 1020, thereby integrally monitoring the mobile phone. In one embodiment, processor 1080 may include one or more processing units. In one embodiment, processor 1080 may integrate an application processor and a modem processor, wherein the application processor primarily handles operating systems, user interfaces, applications, and the like; the modem processor handles primarily wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 1080.
The handset 1000 also includes a power supply 1090 (e.g., a battery) for powering the various components, and preferably, the power supply 1090 is logically coupled to the processor 1080 via a power management system that provides management of charging, discharging, and power consumption.
In one embodiment, the cell phone 1000 may also include a camera, a bluetooth module, and the like.
In the embodiment of the present application, the processor 1080 included in the mobile terminal implements the data transmission method described above when executing the computer program stored in the memory.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the above-mentioned data transmission method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
Any reference to memory, storage, database, or other medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.