Detailed Description
The following describes embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" is merely an association relation describing the association object, and there may be three kinds of relations, for example, a and/or B, and there may be three cases where a alone exists, a and B together, and B alone exists. In addition, the character "/" herein is generally an or relationship between the front and rear related objects. Further, "more" than two or more than two herein.
Referring to fig. 1, fig. 1 is a flowchart illustrating an embodiment of a method for labeling pictures according to the present application. Specifically, the method may include the steps of:
Step S11: determining the number and types of points to be marked based on the pictures to be marked.
In a specific application scenario, the pictures to be marked may be a plurality of pictures of objects sharing the same kind of features, for example, 100 pictures containing human body parts or 200 pictures containing animal bone parts. The specific number of pictures to be marked and the feature objects may be determined according to the actual situation, and are not limited herein.
In a specific application scenario, the number and types of points to be marked may be determined based on all the features contained in the feature object of the pictures to be marked; that is, the number of points to be marked is determined based on the total number of features of the feature object. For example, when the feature objects on the plurality of pictures to be marked are human bodies, the number and types of points to be marked may be determined based on all joint features of the human body, i.e., the number of points to be marked equals the number of all human joint features. The number of points to be marked is not set according to the features visible within the portion of the human body shown on a single picture to be marked: even when a picture to be marked shows only the upper body, the joint features still cover the whole human body, not only the upper body.
In a specific application scenario, when the feature objects of the plurality of pictures to be marked are human bodies, the number and types of points to be marked may be determined based on all joint feature points of the human body. For example, when the human body has 20 joint feature points in total, the number of points to be marked is 20, and the types are respectively "nose", "under neck", "right shoulder", "right elbow", "right wrist", "left shoulder", "left elbow", "left wrist", "right crotch", "right knee", "right ankle", "left crotch", "left knee", "left ankle", "right eye", "left eye", "right ear", "left ear", "left palm center", and "right palm center". The number of features of a specific feature object may be determined manually or by a feature extraction model, which is not limited herein.
In a specific application scenario, the pictures to be marked may also be a plurality of pictures sharing the same kind of feature scene, for example, 100 building pictures. The specific number of pictures to be annotated and the feature scene may be determined according to actual conditions, and are not limited herein. The number and types of points to be marked may then be determined based on the features of the feature scene.
Step S12: labeling the pictures to be labeled correspondingly based on the number and types of the points to be labeled, so as to obtain a plurality of labeling points.
Each picture to be labeled is labeled correspondingly based on the number and types of points to be labeled obtained in the previous step, and a plurality of corresponding labeling points are obtained on each picture to be labeled.
In a specific application scenario, after the 20 points to be marked of the whole human body are determined according to the number of human body features, when a target picture to be marked that contains only part of the human body is labeled, only the human body parts visually shown in the target picture are marked, and the parts not shown are left unmarked, thereby obtaining the plurality of labeling points of the target picture. For example, when the target picture to be marked shows only the upper body, it is labeled only with the points to be marked, in number and type, that belong to the upper body, so as to obtain the plurality of labeling points.
Step S13: selecting a target labeling point from the plurality of labeling points, and connecting the target labeling point with at least one other labeled labeling point, so as to obtain labeling information of the picture based on the labeling points and the connection lines.
After the target picture to be marked is labeled and the plurality of labeling points are obtained, a target labeling point is selected from them and connected with at least one other labeled labeling point, and the labeling information of the picture is obtained based on the labeling points and the connection lines.
In a specific application scenario, after all the labeling points have been labeled, one target labeling point is selected and connected with the other labeled points, until all the labeling points are correspondingly connected. In another specific application scenario, each target labeling point is connected with the already labeled points immediately after it is labeled, until all the labeling points have been labeled. The connection lines establish a visual correspondence between the labeling points, which improves the labeling effect on the labeled picture and thus the practicability of the labeling information when it is applied.
After the labeling and the connection are completed, the labeling information of the picture and the labeled picture are obtained based on the labeled points and the connection lines.
In the above method, the number and types of points to be marked are determined based on the pictures to be marked; the pictures are labeled correspondingly based on that number and those types to obtain a plurality of labeling points; a target labeling point is selected from them and connected with at least one other labeled labeling point; and the labeling information of the picture is obtained based on the labeling points and the connection lines. Connecting the labeling points strengthens the visual correspondence between the labeling points on the picture, improving the labeling effect on the labeled picture and therefore the practicability of the labeling information when it is applied.
Referring to fig. 2, fig. 2 is a flowchart of another embodiment of the method for labeling pictures according to the present application. In this embodiment, the feature common to the pictures to be marked is taken to be a human body structure; in other embodiments, the common feature may be any other structure or feature, which is not limited herein. Specifically, the method may include the following steps:
The labeling method is applied to a labeling tool of a labeling platform, and the labeling platform labels the pictures to be labeled through the labeling tool after obtaining the pictures to be labeled.
Step S21: determining the number and types of points to be marked based on the pictures to be marked.
The number and types of the points to be marked are determined based on the obtained features of the human body structure shared by the plurality of pictures to be marked, wherein the types of the points to be marked in this embodiment refer to the types of the body parts to which the points belong.
In a specific application scenario, the features of the human body structure shared by the pictures are determined manually as ["nose", "under neck", "right shoulder", "right elbow", "right wrist", "left shoulder", "left elbow", "left wrist", "right crotch", "right knee", "right ankle", "left crotch", "left knee", "left ankle", "right eye", "left eye", "right ear", "left ear", "left palm center", "right palm center"]; the number of points to be marked that can be determined in this step is then 20, with the respective types listed above.
After the number and types of the points to be marked are determined, labeling labels of the plurality of points to be marked are determined based on them, and the labeling labels may be placed in the area to be selected of the labeling tool in the order of the human body structure. In a specific application scenario, a labeling label may be selected for labeling, and after it is labeled the next unlabeled label is selected automatically based on the order of the human body structure, which facilitates labeling. The automatic selection may be performed by a modulo operation or another method; when the modulo operation is used, code such as remainPoints[(index + 1) % remainPoints.length] may be used.
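A minimal sketch of this automatic selection, assuming remainPoints is an array of label objects with a labeled flag (the type and helper names are illustrative, not part of the labeling tool itself):

    // A labeling label in the area to be selected, in body-structure order.
    interface LabelItem {
      name: string;     // e.g. "nose", "under neck", ...
      labeled: boolean; // whether this point has already been marked
    }

    // After the label at `index` is marked, pick the next unlabeled label,
    // wrapping around via the modulo operation remainPoints[(index + 1) % remainPoints.length].
    function selectNextLabel(remainPoints: LabelItem[], index: number): LabelItem | undefined {
      for (let step = 1; step <= remainPoints.length; step++) {
        const candidate = remainPoints[(index + step) % remainPoints.length];
        if (!candidate.labeled) return candidate;
      }
      return undefined; // all labels have been marked
    }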
Referring to fig. 3, fig. 3 is a schematic diagram of an embodiment in which points to be marked in the selected area are being marked in the embodiment of fig. 2.
A plurality of labeling labels are arranged in sequence in the area to be selected 30, the labeling labels corresponding one-to-one with the points to be marked. In this embodiment, the area to be selected 30 includes a labeled label 31, an unlabeled label 32, and a label 33 that is being labeled. After the label 33 being labeled is finished, the area to be selected 30 automatically selects the next unlabeled label 32 for labeling.
In this embodiment, when the image to be marked is marked, all the points to be marked may be marked and marked, or only some of the points to be marked may be marked and marked. The setting may be specifically performed based on actual conditions, and is not limited herein.
Step S22: identifying labeling points of the picture to be labeled through a picture labeling model based on the number and types of the points to be labeled, obtaining initial labeling information of at least one pre-labeling point, and labeling the at least one pre-labeling point on the picture to be labeled by using its initial labeling information.
Because many labeling points need to be drawn on the picture to be marked, the labeling process in this embodiment may draw the picture and the labeling points with Canvas at the web front end, reducing the risk of hitting front-end performance bottlenecks or causing page lag. Specifically, a canvas is created using Canvas, the picture to be marked is drawn on the canvas, and the points to be marked are then drawn on that picture within the canvas.
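A minimal sketch of this setup with the standard HTML5 Canvas 2D API (the image path and helper name are illustrative):

    // Create a canvas, draw the picture to be marked on it, then draw points on top.
    const canvas = document.createElement('canvas');
    document.body.appendChild(canvas);
    const ctx = canvas.getContext('2d') as CanvasRenderingContext2D;

    const img = new Image();
    img.src = 'picture-to-label.jpg'; // illustrative path
    img.onload = () => {
      canvas.width = img.width;
      canvas.height = img.height;
      ctx.drawImage(img, 0, 0);           // the picture to be marked
      drawPoint(456, 915, 'right wrist'); // a point drawn on the picture
    };

    function drawPoint(x: number, y: number, name: string): void {
      ctx.beginPath();
      ctx.arc(x, y, 4, 0, Math.PI * 2); // a small dot at the target position
      ctx.fillStyle = 'red';
      ctx.fill();
      ctx.fillText(name, x + 6, y - 6); // text instance showing the point name
    }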
Before the picture to be marked is labeled manually, labeling point identification may be performed on it through a picture labeling model based on the number and types of points to be marked, yielding initial labeling information of at least one pre-labeling point, with which the pre-labeling points are labeled on the picture. The initial labeling information may include the initial coordinate information and initial type information of each pre-labeling point on the picture. The initial type information takes the values invisible, occluded, and visible, which is distinct from the type (body part) of the labeling point.
Specifically, labeling point identification is performed on the picture to be marked through the picture labeling model to obtain the initial coordinate information and initial type information of each pre-labeling point; this information is configured to obtain the label feature of each pre-labeling point; and each pre-labeling point is labeled on the picture by the picture labeling model using its label feature. The picture labeling model is trained in advance on picture annotations and may specifically be a deep neural network model or another model, which is not limited herein.
In this step, a JSON (JavaScript Object Notation, a lightweight data interchange format) configuration is defined to store the coordinate information and type information of the pre-labeling points, which provides an interface for auxiliary labeling by the picture labeling model. Specifically, the outer layer of the JSON configuration is an object; each inner key name is the name of a point, and the key value is an array of length 3 whose values represent, in order, the abscissa relative to the left boundary of the picture to be labeled, the ordinate relative to the upper boundary, and the type of the pre-labeling point, where 0 means invisible, 1 means occluded, and 2 means visible. An example of a configured label feature is as follows:
{ "Right wrist": [456.12207,915.48486,2], "under neck": [445.7528,991.6262,2], "nose": [439.641,1065.5913,2] }
Taking "right wrist": 456.12207,915.48486,2 "as an example, the right wrist" is the name and type of the pre-marked point, 456.12207,915.48486 is the coordinates of the pre-marked point, 2 is the initial type information of the pre-marked point, and is visible.
In this embodiment, the pre-labeled label features and the final labeling information share one field of the labeling tool, labelFeature: pre-labeling is performed by the picture labeling model before the secondary labeling, the pre-labeled label feature information is filled into the labelFeature field, and after it is modified and updated during the secondary labeling, the new labeling information replaces the old labelFeature. Alternatively, the label information may be split into two fields to distinguish the pre-labeled label information from the final label information.
In a specific application scenario, after the label features of the pre-labeling points are obtained through the picture labeling model, the label features may be traversed and the pre-labeling points drawn via Canvas on the picture to be marked within the canvas.
In a specific application scenario, labeling each pre-labeling point on the picture to be labeled by the picture labeling model using its label feature may be realized by code along the following lines.
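The embodiment's own code is not reproduced here; the following is a minimal sketch under the JSON format defined above, using the raw Canvas 2D API (the ratio parameter and function name are illustrative):

    // labelFeature maps a point name to [x, y, type], as configured above.
    type LabelFeature = Record<string, [number, number, number]>;

    function drawPrelabeledPoints(
      ctx: CanvasRenderingContext2D,
      labelFeature: LabelFeature,
      ratio: number, // current zoom ratio of the picture inside the canvas
    ): void {
      for (const [name, [x, y, type]] of Object.entries(labelFeature)) {
        if (type === 0) continue; // 0 = invisible, nothing to draw
        // Dividing by the ratio calibrates the point against the picture as
        // scaled and moved within the canvas, following the description below.
        const cx = x / ratio;
        const cy = y / ratio;
        ctx.beginPath();
        ctx.arc(cx, cy, 4, 0, Math.PI * 2);
        ctx.fillStyle = type === 1 ? 'orange' : 'red'; // 1 = occluded, 2 = visible (colors illustrative)
        ctx.fill();
        ctx.fillText(name, cx + 6, cy - 6); // name instance beside the dot
      }
    }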
Here labelFeature has the data structure of the pre-labeled label feature information described above; the code parses labelFeature to obtain the name, abscissa, ordinate, and type of each pre-labeling point, and draws the point based on its coordinates, thereby drawing the labeling points on the picture to be marked. Dividing the coordinate values by the ratio calibrates the scale and position of the picture, which may be scaled and moved within the canvas during labeling.
In a specific application scenario, the number of pre-labeling points labeled by the picture labeling model may be less than the number of points to be marked obtained in the preceding step. Because the accuracy of the picture labeling model is affected by its training data and number of iterations, the pre-labeling points it produces may be inaccurate; therefore, in this embodiment, the picture pre-labeled by the model is also labeled a second time, to improve the accuracy and precision of the picture labeling.
In a specific application scenario, when a pre-labeling point is labeled on the picture to be marked, an instance of the point's name is created so as to display its information intuitively. For example, when the labeling point of the left palm center is labeled, in addition to drawing a dot at the target position, a corresponding text instance displaying the words "left palm center" is created. The subsequent secondary labeling likewise creates a name instance for each point it draws.
Step S23: performing secondary labeling on the picture to be labeled to label the remaining labeling points on it, so as to obtain a plurality of labeling points.
When pictures are labeled manually, the picture to be marked needs to be displayed on a screen or in a visible area so that it can be viewed and labeled conveniently. Therefore, before the picture is labeled manually a second time, Canvas (a drawing tool) is used at the web front end to draw the picture on the canvas. Because the display area of the screen or visible area is fixed and limited, the picture to be marked must be scaled when first loaded, so that it matches the screen or visible area after initialization and is not clipped or truncated when drawn. In a specific application scenario, if the aspect ratio of the picture to be annotated is larger than that of the visible area, the picture is scaled so that its width equals the width of the visible area; otherwise it is scaled so that its height equals the height of the visible area. The scaling function in this application scenario may be implemented by code such as the following sketch.
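A minimal sketch of this scale-to-fit calculation (the function and parameter names are illustrative):

    // Scale the picture to match the visible area after initialization.
    function fitToViewport(
      imgW: number, imgH: number,   // natural size of the picture to be marked
      viewW: number, viewH: number, // size of the screen / visible area
    ): { width: number; height: number; ratio: number } {
      // Picture proportionally wider than the visible area: match widths;
      // otherwise: match heights.
      const ratio = imgW / imgH > viewW / viewH ? viewW / imgW : viewH / imgH;
      return { width: imgW * ratio, height: imgH * ratio, ratio };
    }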
Secondary manual labeling is carried out on the picture to be labeled so as to label the remaining labeling points on it, thereby obtaining a plurality of labeling points, namely all the labeling points that can be labeled on the human body structure shown in the picture to be labeled.
In the secondary labeling process, because this step is manual, a position on the canvas is clicked manually through Canvas, so that the labeling point is drawn directly on the picture to be marked. Drawing the labeling points directly by hand in the secondary labeling differs from the pre-labeling, in which the points were drawn based on the labeling labels obtained earlier.
In a specific application scenario, because the pre-labeling points marked by the picture labeling model are limited by its precision, they may be inaccurate or incomplete, so the picture is labeled a second time manually. Specifically, the secondary labeling may label the remaining labeling points other than the pre-labeling points and determine the final type information of each remaining point; and/or, when the positions of pre-labeling points are inaccurate, move at least one inaccurate pre-labeling point, so as to obtain the plurality of determined labeling points required on the picture to be marked.
In a specific application scenario, when the initial coordinate information and initial type information of a pre-labeling point are inaccurate, they are modified through the secondary labeling in this step, yielding the final determined coordinate information and determined type information of that point.
In a specific application scenario, during step S22 and step S23, while the pre-labeling points and labeling points are being labeled on the picture, the labeling points may be connected based on a preset connection rule. The preset connection rule may follow the arrangement order of the human body structure; in this embodiment, the connection rule between the labeling points is determined based on the number and types of the points to be marked.
In a specific application scenario, the connection rules between the labeling points may be as follows: ["nose", "right eye"], ["nose", "left eye"], ["right eye", "right ear"], ["left eye", "left ear"], ["nose", "under neck"], ["under neck", "right shoulder"], ["under neck", "left shoulder"], ["right shoulder", "right elbow"], ["right elbow", "right wrist"], ["left shoulder", "left elbow"], ["left elbow", "left wrist"], ["right wrist", "right palm"], ["left wrist", "left palm"], ["under neck", "right crotch"], ["right crotch", "right knee"], ["right knee", "right ankle"], ["under neck", "left crotch"], ["left crotch", "left knee"], ["left knee", "left ankle"]. In this embodiment a labeling point may be connected to more than one other labeling point; for example, a point may be connected to 5 others. In other embodiments, for example in facial feature labeling, a labeling point may connect to only a single other point; this is set according to the specific features of the feature structure and is not limited herein. The rule set is sketched as data below.
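Expressed as data, the rule set above may be sketched as follows (the constant name is illustrative):

    // Each pair names two point types whose labeling points are to be connected.
    const CONNECTION_RULES: [string, string][] = [
      ['nose', 'right eye'], ['nose', 'left eye'],
      ['right eye', 'right ear'], ['left eye', 'left ear'],
      ['nose', 'under neck'],
      ['under neck', 'right shoulder'], ['under neck', 'left shoulder'],
      ['right shoulder', 'right elbow'], ['right elbow', 'right wrist'],
      ['left shoulder', 'left elbow'], ['left elbow', 'left wrist'],
      ['right wrist', 'right palm'], ['left wrist', 'left palm'],
      ['under neck', 'right crotch'], ['right crotch', 'right knee'], ['right knee', 'right ankle'],
      ['under neck', 'left crotch'], ['left crotch', 'left knee'], ['left knee', 'left ankle'],
    ];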
In a specific application scenario, before the picture to be labeled is labeled, the colors of the connection lines can be set so as to further highlight the visualization of the labeling result. In a specific application scenario, the connection lines may default to red. In another specific application scenario, a custom connection color function may be provided by a data structure such as { "under neck-right shoulder": (255,0,85,0.65), "under neck-left shoulder": (255,0,0,0.65), "right shoulder-right elbow": (255,85,0,0.65), "right elbow-right wrist": (255,170,0,0.65), "left shoulder-left elbow": (255,255,0,0.65), "left elbow-left wrist": (170,255,0,0.65), "under neck-right crotch": (85,255,0,0.65), "right crotch-right knee": (0,255,0,0.65), "right knee-right ankle": (0,255,85,0.65), "under neck-left crotch": (0,255,170,0.65), "left crotch-left knee": (0,255,255,0.65), "left knee-left ankle": (0,170,255,0.65), "nose-under neck": (0,85,255,0.65), "nose-right eye": (0,0,255,0.65), "right eye-right ear": (255,0,170,0.65), "nose-left eye": (170,0,255,0.65), ... }, with the remaining connections (such as "left eye-left ear", "right wrist-right palm", and "left wrist-left palm": (0,255,0,0.65)) configured analogously.
Taking ["left wrist-left palm": (0,255,0,0.65)] as an example (the other entries are analogous and are not repeated), "left wrist-left palm" means that the labeling point of the left wrist is connected with that of the left palm; the first three values 0,255,0 in (0,255,0,0.65) represent the RGB color of the connection line, and the final 0.65 represents its transparency. The RGB colors and transparency of the connection lines can be set based on the actual situation; they are only illustrated here without limitation.
In this embodiment, the connection lines may be filled with color using a map-based color filling tool; specifically, the logic of customizing the connection line color is sketched below.
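A minimal sketch of this custom coloring, mapping the data structure above to an rgba() style usable as a Canvas strokeStyle (the map is abbreviated; the helper name is illustrative):

    // "pointA-pointB" -> [r, g, b, alpha], as in the configuration above.
    const LINE_COLORS: Record<string, [number, number, number, number]> = {
      'under neck-right shoulder': [255, 0, 85, 0.65],
      'left wrist-left palm': [0, 255, 0, 0.65],
      // ...remaining connections configured analogously
    };

    // Red by default when no custom color is configured.
    const DEFAULT_COLOR: [number, number, number, number] = [255, 0, 0, 0.65];

    function lineStyle(nameA: string, nameB: string): string {
      const [r, g, b, a] =
        LINE_COLORS[`${nameA}-${nameB}`] ?? LINE_COLORS[`${nameB}-${nameA}`] ?? DEFAULT_COLOR;
      return `rgba(${r}, ${g}, ${b}, ${a})`; // e.g. ctx.strokeStyle = lineStyle('left wrist', 'left palm')
    }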
When the labeling points are connected based on the preset connection rule, the connections between them are displayed with the preset colors and preset transparency.
In a specific application scenario, after all the labeling points have been labeled, one target labeling point is selected and connected with the other labeled points, until all the labeling points are correspondingly connected. In another specific application scenario, each target labeling point is connected with the already labeled points immediately after it is labeled, until all the labeling points have been labeled. The connection lines establish a visual correspondence between the labeling points, which improves the labeling effect on the labeled picture and thus the practicability of the labeling information when it is applied.
In a specific application scenario, after the target labeling point is connected with other labeling points, a correspondence is established among the target labeling point, the other labeling points, and the connection lines between them, yielding a combination of the target labeling point, the other labeling points, and their connection lines. When a target labeling point is connected with several other points, the correspondence may be established among the target point, all the other points, and the connection lines to form the combination. By combining the labeling points and connection lines, subsequent adjustment steps can operate on the combination as a whole, such as defining attributes for the group, associating object instances with it, or scaling and deleting it integrally, thereby improving labeling efficiency.
When a target labeling point is wrong and needs to be deleted, the combination allows the target labeling point, its name instance, and its corresponding connection lines to be deleted directly in one operation; otherwise, subsequent operations such as reconnecting, moving, or withdrawing the labeling point would interact erroneously with the leftovers. Deletion efficiency is also improved. The logic of this operation is sketched below.
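A minimal sketch of this one-shot deletion via the combination (the data structures are illustrative; canvas removal is stubbed in comments):

    interface LineObj {
      data: { name1: string; name2: string }; // the two points the line connects
    }

    interface PointObj {
      name: string;
      lines: LineObj[]; // every line bound to this point (the combination)
      // plus the dot instance and the text instance of the point name
    }

    // Delete the point, its name instance, and all its lines in one pass,
    // and unbind those lines from the points at their other ends.
    function deletePoint(points: Map<string, PointObj>, name: string): void {
      const point = points.get(name);
      if (!point) return;
      for (const line of point.lines) {
        const otherName = line.data.name1 === name ? line.data.name2 : line.data.name1;
        const other = points.get(otherName);
        if (other) other.lines = other.lines.filter((l) => l !== line);
        // removeFromCanvas(line); // illustrative: erase the line instance
      }
      // removeFromCanvas(point); // illustrative: erase the dot and its text instance
      points.delete(name);
    }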
In a specific application scenario, JSON is configured according to the connection combinations: based on the preset connection rule, it is calculated whether the preceding and following labeling points of a given point already exist on the canvas, and if so they are connected to it. When connecting, if for example a point F is connected to three other points X, Y, and Z, then in addition to referencing the F-X, F-Y, and F-Z line instances at the point F, the point instances X, Y, and Z must also be bound to the corresponding line instances through this calculation; the purpose is that when F is moved, the F-X, F-Y, and F-Z lines follow it. The connection operation is sketched below.
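A minimal sketch of this connection step, continuing the illustrative PointObj/LineObj types and the CONNECTION_RULES constant from the sketches above:

    // When a new point is drawn, look up its neighbours in the preset rules;
    // for every neighbour already on the canvas, create a line and bind it to
    // both point instances so the line can later follow either point.
    function connectNewPoint(newPoint: PointObj, points: Map<string, PointObj>): void {
      for (const [a, b] of CONNECTION_RULES) {
        if (a !== newPoint.name && b !== newPoint.name) continue;
        const otherName = a === newPoint.name ? b : a;
        const other = points.get(otherName);
        if (!other) continue; // neighbour not labeled yet
        const line: LineObj = {
          // name1 is the point currently being drawn, name2 the existing point.
          data: { name1: newPoint.name, name2: other.name },
        };
        // drawLine(coords) with coords = [x1, y1, x2, y2]; name1 matches [x1, y1].
        newPoint.lines.push(line); // reference the line at the new point (e.g. F)
        other.lines.push(line);    // and bind it to the existing point (e.g. X, Y, Z)
      }
    }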
When moving a labeling point, one difficulty is how to make all the lines connected to it follow. In this embodiment, when a line is drawn, data.name1 of the line object is assigned the new point currently being drawn and data.name2 the other point already existing on the canvas; according to the order of the coords parameter [x1, y1, x2, y2] passed to the drawLine() method, name1 is guaranteed to match [x1, y1]. When a labeling point is moved, for each line object bound to it, if name1 is the point currently being moved, the x1 and y1 of the line are synchronized with the position the point is moved to; if name2 is that point, x2 and y2 are synchronized instead. Line follow-up is thereby realized.
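A minimal sketch of this follow-up logic (the coords field holds the line's [x1, y1, x2, y2]; the type and helper names are illustrative):

    interface MovableLine {
      data: { name1: string; name2: string };
      coords: [number, number, number, number]; // [x1, y1, x2, y2]; name1 matches [x1, y1]
    }

    // Synchronize every line bound to the moved point with its new position.
    function followLines(boundLines: MovableLine[], movedName: string, x: number, y: number): void {
      for (const line of boundLines) {
        if (line.data.name1 === movedName) {
          line.coords[0] = x; // name1 side: update x1, y1
          line.coords[1] = y;
        } else if (line.data.name2 === movedName) {
          line.coords[2] = x; // name2 side: update x2, y2
          line.coords[3] = y;
        }
        // redrawLine(line); // illustrative
      }
    }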
In the secondary labeling process, the current operation can be pushed onto an operation record stack, and undoing the previous operation is realized through undoStack (popping the stack); the action value of an operation is new (adding a point and its lines), move (moving a point and its lines), or del (deleting a point and its lines). The position information and type information of each labeling point and the color and transparency of each connection line are saved in real time, so that labeling can conveniently be undone and loss of labeling due to unavoidable situations is avoided.
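A minimal sketch of such an operation record stack (the action values follow the description above; the snapshot shape is illustrative):

    type Action = 'new' | 'move' | 'del';

    interface Operation {
      action: Action;    // new / move / del, each covering the point and its lines
      pointName: string;
      // Snapshot saved in real time so the operation can be undone:
      // position and type of the point, color and transparency of its lines.
      before?: { x: number; y: number; type: number; lineStyles: string[] };
    }

    const undoStack: Operation[] = [];

    // Push the current operation onto the record stack.
    function record(op: Operation): void {
      undoStack.push(op);
    }

    // Undo the previous operation by popping the stack and inverting it.
    function undo(): void {
      const op = undoStack.pop();
      if (!op) return;
      switch (op.action) {
        case 'new':  /* delete the newly added point and its lines */ break;
        case 'move': /* move the point and its lines back to op.before */ break;
        case 'del':  /* re-create the deleted point and its lines from op.before */ break;
      }
    }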
To improve labeling efficiency, shortcut keys can be supported during the secondary manual labeling, for example: adding a labeling point (left click), moving a labeling point (dragging the point), deleting a labeling point (DEL), undoing (right click), zooming (mouse wheel), switching the labeling point type (A), showing/hiding labels (W), showing/hiding labeling traces (E), dragging the picture (Alt + mouse drag; on macOS, Option + mouse drag), adjusting the radius of a point, adjusting the thickness of a line, and the like, which are not limited herein.
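A minimal sketch of such shortcut handling with standard DOM events (the bound action functions are illustrative stubs):

    document.addEventListener('keydown', (e: KeyboardEvent) => {
      switch (e.key) {
        case 'Delete':      deleteSelectedPoint(); break; // DEL: delete the labeling point
        case 'a': case 'A': togglePointType();     break; // A: switch the labeling point type
        case 'w': case 'W': toggleLabels();        break; // W: show/hide labels
        case 'e': case 'E': toggleTraces();        break; // E: show/hide labeling traces
      }
    });

    // Alt + mouse drag (Option on macOS; both set altKey) drags the picture.
    document.addEventListener('mousemove', (e: MouseEvent) => {
      if (e.altKey && e.buttons === 1) panPicture(e.movementX, e.movementY);
    });

    // Illustrative stubs for the actions bound above.
    function deleteSelectedPoint(): void {}
    function togglePointType(): void {}
    function toggleLabels(): void {}
    function toggleTraces(): void {}
    function panPicture(dx: number, dy: number): void {}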
Referring to fig. 4, fig. 4 is a schematic diagram of an embodiment of a marked picture in the embodiment of fig. 2.
In this embodiment, part of a human body is shown on the labeled picture 10. Based on the portion shown, 9 labeling points are labeled on it, namely under neck 11, right shoulder 12, left shoulder 13, left elbow 14, right elbow 15, right wrist 16, left wrist 19, right crotch 17, and left crotch 18. Besides the dot itself, each of the 9 labeling points has a text instance explaining it, and the points are connected based on the preset connection rule. In this embodiment, each connection line is black with a transparency of 0.67; in other embodiments, the color and transparency of the connection lines can be customized, which is not limited herein.
Step S24: traversing each labeling point and each connection line on the labeled picture to obtain the determined coordinate information and determined type information of each labeling point and the trajectory information and connection information of each connection line, and taking these as the labeling information of the picture.
After the secondary labeling is completed, each labeling point and each connection line on the labeled picture is traversed to obtain the final determined coordinate information and determined type information of each labeling point and the trajectory information and connection information of each connection line, which together serve as the labeling information of the picture, so that it can be exported. A server interface may be requested to send the labeling information to a server, and the service interface address can be flexibly customized.
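A minimal sketch of this export step (the payload shape and endpoint are illustrative; the service interface address is customizable as noted above):

    interface ExportedAnnotation {
      points: { name: string; x: number; y: number; type: number }[]; // determined coordinate and type info
      lines: { from: string; to: string; color: string }[];           // connection info per line
    }

    // Send the traversed points and lines to the server.
    async function exportAnnotations(payload: ExportedAnnotation, endpoint: string): Promise<void> {
      await fetch(endpoint, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
      });
    }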
In the above method, the number and types of points to be marked are determined based on the pictures to be marked; the pictures are pre-labeled based on that number and those types to obtain a plurality of labeling points; and a target labeling point is selected from them and connected with at least one other labeled labeling point, so that the labeling information of the picture is obtained based on the labeling points and the connection lines. The connection lines establish a visual correspondence between the labeling points, improving the labeling effect on the labeled picture and thus the practicability of the labeling information when it is applied. Through custom JSON configuration, a user can customize on the labeling platform the labeling points of the picture to be marked and the line-matching rules between points (i.e., automatic line connection), can customize the line colors, and is supported by pre-labeling from the picture labeling model. Used in a labeling tool, the method for labeling pictures of this embodiment allows a friendly UI design, operates smoothly, and is highly extensible (it can be used alone as a tool or plug-in, or integrated into a labeling platform in operation), and it can greatly improve labeling efficiency.
Referring to fig. 5, fig. 5 is a schematic frame diagram of an embodiment of a labeling device for pictures according to the present application. The labeling device 50 comprises a determining module 51, a labeling module 52, and a connecting module 53. The determining module 51 is configured to determine the number and types of points to be marked based on the pictures to be marked; the labeling module 52 is configured to label the pictures to be marked correspondingly based on that number and those types to obtain a plurality of labeling points; and the connecting module 53 is configured to select a target labeling point from the plurality of labeling points and connect it with at least one other labeled labeling point, so as to obtain the labeling information of the picture based on the labeling points and the connection lines.
The connection module 53 is further configured to connect the target marking point with other marking points based on a preset connection rule, and establish a corresponding relationship among the target marking point, other marking points, and the connection between the target marking point and other marking points, so as to obtain a combination among the target marking point, other marking points, and the target marking point and other marking points.
The connection module 53 is further configured to display a connection between the target annotation point and other annotation points based on the preset color and the preset transparency.
The labeling module 52 is further configured to identify labeling points of the to-be-labeled picture through a picture labeling model based on the number and the type of the to-be-labeled points to obtain initial labeling information of at least one pre-labeled point, label at least one pre-labeled point on the to-be-labeled picture by using the initial labeling information of the at least one pre-labeled point, and secondarily label the to-be-labeled picture to label the remaining labeling points on the to-be-labeled picture to obtain a plurality of labeling points.
The labeling module 52 is further configured to identify labeling points of the to-be-labeled picture through the picture labeling model to obtain initial coordinate information and initial type information of each pre-labeling point, configure the initial coordinate information and the initial type information of each pre-labeling point to obtain label characteristics of each pre-labeling point, and label each pre-labeling point on the to-be-labeled picture through the picture labeling model by utilizing the label characteristics of each pre-labeling point.
The labeling module 52 is further configured to perform secondary labeling on the image to be labeled, label the remaining labeling points on the image to be labeled, determine the type of each remaining labeling point, and/or move the position of at least one pre-labeling point to obtain the labeling point.
According to the scheme, the pictures can be marked, and the marked points are connected, so that the marked effect on the marked pictures is improved, and the practicability of marked information is improved.
Referring to fig. 6, fig. 6 is a schematic diagram of a frame of an electronic device according to an embodiment of the application. The electronic device 60 comprises a memory 61 and a processor 62 coupled to each other, the processor 62 being configured to execute program instructions stored in the memory 61 to implement the steps of any of the above embodiments of the method for labeling pictures. In one specific implementation scenario, the electronic device 60 may include, but is not limited to, a microcomputer or a server; the electronic device 60 may further include a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 62 is configured to control itself and the memory 61 to implement the steps of any of the above embodiments of the method for labeling pictures. The processor 62 may also be referred to as a CPU (Central Processing Unit). The processor 62 may be an integrated circuit chip having signal processing capabilities. The processor 62 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 62 may be jointly implemented by integrated circuit chips.
According to the scheme, the pictures can be marked, and the marked points are connected, so that the marked effect on the marked pictures is improved, and the practicability of marked information is improved.
Referring to fig. 7, fig. 7 is a schematic diagram of a frame of an embodiment of a computer readable storage medium according to the present application. The computer readable storage medium 70 stores program instructions 701 capable of being executed by a processor, the program instructions 701 being configured to implement the steps of the labeling method embodiment of any of the pictures described above.
According to the scheme, the pictures can be marked, and the marked points are connected, so that the marked effect on the marked pictures is improved, and the practicability of marked information is improved.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or some of the steps of the methods of the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.