CN111783986B - Network training method and device, and gesture prediction method and device
- Publication number: CN111783986B
- Application number: CN202010638037.2A
- Authority: CN (China)
- Prior art keywords: prediction, image, dimensional, gesture, information
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; no legal analysis has been performed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The disclosure relates to a network training method and device, and a gesture prediction method and device, wherein the method comprises the following steps: predicting a two-dimensional sample image through a gesture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, wherein the prediction gesture information comprises three-dimensional rotation information and three-dimensional translation information; performing a differentiable rendering operation according to the prediction gesture information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object; determining the self-supervision training total loss of the gesture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information; and training the gesture prediction network according to the self-supervision training total loss. The method and the device can improve the accuracy of the gesture prediction network.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a network training method and device, and a gesture prediction method and device.
Background
Acquiring the six-dimensional (6D) pose (i.e., rotation with 3 degrees of freedom and translation with 3 degrees of freedom) of an object in three-dimensional (3D) space from a two-dimensional (2D) image is critical in many real-world applications; for example, it provides key information for tasks such as robotic grasping or motion planning, and in autonomous driving, obtaining the 6D poses of vehicles and pedestrians can provide driving decision information for the vehicle.
In recent years, deep learning has made considerable progress on the 6D pose estimation task; however, estimating the 6D pose of an object from only monocular RGB (red/green/blue) images remains a very challenging task. One important reason is that deep learning requires a very large amount of data, while real annotation data for 6D object pose estimation is very complex, time-consuming and labor-intensive to acquire.
Disclosure of Invention
The disclosure provides a self-supervision training technical scheme for training a neural network.
According to an aspect of the present disclosure, there is provided a network training method, including:
Predicting a two-dimensional sample image through a gesture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, wherein the prediction gesture information comprises three-dimensional rotation information and three-dimensional translation information;
Performing a differentiable rendering operation according to the predicted gesture information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object;
Determining self-supervision training total loss of the gesture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information;
And training the gesture prediction network according to the self-supervision training total loss.
In one possible implementation manner, the differentiable rendering information corresponding to the target object includes: rendering a segmentation mask, rendering a two-dimensional image, rendering a depth image,
The determining the self-supervision training total loss of the gesture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information comprises:
Determining a first self-supervising training loss according to the two-dimensional sample image and the rendered two-dimensional image;
Determining a second self-supervising training loss according to the prediction segmentation mask and the rendering segmentation mask;
Determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendering depth image;
And determining the self-supervision training total loss of the gesture prediction network according to the first self-supervision training loss, the second self-supervision training loss and the third self-supervision training loss.
In one possible implementation, the determining a first self-supervised training penalty from the two-dimensional sample image and the rendered two-dimensional image includes:
after respectively converting the two-dimensional sample image and the rendered two-dimensional image into a color model LAB mode, determining a first image loss by adopting a first loss function according to the converted two-dimensional sample image, the converted rendered two-dimensional image and the prediction segmentation mask;
Determining a second image loss by adopting a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on a multi-scale structure similarity index;
Determining a third image loss by adopting a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on a multi-scale feature distance of a depth convolution neural network;
determining the first self-supervising training loss according to the first image loss, the second image loss and the third image loss.
In one possible implementation, the determining the second self-supervised training loss according to the prediction segmentation mask and the rendering segmentation mask includes:
and determining a second self-supervision training loss by adopting a cross entropy loss function according to the prediction segmentation mask and the rendering segmentation mask.
In a possible implementation manner, the determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendered depth image includes:
respectively performing back projection operation on a depth image corresponding to the two-dimensional sample image and the rendering depth image to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendering depth image;
and determining a third self-supervision training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendering depth image.
In one possible implementation, the gesture prediction network includes: a category prediction sub-network, a bounding box prediction sub-network, and a pose prediction sub-network,
The predicting the two-dimensional sample image through the gesture predicting network to obtain a prediction segmentation mask corresponding to the target object in the two-dimensional sample image and prediction gesture information corresponding to the target object comprises the following steps:
Predicting a two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
Predicting a two-dimensional sample image through the boundary box prediction sub-network to obtain boundary box information corresponding to a target object in the two-dimensional sample image;
And processing the two-dimensional sample image, the category information and the boundary box information through the gesture prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object.
In one possible implementation, before the predicting, by the pose prediction network, the two-dimensional sample image, the method further includes:
rendering and synthesizing operation is carried out according to the three-dimensional model of the object and the preset gesture information, so that a synthesized two-dimensional image and the labeling information of the synthesized two-dimensional image are obtained, wherein the labeling information of the synthesized two-dimensional image comprises labeling object category information, labeling boundary box information, preset gesture information and preset synthesized segmentation masks;
predicting the synthesized two-dimensional image through the gesture prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information comprises prediction object type information, prediction boundary box information, a prediction synthesis segmentation mask and prediction synthesis gesture information;
and training the gesture prediction network according to the prediction information and the labeling information of the synthesized two-dimensional image.
According to an aspect of the present disclosure, there is provided a gesture prediction method, the method including:
the method comprises the steps of carrying out prediction processing on an image to be processed through a gesture prediction network to obtain gesture information of a target object in the image to be processed,
Wherein the gesture prediction network is trained by the network training method according to any one of claims 1 to 7.
According to an aspect of the present disclosure, there is provided a network training apparatus including:
The prediction module is used for predicting the two-dimensional sample image through a gesture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, wherein the prediction gesture information comprises three-dimensional rotation information and three-dimensional translation information;
the rendering module is used for performing a differentiable rendering operation according to the predicted gesture information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object;
the determining module is used for determining the self-supervision training total loss of the gesture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information;
And the self-supervision training module is used for training the gesture prediction network according to the self-supervision training total loss.
In one possible implementation manner, the differentiable rendering information corresponding to the target object includes: rendering a segmentation mask, rendering a two-dimensional image, rendering a depth image, the determination module being further configured to:
Determining a first self-supervising training loss according to the two-dimensional sample image and the rendered two-dimensional image;
Determining a second self-supervising training loss according to the prediction segmentation mask and the rendering segmentation mask;
Determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendering depth image;
And determining the self-supervision training total loss of the gesture prediction network according to the first self-supervision training loss, the second self-supervision training loss and the third self-supervision training loss.
In one possible implementation manner, the determining module is further configured to:
after respectively converting the two-dimensional sample image and the rendered two-dimensional image into a color model LAB mode, determining a first image loss by adopting a first loss function according to the converted two-dimensional sample image, the converted rendered two-dimensional image and the prediction segmentation mask;
Determining a second image loss by adopting a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on a multi-scale structure similarity index;
Determining a third image loss by adopting a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on a multi-scale feature distance of a depth convolution neural network;
determining the first self-supervising training loss according to the first image loss, the second image loss and the third image loss.
In one possible implementation manner, the determining module is further configured to:
and determining a second self-supervision training loss by adopting a cross entropy loss function according to the prediction segmentation mask and the rendering segmentation mask.
In one possible implementation manner, the determining module is further configured to:
respectively performing back projection operation on a depth image corresponding to the two-dimensional sample image and the rendering depth image to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendering depth image;
and determining a third self-supervision training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendering depth image.
In one possible implementation, the gesture prediction network includes: a category prediction sub-network, a bounding box prediction sub-network, and a pose prediction sub-network, the prediction module further configured to:
Predicting a two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
Predicting a two-dimensional sample image through the boundary box prediction sub-network to obtain boundary box information corresponding to a target object in the two-dimensional sample image;
And processing the two-dimensional sample image, the category information and the boundary box information through the gesture prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object.
In one possible implementation, the apparatus further includes:
The pre-training module is used for conducting rendering and synthesizing operation according to the three-dimensional model of the object and preset gesture information to obtain a synthesized two-dimensional image and marking information of the synthesized two-dimensional image, wherein the marking information of the synthesized two-dimensional image comprises marking object category information, marking bounding box information, preset gesture information and a preset synthesized segmentation mask;
predicting the synthesized two-dimensional image through the gesture prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information comprises prediction object type information, prediction boundary box information, a prediction synthesis segmentation mask and prediction synthesis gesture information;
and training the gesture prediction network according to the prediction information and the labeling information of the synthesized two-dimensional image.
According to an aspect of the present disclosure, there is provided a posture predicting apparatus including:
A prediction module for predicting the image to be processed through a gesture prediction network to obtain gesture information of a target object in the image to be processed,
The gesture prediction network is obtained by training by adopting the network training method described in any one of the above.
According to an aspect of the present disclosure, there is provided an electronic apparatus including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In this way, the two-dimensional sample image can be predicted through the gesture prediction network to obtain a prediction segmentation mask corresponding to the target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, wherein the prediction gesture information comprises three-dimensional rotation information and three-dimensional translation information, and a differentiable rendering operation is performed according to the prediction gesture information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object. The self-supervision training total loss of the gesture prediction network is determined according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information, and the gesture prediction network is trained according to the self-supervision training total loss. According to the network training method and device and the gesture prediction method and device, the gesture prediction network is self-supervised trained on two-dimensional sample images and depth images without labeling information, so that the accuracy of the gesture prediction network is improved, and meanwhile the training efficiency of the gesture prediction network is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the technical aspects of the disclosure.
FIG. 1 illustrates a flow chart of a network training method according to an embodiment of the present disclosure;
FIG. 2 shows a schematic diagram of a network training method according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a network training method according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of a network training method according to an embodiment of the present disclosure;
FIG. 5 illustrates a block diagram of a network training apparatus according to an embodiment of the present disclosure;
Fig. 6 illustrates a block diagram of an electronic device 800, according to an embodiment of the disclosure;
fig. 7 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" is herein merely an association relationship describing an associated object, meaning that there may be three relationships, e.g., a and/or B, may represent: a exists alone, A and B exist together, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
In training a neural network for object pose estimation, a large amount of synthetic data is generally obtained by rendering with a three-dimensional model of a known object, and the neural network is then trained on the synthetic data. However, there is a large domain gap between the synthetic data and real data, so the precision of a network trained on synthetic data is often not high and unsatisfactory, and the effect of means such as domain adaptation or domain randomization on this problem is limited.
The embodiment of the disclosure provides a self-supervision training method for a network, which can improve the prediction accuracy of the gesture prediction network by self-supervision training of the gesture prediction network through a real two-dimensional sample image and a depth image.
Fig. 1 shows a flowchart of a network training method according to an embodiment of the present disclosure. The method may be performed by an electronic device such as a terminal device or a server, where the terminal device may be a User Equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc., and the method may be implemented by a processor invoking computer readable instructions stored in a memory, or the method may be performed by a server.
As shown in fig. 1, the network training method includes:
In step S11, a two-dimensional sample image is predicted by a gesture prediction network, so as to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, where the prediction gesture information includes three-dimensional rotation information and three-dimensional translation information.
For example, the gesture prediction network is a neural network that predicts the 6D gesture of the target object, which may be applied to the fields of "robot work", "autopilot", "augmented reality", and the like. The two-dimensional sample image may be an image including a target object, which may be any object, for example: a human face, a human body, an animal, a plant, an object and the like.
The two-dimensional sample image may be input into the gesture prediction network for prediction to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, where the pixel value of any pixel in the prediction segmentation mask identifies whether that pixel belongs to the target object in the two-dimensional sample image, for example: when the pixel value is 1, the pixel point is identified as a pixel point on the target object, and when the pixel value is 0, the pixel point is identified as not being a pixel point on the target object. The predicted gesture information may include three-dimensional rotation information R of the target object in three-dimensional space, which may be represented by a quaternion, and three-dimensional translation information t = (t_x, t_y, t_z) of the target object in three-dimensional space.
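As a rough illustration, the predicted quaternion and translation vector could be assembled into a rigid transform as in the sketch below; the function names and the (w, x, y, z) quaternion order are assumptions for illustration, not taken from the disclosure.

```python
import torch

def quaternion_to_rotation_matrix(q: torch.Tensor) -> torch.Tensor:
    # q: (..., 4) unit quaternion in (w, x, y, z) order (an assumed convention).
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        torch.stack([1 - 2 * (y * y + z * z), 2 * (x * y - w * z), 2 * (x * z + w * y)], dim=-1),
        torch.stack([2 * (x * y + w * z), 1 - 2 * (x * x + z * z), 2 * (y * z - w * x)], dim=-1),
        torch.stack([2 * (x * z - w * y), 2 * (y * z + w * x), 1 - 2 * (x * x + y * y)], dim=-1),
    ], dim=-2)

def pose_matrix(q: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Assemble a 4x4 rigid transform [R | t] from quaternion q and translation t = (t_x, t_y, t_z).
    R = quaternion_to_rotation_matrix(q / q.norm(dim=-1, keepdim=True))
    T = torch.eye(4).expand(*q.shape[:-1], 4, 4).clone()
    T[..., :3, :3] = R
    T[..., :3, 3] = t
    return T
```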
In step S12, a differentiable rendering operation is performed according to the predicted gesture information corresponding to the target object and the three-dimensional model corresponding to the target object, so as to obtain differentiable rendering information corresponding to the target object.
For example, the two-dimensional sample image may be detected, a three-dimensional model corresponding to the target object in the two-dimensional sample image is determined, and a differentiable renderer performs a rendering operation according to the predicted gesture information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object, where the differentiable rendering information may include a rendering segmentation mask, a rendering two-dimensional image and a rendering depth image.
In step S13, a self-supervised training total loss of the gesture prediction network is determined according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image, and the differentiable rendering information.
For example, the self-supervision training total loss of the gesture training network can be obtained by performing visual consistency constraint and geometric consistency constraint on the two-dimensional sample image, the prediction segmentation mask and the depth image corresponding to the two-dimensional sample image and the differentiable rendering information obtained by rendering the predicted gesture information.
In step S14, the gesture prediction network is trained according to the self-supervised training aggregate loss.
For example, parameters of the gesture prediction network may be adjusted according to the self-supervised training total loss until the self-supervised training total loss meets the training requirement, thereby completing the self-supervised training of the gesture prediction network.
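As a rough illustration of steps S11 to S14, the following sketch shows one way such a self-supervised update could be wired together; `pose_net`, `diff_render`, `total_self_supervised_loss` and `unlabeled_loader` are hypothetical placeholders, not components defined by the disclosure.

```python
import torch

optimizer = torch.optim.Adam(pose_net.parameters(), lr=1e-4)  # assumed optimizer and learning rate

for rgb, depth, model_3d in unlabeled_loader:            # no ground-truth poses are needed
    pred_mask, pred_R, pred_t = pose_net(rgb)             # step S11: predict mask and pose
    rend_mask, rend_rgb, rend_depth = diff_render(model_3d, pred_R, pred_t)  # step S12: differentiable rendering
    loss = total_self_supervised_loss(                    # step S13: visual + geometric consistency
        rgb, depth, pred_mask, rend_rgb, rend_mask, rend_depth)
    optimizer.zero_grad()
    loss.backward()                                       # gradients flow back through the renderer
    optimizer.step()                                      # step S14: update the network parameters
```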
In this way, the two-dimensional sample image can be predicted through the gesture prediction network to obtain a prediction segmentation mask corresponding to the target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, wherein the prediction gesture information comprises three-dimensional rotation information and three-dimensional translation information, and a differentiable rendering operation is performed according to the prediction gesture information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object. The self-supervision training total loss of the gesture prediction network is determined according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information, and the gesture prediction network is trained according to the self-supervision training total loss. According to the network training method provided by the embodiment of the disclosure, the gesture prediction network is trained in a self-supervision manner on two-dimensional sample images and depth images without labeling information, so that the training efficiency of the gesture prediction network can be improved while the accuracy of the gesture prediction network is improved.
In one possible implementation manner, the differentiable rendering information corresponding to the target object may include: rendering a segmentation mask, rendering a two-dimensional image, and rendering a depth image, wherein determining the total training loss of the gesture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image, and the differentiable rendering information may include:
Determining a first self-supervising training loss according to the two-dimensional sample image and the rendered two-dimensional image;
Determining a second self-supervising training loss according to the prediction segmentation mask and the rendering segmentation mask;
Determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendering depth image;
And determining the self-supervision training total loss of the gesture prediction network according to the first self-supervision training loss, the second self-supervision training loss and the third self-supervision training loss.
For example, the rendering segmentation mask may be a mask image obtained by rendering, and the pixel value of any pixel in the rendering segmentation mask is used to identify whether the pixel is a pixel in a target object in the sample two-dimensional image. Rendering the two-dimensional image may be a two-dimensional image obtained through the three-dimensional model and the predicted pose information of the target object, rendering the depth image may be a depth image obtained through the three-dimensional model and the predicted pose information of the target object, and the rendering process may be completed by a renderer of related art, such as a Soft rasterized rendering engine (Soft-Rasterizer), which is not described in detail in the embodiments of the present disclosure.
The visual consistency constraint between the two-dimensional sample image and the rendered two-dimensional image can be established, the visual consistency constraint between the prediction segmentation mask and the rendered segmentation mask is established, the geometric consistency constraint between the depth image corresponding to the two-dimensional sample image and the rendered depth image is established, and the gesture prediction network is optimized by optimizing the visual consistency and the geometric consistency of the two self-supervision constraints.
The self-supervision training total loss of the gesture prediction network includes a loss determined by the visual consistency constraint and a loss determined by the geometric consistency constraint, wherein the loss determined by the visual consistency constraint includes the first self-supervision training loss and the second self-supervision training loss, the loss determined by the geometric consistency constraint includes the third self-supervision training loss, and the self-supervision training total loss of the gesture prediction network may be determined by the following formula (1).

$L_{self} = L_{visual} + \eta L_{geom}$   formula (1)

Where $L_{self}$ represents the self-supervision training total loss, $L_{visual}$ represents the loss determined by the visual consistency constraint (i.e., the sum of the first self-supervision training loss and the second self-supervision training loss), $L_{geom}$ represents the loss determined by the geometric consistency constraint (i.e., the third self-supervision training loss), and $\eta$ represents the weight of the third self-supervision training loss.
In one possible implementation manner, the determining a first self-supervised training loss according to the two-dimensional sample image and the rendered two-dimensional image may include:
after respectively converting the two-dimensional sample image and the rendered two-dimensional image into a color model LAB mode, determining a first image loss by adopting a first loss function according to the converted two-dimensional sample image, the converted rendered two-dimensional image and the prediction segmentation mask;
Determining a second image loss by adopting a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on a multi-scale structure similarity index;
Determining a third image loss by adopting a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on a multi-scale feature distance of a depth convolution neural network;
determining the first self-supervising training loss according to the first image loss, the second image loss and the third image loss.
For example, three loss functions may be employed to determine the first self-supervising training loss.
The first loss function converts the two-dimensional sample image and the rendered two-dimensional image into the LAB (CIELab color model) mode, discards the luminance L channel of both converted images, and computes the 1-norm distance between them as the first image loss; the first loss function may refer to the following formula (2).

$L_{ab} = \frac{1}{|N_+|} \sum_{j \in N_+} M_P^j \left\| \rho(I_S)_j - \rho(I_R)_j \right\|_1$   formula (2)

Where $L_{ab}$ may represent the first image loss, $M_P$ may represent the prediction segmentation mask, $N_+$ may represent the area in the prediction segmentation mask where the pixel value is greater than 0, $\rho$ may represent a color space transform operation, $I_S$ may represent the two-dimensional sample image, $I_R$ may represent the rendered two-dimensional image, and $M_P^j$ represents the j-th pixel point in the prediction segmentation mask.
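A minimal sketch of this LAB-space loss, assuming PyTorch tensors and the kornia library for the RGB-to-LAB conversion; names and tensor shapes are illustrative assumptions.

```python
import torch
import kornia  # assumed available for the RGB -> LAB conversion

def lab_image_loss(img_s, img_r, mask_p, eps=1e-6):
    # img_s, img_r: (B, 3, H, W) RGB images in [0, 1]; mask_p: (B, 1, H, W) prediction mask.
    lab_s = kornia.color.rgb_to_lab(img_s)[:, 1:]            # drop the luminance L channel, keep a, b
    lab_r = kornia.color.rgb_to_lab(img_r)[:, 1:]
    diff = (lab_s - lab_r).abs().sum(dim=1, keepdim=True)    # per-pixel 1-norm distance
    return (diff * mask_p).sum() / (mask_p.sum() + eps)      # average over the predicted object region
```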
The second loss function is a loss function based on MS-SSIM (Multi-Scale Structural Similarity Index); the second loss function may refer to the following formula (3).

$L_{ms\text{-}ssim} = 1 - \operatorname{ms-ssim}(I_S \odot M_P,\ I_R,\ S)$   formula (3)

Where $L_{ms\text{-}ssim}$ may represent the second image loss, $\operatorname{ms-ssim}$ represents the multi-scale structural similarity index function, $\odot$ represents element-wise multiplication, and $S$ is the number of scales employed; as an example, $S$ may take a value of 5.
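This term can be sketched with the third-party pytorch_msssim package (an assumption; the disclosure does not prescribe an implementation), whose default weighting already uses 5 scales.

```python
from pytorch_msssim import ms_ssim  # third-party package, assumed available

def msssim_image_loss(img_s, img_r, mask_p):
    # Masked sample image vs. rendered image; the 5 default scales match S = 5 above.
    return 1.0 - ms_ssim(img_s * mask_p, img_r, data_range=1.0)
```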
The third loss function is a perceptual measurement loss function based on a deep convolutional neural network: features of different layers of the two-dimensional sample image and the rendered two-dimensional image are respectively extracted with a pre-trained deep convolutional neural network, and the average 2-norm distance between the normalized features of the two images is taken as the third image loss; the third loss function may refer to the following formula (4).

$L_{perceptual} = \frac{1}{L} \sum_{l=1}^{L} \frac{1}{|N_l|} \sum_{j \in N_l} \left\| \hat{\phi}_l(I_S \odot M_P)_j - \hat{\phi}_l(I_R)_j \right\|_2$   formula (4)

Where $L_{perceptual}$ denotes the third image loss, $L$ is the total number of layers from which features are acquired, $l$ may denote the layer number, $\hat{\phi}_l$ may represent the normalized features of layer $l$, $N_l$ is the set of level-$l$ features, and $|N_l|$ is the number of level-$l$ features; as an example, $L$ may take a value of 5.
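One possible realization of this perceptual distance, assuming a pre-trained VGG16 from a recent torchvision as the deep convolutional feature extractor and an arbitrary choice of L = 5 layers; the layer indices are illustrative, not specified by the disclosure.

```python
import torch
import torch.nn.functional as F
import torchvision

vgg = torchvision.models.vgg16(weights=torchvision.models.VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)          # freeze the feature extractor
layer_ids = [3, 8, 15, 22, 29]       # hypothetical choice of L = 5 feature layers

def perceptual_loss(img_s_masked, img_r):
    # img_s_masked = I_S * M_P; gradients still flow into the rendered image img_r.
    loss, x_s, x_r = 0.0, img_s_masked, img_r
    for i, layer in enumerate(vgg):
        x_s, x_r = layer(x_s), layer(x_r)
        if i in layer_ids:
            f_s = F.normalize(x_s, dim=1)               # channel-normalized features
            f_r = F.normalize(x_r, dim=1)
            loss = loss + (f_s - f_r).norm(dim=1).mean()  # average 2-norm distance
    return loss / len(layer_ids)
```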
The first image loss, the second image loss and the third image loss are then weighted and summed to obtain the first self-supervision training loss.
In one possible implementation manner, the determining the second self-supervised training loss according to the prediction segmentation mask and the rendering segmentation mask may include:
and determining a second self-supervision training loss by adopting a cross entropy loss function according to the prediction segmentation mask and the rendering segmentation mask.
For example, due to imperfections in the prediction segmentation mask, the consistency constraint between the prediction segmentation mask and the rendering segmentation mask adopts a cross entropy loss function that re-weights the positive and negative regions, which may refer to the following formula (5).

$L_{mask} = -\frac{1}{|N_+|} \sum_{j \in N_+} \log M_R^j - \frac{1}{|N_-|} \sum_{j \in N_-} \log\left(1 - M_R^j\right)$   formula (5)

Where $L_{mask}$ represents the second self-supervision training loss, $M_R$ represents the rendering segmentation mask, $N_-$ may represent the region in the prediction segmentation mask where the pixel value is equal to 0, and $M_R^j$ represents the j-th pixel point in the rendering segmentation mask.
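A sketch of such a region-reweighted cross entropy, assuming soft masks with values in [0, 1]; this is one plausible form, not necessarily the exact one used by the disclosure.

```python
import torch

def mask_loss(pred_mask, rend_mask, eps=1e-6):
    # Region-reweighted cross entropy between the prediction mask M_P and the rendered mask M_R.
    pos = (pred_mask > 0).float()        # N+ : pixels predicted as belonging to the object
    neg = 1.0 - pos                      # N- : background pixels
    log_p = torch.log(rend_mask.clamp(min=eps))
    log_n = torch.log((1.0 - rend_mask).clamp(min=eps))
    return -(pos * log_p).sum() / (pos.sum() + eps) - (neg * log_n).sum() / (neg.sum() + eps)
```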
In a possible implementation manner, the determining a third self-supervised training loss according to the depth image corresponding to the two-dimensional sample image and the rendered depth image may include:
respectively performing back projection operation on a depth image corresponding to the two-dimensional sample image and the rendering depth image to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendering depth image;
and determining a third self-supervision training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendering depth image.
For example, the depth image corresponding to the two-dimensional sample image and the rendering depth image may be respectively converted into point cloud information under the camera coordinate system through a back projection operation, and a geometric consistency constraint is established between the point cloud information corresponding to the depth image of the two-dimensional sample image and the point cloud information corresponding to the rendering depth image, for example by the chamfer distance between the two point clouds. The back projection operation may refer to the following formula (6), and the chamfer distance may refer to formula (7).

$p_j = D_j \, M_j \, K^{-1} \left[ x_j,\ y_j,\ 1 \right]^T$   formula (6)

Where $D$ may represent a depth image (the depth image corresponding to the two-dimensional sample image or the rendering depth image), $M$ may represent a segmentation mask (the prediction segmentation mask or the rendering segmentation mask), $K$ may represent the camera intrinsic parameters, and $x_j$ and $y_j$ may represent the two-dimensional coordinates of the j-th pixel point.

$L_{geom} = \frac{1}{|p_S|} \sum_{x \in p_S} \min_{y \in p_R} \| x - y \|_2 + \frac{1}{|p_R|} \sum_{y \in p_R} \min_{x \in p_S} \| x - y \|_2$   formula (7)

Where $p_S$ may represent the point cloud information corresponding to the depth image of the two-dimensional sample image, $p_R$ may represent the point cloud information corresponding to the rendering depth image, and $L_{geom}$ may represent the third self-supervision training loss.
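Formulas (6) and (7) can be sketched as follows, assuming a pinhole camera with intrinsics K and batched depth/mask tensors; the helper names are illustrative.

```python
import torch

def backproject(depth, mask, K):
    # Lift masked depth pixels to a point cloud in the camera frame, as in formula (6).
    B, _, H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()      # (H, W, 3) homogeneous pixels
    rays = pix.reshape(-1, 3) @ torch.linalg.inv(K).T                  # K^{-1} [x_j, y_j, 1]^T
    pts = rays.unsqueeze(0) * depth.reshape(B, -1, 1)                  # scale each ray by its depth
    return [pts[b][mask.reshape(B, -1)[b] > 0] for b in range(B)]      # keep only masked points

def chamfer(p_s, p_r):
    # Symmetric chamfer distance between two point clouds, as in formula (7).
    d = torch.cdist(p_s, p_r)                                          # pairwise 2-norm distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()
```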
That is, the total network loss can be calculated by the following formula (8):

$L_{self} = L_{mask} + \alpha L_{ab} + \beta L_{ms\text{-}ssim} + \gamma L_{perceptual} + \eta L_{geom}$   formula (8)
Wherein α, β, γ are weights of the first image loss, the second image loss, and the third image loss, respectively, such as: α=0.2, β=1, γ=0.15.
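Putting formula (8) together with the example weights α = 0.2, β = 1, γ = 0.15 is then a simple weighted sum; η is not given a value in the text, so the default below is only an assumed placeholder.

```python
def total_self_supervised_loss(losses, alpha=0.2, beta=1.0, gamma=0.15, eta=1.0):
    # losses: dict of the individual terms computed above; eta = 1.0 is an assumption.
    return (losses["mask"]
            + alpha * losses["ab"]
            + beta * losses["ms_ssim"]
            + gamma * losses["perceptual"]
            + eta * losses["geom"])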
After deriving the self-supervision training total loss, the gesture prediction network may be trained based on the self-supervision training total loss; an exemplary self-supervision training process of the gesture prediction network may refer to fig. 2.
In one possible implementation, the gesture prediction network includes: a category prediction sub-network, a bounding box prediction sub-network, and a pose prediction sub-network,
The predicting, by the gesture predicting network, the two-dimensional sample image to obtain a prediction segmentation mask corresponding to the target object in the two-dimensional sample image and prediction gesture information corresponding to the target object may include:
Predicting a two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
Predicting a two-dimensional sample image through the boundary box prediction sub-network to obtain boundary box information corresponding to a target object in the two-dimensional sample image;
And processing the two-dimensional sample image, the category information and the boundary box information through the gesture prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object.
For example, the category prediction sub-network and the bounding box prediction sub-network may be constructed on a detector based on an FPN (Feature Pyramid Network): the category prediction sub-network predicts the two-dimensional sample image to obtain the category information of the target object in the two-dimensional sample image, and the bounding box prediction sub-network predicts the two-dimensional sample image to obtain the bounding box information corresponding to the target object in the two-dimensional sample image. The detector then fuses the extracted FPN features, which may be exemplified as follows: the channel dimension of the different FPN layer features is reduced from 128 to 64 by 1 x 1 convolution, the spatial size of the different layer features is up-sampled or down-sampled to 1/8 of the input image by bilinear interpolation (e.g., for an input picture of 480 x 640, unified to 60 x 80), and the different layer features of unified size are then concatenated along the channel dimension.
After the FPN features are fused, the fused FPN features are concatenated with the two-dimensional sample image and the two-dimensional coordinates corresponding to the two-dimensional sample image to obtain new features. The features of each target object are then processed by the gesture prediction sub-network to obtain the gesture information and the prediction segmentation mask corresponding to the target object.
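A sketch of the described feature fusion, assuming four FPN levels with 128 channels each; the helper name and the `reduce_convs` module list (one `nn.Conv2d(128, 64, 1)` per level) are assumptions.

```python
import torch
import torch.nn.functional as F

def fuse_fpn_features(fpn_feats, image, coords, reduce_convs, out_hw=(60, 80)):
    # 1x1 convs reduce each FPN level from 128 to 64 channels, every level is resized to
    # 1/8 of the input resolution by bilinear interpolation, and all levels are concatenated
    # along the channel dimension together with the resized image and a 2-channel coordinate map.
    fused = [F.interpolate(conv(f), size=out_hw, mode="bilinear", align_corners=False)
             for f, conv in zip(fpn_feats, reduce_convs)]
    img = F.interpolate(image, size=out_hw, mode="bilinear", align_corners=False)
    xy = F.interpolate(coords, size=out_hw, mode="bilinear", align_corners=False)
    return torch.cat(fused + [img, xy], dim=1)
```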
Wherein the gesture prediction sub-network may include: a mask prediction sub-network, a quaternion sub-network, a 2D center point prediction sub-network, and a center-point-to-camera-distance prediction sub-network. The mask prediction sub-network outputs the prediction segmentation mask, the quaternion sub-network outputs the three-dimensional rotation information, and the 2D center point prediction sub-network outputs two-dimensional coordinates, which are transformed together with the distance output by the center-point-to-camera-distance prediction sub-network to obtain the three-dimensional translation information of the target object; the three-dimensional translation information and the three-dimensional rotation information form the gesture information of the target object, as shown in fig. 3.
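The four prediction heads and the recovery of the three-dimensional translation from the 2D center point and its camera distance could look roughly as follows; this is an illustrative sketch in which layer types and sizes are assumptions.

```python
import torch
import torch.nn as nn

class PoseHeads(nn.Module):
    # Illustrative heads only; channel sizes and layer depths are assumptions.
    def __init__(self, in_ch):
        super().__init__()
        self.mask_head = nn.Conv2d(in_ch, 1, 1)        # prediction segmentation mask
        self.quat_head = nn.Linear(in_ch, 4)           # quaternion -> three-dimensional rotation
        self.center_head = nn.Linear(in_ch, 2)         # 2D center point (u, v)
        self.depth_head = nn.Linear(in_ch, 1)          # center-point distance to the camera

    def forward(self, roi_feat, K_inv):
        pooled = roi_feat.mean(dim=(2, 3))              # (B, C) global feature per object
        q = nn.functional.normalize(self.quat_head(pooled), dim=-1)
        uv, tz = self.center_head(pooled), self.depth_head(pooled)
        # Back-project the 2D center with its distance to recover t = (t_x, t_y, t_z).
        uv1 = torch.cat([uv, torch.ones_like(tz)], dim=-1)
        t = (uv1 @ K_inv.T) * tz
        return torch.sigmoid(self.mask_head(roi_feat)), q, t
```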
In one possible implementation, before the predicting, by the pose prediction network, the two-dimensional sample image, the method may further include:
rendering and synthesizing operation is carried out according to the three-dimensional model of the object and the preset gesture information, so that a synthesized two-dimensional image and the labeling information of the synthesized two-dimensional image are obtained, wherein the labeling information of the synthesized two-dimensional image comprises labeling object category information, labeling boundary box information, preset gesture information and preset synthesized segmentation masks;
predicting the synthesized two-dimensional image through the gesture prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information comprises prediction object type information, prediction boundary box information, a prediction synthesis segmentation mask and prediction synthesis gesture information;
and training the gesture prediction network according to the prediction information and the labeling information of the synthesized two-dimensional image.
For example, the gesture prediction network may be pre-trained by a composite two-dimensional image before being self-supervised trained by two-dimensional sample information.
For example, a two-dimensional image can be synthesized from a three-dimensional model of a known object and preset gesture information through OpenGL (Open Graphics Library) and a renderer based on a physics engine, and the annotation information of the synthesized two-dimensional image can be obtained in the synthesis process, including annotation object category information, annotation bounding box information, preset gesture information and a preset synthesis segmentation mask.
And processing the synthesized two-dimensional image through an attitude prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information can comprise prediction object category information, prediction boundary box information, a prediction synthesis segmentation mask and prediction synthesis attitude information.
In the training process, the first loss may be calculated according to the predicted object class information and the labeled object class information, the second loss may be calculated according to the predicted bounding box information and the labeled bounding box information, the third loss may be calculated according to the predicted synthesized segmentation mask and the preset synthesized segmentation mask, and the fourth loss may be calculated according to the predicted synthesized pose information and the preset pose information; the total loss of the gesture prediction network may include the first loss, the second loss, the third loss and the fourth loss, and may be determined by the following formula (9).

$L_{synthetic} = \lambda_{class} L_{focal} + \lambda_{box} L_{giou} + \lambda_{mask} L_{bce} + \lambda_{pose} L_{pose}$   formula (9)

Where $L_{synthetic}$ may represent the total loss of the gesture prediction network, and $L_{focal}$, $L_{giou}$, $L_{bce}$ and $L_{pose}$ represent the first loss, the second loss, the third loss and the fourth loss, respectively. Here $L_{pose} = \frac{1}{|\mathcal{M}|} \sum_{x \in \mathcal{M}} \left\| (\hat{R}x + \hat{t}) - (\bar{R}x + \bar{t}) \right\|_1$ is the average 1-norm distance between the points $x$ of the three-dimensional model $\mathcal{M}$ of the object transformed by the predicted gesture information and by the preset gesture information, where $\hat{R}$ is the three-dimensional rotation information and $\hat{t}$ the three-dimensional translation information in the predicted gesture information, and $\bar{R}$ and $\bar{t}$ are the three-dimensional rotation information and the three-dimensional translation information in the preset gesture information. $\lambda_{class}$, $\lambda_{box}$, $\lambda_{mask}$ and $\lambda_{pose}$ represent the weights of the first loss, the second loss, the third loss and the fourth loss, respectively, and these weights may take the same value or different values.
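The pose term of formula (9) can be sketched directly from its definition; the names below are illustrative only.

```python
import torch

def synthetic_pose_loss(model_points, R_pred, t_pred, R_gt, t_gt):
    # Average 1-norm distance between the model points transformed by the predicted
    # pose (R_pred, t_pred) and by the preset (ground-truth) pose (R_gt, t_gt).
    p_pred = model_points @ R_pred.T + t_pred    # model_points: (N, 3)
    p_gt = model_points @ R_gt.T + t_gt
    return (p_pred - p_gt).abs().sum(dim=-1).mean()
```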
After the pre-training is performed on the synthesized two-dimensional images, when the gesture prediction network is self-supervised trained with the two-dimensional sample images, only the gesture prediction sub-network may be trained, while the other sub-networks do not update their network parameters.
In order for those skilled in the art to better understand the disclosed embodiments, the disclosed embodiments are described below by way of specific examples.
Referring to fig. 4, the training of the pose prediction network is divided into two stages.
In the first stage, a large number of synthesized two-dimensional images are generated from the three-dimensional model of the object through OpenGL and a rendering method based on a physics engine; the labeling information of the synthesized two-dimensional images can be obtained in the synthesis process, and the gesture prediction network is trained to output the category information, bounding box information, prediction segmentation mask and gesture information of the object.
In the second stage, the unlabeled real two-dimensional sample image is input into the gesture prediction network to obtain the prediction gesture information and the prediction segmentation mask of the target object in the two-dimensional sample image, and the prediction gesture information and the three-dimensional model of the target object are input into a differentiable renderer to obtain a rendering segmentation mask, a rendering two-dimensional image and a rendering depth image. Visual consistency constraints are established between the rendering segmentation mask and the prediction segmentation mask, and between the rendering two-dimensional image and the real two-dimensional sample image; a geometric consistency constraint is established between the point cloud information respectively corresponding to the rendering depth image and the depth image corresponding to the two-dimensional sample image. The gesture prediction network is then self-supervised trained by optimizing these two self-supervision constraints.
The embodiment of the disclosure provides a gesture prediction method, which comprises the following steps:
the method comprises the steps of carrying out prediction processing on an image to be processed through a gesture prediction network to obtain gesture information of a target object in the image to be processed,
The gesture prediction network is obtained by training by adopting the network training method described in any one of the above.
For example, the image to be processed may be predicted by using the gesture prediction network trained by any one of the foregoing methods, so as to obtain gesture information of the target object in the image to be processed.
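At inference time the trained network is simply applied to the image to be processed; a minimal sketch, assuming the hypothetical `pose_net` interface used above.

```python
import torch

pose_net.eval()
with torch.no_grad():
    pred_mask, rotation_q, translation = pose_net(image_to_process)  # mask, quaternion, (t_x, t_y, t_z)
```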
Thus, according to the gesture prediction method provided by the embodiment of the disclosure, the accuracy of gesture prediction can be improved.
It will be appreciated that the above-mentioned method embodiments of the present disclosure may be combined with each other to form combined embodiments without departing from the principle and logic, which, limited by space, will not be repeated in the present disclosure. It will be appreciated by those skilled in the art that in the above-described methods of the embodiments, the particular order of execution of the steps should be determined by their function and possible inherent logic.
In addition, the disclosure further provides a network training device, a gesture prediction device, an electronic device, a computer readable storage medium, and a program, all of which may be used to implement any one of the network training methods and gesture prediction methods provided in the disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding descriptions in the method parts, which are not repeated.
Fig. 5 shows a block diagram of a network training apparatus according to an embodiment of the present disclosure, as shown in fig. 5, the apparatus comprising:
The prediction module 51 may be configured to predict, through a gesture prediction network, a two-dimensional sample image to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, where the prediction gesture information includes three-dimensional rotation information and three-dimensional translation information;
The rendering module 52 may be configured to perform a differentiable rendering operation according to the predicted gesture information corresponding to the target object and the three-dimensional model corresponding to the target object, so as to obtain differentiable rendering information corresponding to the target object;
The determining module 53 may be configured to determine the self-supervision training total loss of the gesture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image, and the differentiable rendering information;
The self-supervised training module 54 may be configured to train the gesture prediction network according to the self-supervision training total loss.
In this way, the two-dimensional sample image can be predicted through the gesture prediction network to obtain a prediction segmentation mask corresponding to the target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, where the prediction gesture information includes three-dimensional rotation information and three-dimensional translation information, and a differentiable rendering operation can be performed according to the prediction gesture information corresponding to the target object and the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object. The self-supervision training total loss of the gesture prediction network is then determined according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image, and the differentiable rendering information, and the gesture prediction network is trained according to this total loss. According to the network training device provided by the embodiment of the disclosure, the gesture prediction network is trained in a self-supervised manner on two-dimensional sample images and depth images without labeling information, which improves both the accuracy of the gesture prediction network and the efficiency of its training.
In one possible implementation, the differentiable rendering information corresponding to the target object includes a rendering segmentation mask, a rendered two-dimensional image, and a rendered depth image, and the determining module 53 may be further configured to:
Determining a first self-supervising training loss according to the two-dimensional sample image and the rendered two-dimensional image;
Determining a second self-supervising training loss according to the prediction segmentation mask and the rendering segmentation mask;
Determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendering depth image;
And determining the self-supervision training total loss of the gesture prediction network according to the first self-supervision training loss, the second self-supervision training loss and the third self-supervision training loss.
In one possible implementation, the determining module 53 may be further configured to:
after the two-dimensional sample image and the rendered two-dimensional image are respectively converted into the LAB color space, determining a first image loss by using a first loss function according to the converted two-dimensional sample image, the converted rendered two-dimensional image, and the prediction segmentation mask;
Determining a second image loss by using a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on the multi-scale structural similarity index;
Determining a third image loss by using a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on multi-scale feature distances of a deep convolutional neural network;
determining the first self-supervising training loss according to the first image loss, the second image loss and the third image loss.
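A hedged sketch of this first training loss is given below. `rgb_to_lab`, `ms_ssim` and `deep_features` are assumed helpers (for example a color-space conversion, a multi-scale structural similarity implementation, and the multi-scale activations of a pretrained convolutional network); the term weights are illustrative and are not specified by the disclosure.

```python
import torch.nn.functional as F

def visual_consistency_loss(real_rgb, rendered_rgb, pred_mask,
                            w_lab=1.0, w_ssim=1.0, w_feat=1.0):  # illustrative weights
    mask = pred_mask.unsqueeze(1)  # (B, H, W) -> (B, 1, H, W), broadcast over channels
    real_m, rendered_m = real_rgb * mask, rendered_rgb * mask

    # First image loss: L1 distance in the LAB color space, restricted to the predicted mask.
    loss_lab = F.l1_loss(rgb_to_lab(real_rgb) * mask, rgb_to_lab(rendered_rgb) * mask)
    # Second image loss: multi-scale structural similarity (MS-SSIM) on the masked images.
    loss_ssim = 1.0 - ms_ssim(real_m, rendered_m)
    # Third image loss: distance between multi-scale features of a deep convolutional network.
    loss_feat = sum(F.l1_loss(a, b)
                    for a, b in zip(deep_features(real_m), deep_features(rendered_m)))

    return w_lab * loss_lab + w_ssim * loss_ssim + w_feat * loss_feat
```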
In one possible implementation, the determining module 53 may be further configured to:
and determining a second self-supervision training loss by adopting a cross entropy loss function according to the prediction segmentation mask and the rendering segmentation mask.
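For illustration only, and assuming both masks are probability maps in [0, 1], this second training loss could be sketched as:

```python
import torch.nn.functional as F

def mask_consistency_loss(pred_mask, rendered_mask):
    # Binary cross-entropy between the predicted mask and the rendered mask used as a soft target.
    return F.binary_cross_entropy(pred_mask, rendered_mask)
```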
In one possible implementation, the determining module 53 may be further configured to:
respectively performing back projection operation on a depth image corresponding to the two-dimensional sample image and the rendering depth image to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendering depth image;
and determining a third self-supervision training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendering depth image.
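A sketch of the back projection and of one possible point-cloud comparison is shown below, for a single depth map of shape (H, W) and pinhole intrinsics `K` (both assumptions; batching is analogous). The disclosure does not fix the exact point-cloud distance, so a masked L1 distance is used purely as an example.

```python
import torch

def backproject(depth, K):
    # depth: (H, W) depth map, K: 3x3 pinhole intrinsics -> (H*W, 3) camera-space points.
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=depth.dtype),
                          torch.arange(W, dtype=depth.dtype), indexing="ij")
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    return torch.stack([x, y, z], dim=-1)

def geometric_consistency_loss(real_depth, rendered_depth, K, pred_mask):
    # Compare the two point clouds only where the object is predicted to be visible
    # and a valid real depth measurement exists.
    valid = (pred_mask.reshape(-1) > 0.5) & (real_depth.reshape(-1) > 0)
    real_pts = backproject(real_depth, K)[valid]
    rendered_pts = backproject(rendered_depth, K)[valid]
    return torch.abs(real_pts - rendered_pts).mean()
```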
In one possible implementation, the gesture prediction network may include a category prediction sub-network, a bounding box prediction sub-network, and a gesture prediction sub-network, and the prediction module 51 may be further configured to:
Predicting a two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
Predicting the two-dimensional sample image through the bounding box prediction sub-network to obtain boundary box information corresponding to the target object in the two-dimensional sample image;
And processing the two-dimensional sample image, the category information and the boundary box information through the gesture prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object.
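The composition of the three sub-networks can be pictured with the following sketch; the concrete sub-network architectures and the dictionary-style output are assumptions for illustration, not the disclosure's exact design.

```python
import torch.nn as nn

class GesturePredictionNetwork(nn.Module):
    """Chains the three sub-networks described above on one two-dimensional image."""

    def __init__(self, category_subnet, bbox_subnet, gesture_subnet):
        super().__init__()
        self.category_subnet = category_subnet  # image -> category information
        self.bbox_subnet = bbox_subnet          # image -> boundary box information
        self.gesture_subnet = gesture_subnet    # image + category + box -> mask, rotation, translation

    def forward(self, image):
        category = self.category_subnet(image)
        bbox = self.bbox_subnet(image)
        mask, rotation, translation = self.gesture_subnet(image, category, bbox)
        return {"category": category, "bbox": bbox, "mask": mask,
                "rotation": rotation, "translation": translation}
```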
In one possible implementation, the apparatus may further include:
The pre-training module may be configured to perform a rendering synthesis operation according to the three-dimensional model of the object and preset gesture information to obtain a synthesized two-dimensional image and labeling information of the synthesized two-dimensional image, where the labeling information of the synthesized two-dimensional image includes labeled object category information, labeled boundary box information, the preset gesture information, and a preset synthesis segmentation mask;
predicting the synthesized two-dimensional image through the gesture prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information comprises prediction object type information, prediction boundary box information, a prediction synthesis segmentation mask and prediction synthesis gesture information;
and training the gesture prediction network according to the prediction information and the labeling information of the synthesized two-dimensional image.
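A sketch of one such supervised pre-training step on a synthesized image is given below; the individual loss terms, and the externally supplied `pose_loss` for the gesture (rotation and translation) supervision, are assumptions rather than the disclosure's exact training objective.

```python
import torch.nn.functional as F

def pretraining_step(pose_net, optimizer, synth_image, labels, pose_loss):
    # labels: annotation produced for free by the rendering synthesis operation.
    pred = pose_net(synth_image)
    loss = (F.cross_entropy(pred["category"], labels["category"])       # object category (logits)
            + F.l1_loss(pred["bbox"], labels["bbox"])                   # boundary box
            + F.binary_cross_entropy(pred["mask"], labels["mask"])      # segmentation mask (probabilities)
            + pose_loss(pred["rotation"], pred["translation"],
                        labels["rotation"], labels["translation"]))     # gesture (pose) supervision
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```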
According to an aspect of the present disclosure, there is provided a gesture prediction apparatus, which may include:
A prediction module for predicting the image to be processed through a gesture prediction network to obtain gesture information of a target object in the image to be processed,
where the gesture prediction network is trained by using any one of the network training methods described above.
Thus, according to the gesture prediction device provided by the embodiment of the disclosure, the accuracy of gesture prediction can be improved.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The disclosed embodiments also provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method. The computer readable storage medium may be a non-volatile computer readable storage medium.
The embodiment of the disclosure also provides an electronic device, which comprises: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the instructions stored in the memory to perform the above method.
Embodiments of the present disclosure also provide a computer program product comprising computer readable code which, when run on a device, causes a processor in the device to execute instructions for implementing the network training method and the gesture prediction method provided in any of the embodiments above.
The disclosed embodiments also provide another computer program product for storing computer readable instructions that, when executed, cause a computer to perform the operations of the network training method and the gesture prediction method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server or other form of device.
Fig. 6 shows a block diagram of an electronic device 800, according to an embodiment of the disclosure. For example, electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, or the like.
Referring to fig. 6, an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interactions between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or nonvolatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen between the electronic device 800 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 further includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of the electronic device 800. For example, the sensor assembly 814 may detect an on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 may also detect a change in position of the electronic device 800 or of a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 814 may also include a photosensor, such as a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the electronic device 800 and other devices, either wired or wireless. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In one exemplary embodiment, the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 804 including computer program instructions executable by processor 820 of electronic device 800 to perform the above-described methods.
Fig. 7 illustrates a block diagram of an electronic device 1900 according to an embodiment of the disclosure. For example, electronic device 1900 may be provided as a server. Referring to FIG. 7, electronic device 1900 includes a processing component 1922 that further includes one or more processors and memory resources represented by memory 1932 for storing instructions, such as application programs, that can be executed by processing component 1922. The application programs stored in memory 1932 may include one or more modules each corresponding to a set of instructions. Further, processing component 1922 is configured to execute instructions to perform the methods described above.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in memory 1932, such as the Microsoft server operating system (Windows Server™), the graphical user interface-based operating system from Apple Inc. (Mac OS X™), the multi-user, multi-process computer operating system (Unix™), the free and open source Unix-like operating system (Linux™), the open source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as memory 1932, including computer program instructions executable by processing component 1922 of electronic device 1900 to perform the methods described above.
The present disclosure may be a system, method, and/or computer program product. The computer program product may include a computer readable storage medium having computer readable program instructions embodied thereon for causing a processor to implement aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium would include the following: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanical encoding devices such as punch cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. Computer-readable storage media, as used herein, are not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., optical pulses through fiber optic cables), or electrical signals transmitted through wires.
The computer readable program instructions described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
The computer program instructions for performing the operations of the present disclosure may be assembly instructions, instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as SMALLTALK, C ++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, aspects of the present disclosure are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field Programmable Gate Arrays (FPGAs), or Programmable Logic Arrays (PLAs), with state information of computer readable program instructions, which can execute the computer readable program instructions.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be realized in particular by means of hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium, and in another alternative embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (12)
1. A method of network training, comprising:
Predicting a two-dimensional sample image through a gesture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, wherein the prediction gesture information comprises three-dimensional rotation information and three-dimensional translation information;
According to the predicted gesture information corresponding to the target object, performing a differentiable rendering operation on the three-dimensional model corresponding to the target object to obtain differentiable rendering information corresponding to the target object; the differentiable rendering operation is a rendering operation performed by a differentiable renderer; the differentiable rendering information corresponding to the target object comprises: a rendering segmentation mask, a rendered two-dimensional image, and a rendered depth image;
Determining self-supervision training total loss of the gesture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information;
And training the gesture prediction network according to the self-supervision training total loss.
2. The method of claim 1, wherein determining the self-supervision training total loss of the gesture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image, and the differentiable rendering information comprises:
Determining a first self-supervising training loss according to the two-dimensional sample image and the rendered two-dimensional image;
Determining a second self-supervising training loss according to the prediction segmentation mask and the rendering segmentation mask;
Determining a third self-supervision training loss according to the depth image corresponding to the two-dimensional sample image and the rendering depth image;
And determining the self-supervision training total loss of the gesture prediction network according to the first self-supervision training loss, the second self-supervision training loss and the third self-supervision training loss.
3. The method of claim 2, wherein the determining a first self-supervising training loss from the two-dimensional sample image and the rendered two-dimensional image comprises:
after the two-dimensional sample image and the rendered two-dimensional image are respectively converted into the LAB color space, determining a first image loss by using a first loss function according to the converted two-dimensional sample image, the converted rendered two-dimensional image, and the prediction segmentation mask;
Determining a second image loss by using a second loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the second loss function is a loss function based on the multi-scale structural similarity index;
Determining a third image loss by using a third loss function according to the two-dimensional sample image, the rendered two-dimensional image and the prediction segmentation mask, wherein the third loss function is a loss function based on multi-scale feature distances of a deep convolutional neural network;
determining the first self-supervising training loss according to the first image loss, the second image loss and the third image loss.
4. A method according to claim 2 or 3, wherein said determining a second self-supervising training loss from the prediction segmentation mask and the rendering segmentation mask comprises:
and determining a second self-supervision training loss by adopting a cross entropy loss function according to the prediction segmentation mask and the rendering segmentation mask.
5. The method of any of claims 2 to 4, wherein the determining a third self-supervising training loss from the depth image corresponding to the two-dimensional sample image and the rendered depth image comprises:
respectively performing back projection operation on a depth image corresponding to the two-dimensional sample image and the rendering depth image to obtain point cloud information corresponding to the depth image and point cloud information corresponding to the rendering depth image;
and determining a third self-supervision training loss according to the point cloud information corresponding to the depth image and the point cloud information corresponding to the rendering depth image.
6. The method according to any one of claims 1 to 5, wherein the gesture prediction network comprises: a category prediction sub-network, a bounding box prediction sub-network, and a gesture prediction sub-network,
The predicting the two-dimensional sample image through the gesture predicting network to obtain a prediction segmentation mask corresponding to the target object in the two-dimensional sample image and prediction gesture information corresponding to the target object comprises the following steps:
Predicting a two-dimensional sample image through the category prediction sub-network to obtain category information corresponding to a target object in the two-dimensional sample image;
Predicting a two-dimensional sample image through the bounding box prediction sub-network to obtain boundary box information corresponding to a target object in the two-dimensional sample image;
And processing the two-dimensional sample image, the category information and the boundary box information through the gesture prediction sub-network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object.
7. The method of claim 6, wherein prior to predicting the two-dimensional sample image by the pose prediction network, the method further comprises:
rendering and synthesizing operation is carried out according to the three-dimensional model of the object and the preset gesture information, so that a synthesized two-dimensional image and the labeling information of the synthesized two-dimensional image are obtained, wherein the labeling information of the synthesized two-dimensional image comprises labeling object category information, labeling boundary box information, preset gesture information and preset synthesized segmentation masks;
predicting the synthesized two-dimensional image through the gesture prediction network to obtain prediction information of the synthesized two-dimensional image, wherein the prediction information comprises prediction object type information, prediction boundary box information, a prediction synthesis segmentation mask and prediction synthesis gesture information;
and training the gesture prediction network according to the prediction information and the labeling information of the synthesized two-dimensional image.
8. A method of gesture prediction, the method comprising:
the method comprises the steps of carrying out prediction processing on an image to be processed through a gesture prediction network to obtain gesture information of a target object in the image to be processed,
Wherein the gesture prediction network is trained by the network training method according to any one of claims 1 to 7.
9. A network training device, comprising:
The prediction module is used for predicting the two-dimensional sample image through a gesture prediction network to obtain a prediction segmentation mask corresponding to a target object in the two-dimensional sample image and prediction gesture information corresponding to the target object, wherein the prediction gesture information comprises three-dimensional rotation information and three-dimensional translation information;
The rendering module is used for performing a differentiable rendering operation on the three-dimensional model corresponding to the target object according to the predicted gesture information corresponding to the target object to obtain differentiable rendering information corresponding to the target object; the differentiable rendering operation is a rendering operation performed by a differentiable renderer; the differentiable rendering information corresponding to the target object comprises: a rendering segmentation mask, a rendered two-dimensional image, and a rendered depth image;
the determining module is used for determining the self-supervision training total loss of the gesture prediction network according to the two-dimensional sample image, the prediction segmentation mask, the depth image corresponding to the two-dimensional sample image and the differentiable rendering information;
And the self-supervision training module is used for training the gesture prediction network according to the self-supervision training total loss.
10. A gesture prediction device, characterized in that the device comprises:
A prediction module for predicting the image to be processed through a gesture prediction network to obtain gesture information of a target object in the image to be processed,
Wherein the gesture prediction network is trained by the network training method according to any one of claims 1 to 7.
11. An electronic device, comprising:
A processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method of any of claims 1 to 8.
12. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010638037.2A CN111783986B (en) | 2020-07-02 | 2020-07-02 | Network training method and device, and gesture prediction method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010638037.2A CN111783986B (en) | 2020-07-02 | 2020-07-02 | Network training method and device, and gesture prediction method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111783986A CN111783986A (en) | 2020-10-16 |
CN111783986B true CN111783986B (en) | 2024-06-14 |
Family
ID=72759605
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010638037.2A Active CN111783986B (en) | 2020-07-02 | 2020-07-02 | Network training method and device, and gesture prediction method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111783986B (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112508007B (en) * | 2020-11-18 | 2023-09-29 | 中国人民解放军战略支援部队航天工程大学 | Space target 6D attitude estimation method based on image segmentation Mask and neural rendering |
CN112529913B (en) * | 2020-12-14 | 2024-10-29 | 北京达佳互联信息技术有限公司 | Image segmentation model training method, image processing method and device |
CN112529917A (en) * | 2020-12-22 | 2021-03-19 | 中国第一汽车股份有限公司 | Three-dimensional target segmentation method, device, equipment and storage medium |
CN114758334A (en) * | 2020-12-29 | 2022-07-15 | 华为技术有限公司 | Object registration method and device |
CN113592876B (en) * | 2021-01-14 | 2024-09-06 | 腾讯科技(深圳)有限公司 | Training method, device, computer equipment and storage medium for split network |
CN112884022B (en) * | 2021-01-29 | 2021-11-12 | 浙江师范大学 | An unsupervised deep representation learning method and system based on image translation |
CN113065546B (en) * | 2021-02-25 | 2022-08-12 | 湖南大学 | A target pose estimation method and system based on attention mechanism and Hough voting |
CN112926461B (en) * | 2021-02-26 | 2024-04-19 | 商汤集团有限公司 | Neural network training, driving control method and device |
CN113256574B (en) * | 2021-05-13 | 2022-10-25 | 中国科学院长春光学精密机械与物理研究所 | Three-dimensional target detection method |
CN113470124B (en) * | 2021-06-30 | 2023-09-22 | 北京达佳互联信息技术有限公司 | Training method and device for special effect model, and special effect generation method and device |
CN114359303B (en) * | 2021-12-28 | 2024-12-24 | 浙江大华技术股份有限公司 | Image segmentation method and device |
CN114511811A (en) * | 2022-01-28 | 2022-05-17 | 北京百度网讯科技有限公司 | Video processing method, video processing device, electronic equipment and medium |
CN116824016A (en) * | 2022-03-18 | 2023-09-29 | 华为技术有限公司 | Rendering model training, video rendering methods, devices, equipment and storage media |
CN114882301B (en) * | 2022-07-11 | 2022-09-13 | 四川大学 | Self-supervised learning medical image recognition method and device based on region of interest |
CN116681755B (en) * | 2022-12-29 | 2024-02-09 | 广东美的白色家电技术创新中心有限公司 | Pose prediction method and device |
CN118629084A (en) * | 2023-03-10 | 2024-09-10 | 北京字跳网络技术有限公司 | Method for constructing object posture recognition model, object posture recognition method and device |
CN118274786B (en) * | 2024-05-31 | 2024-08-13 | 四川宏大安全技术服务有限公司 | Buried pipeline settlement monitoring method and system based on Beidou coordinates |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105474273A (en) * | 2013-07-25 | 2016-04-06 | 微软技术许可有限责任公司 | Late stage reprojection |
CN109215080A (en) * | 2018-09-25 | 2019-01-15 | 清华大学 | 6D Attitude estimation network training method and device based on deep learning Iterative matching |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9443355B2 (en) * | 2013-06-28 | 2016-09-13 | Microsoft Technology Licensing, Llc | Reprojection OLED display for augmented reality experiences |
CN108229489B (en) * | 2016-12-30 | 2020-08-11 | 北京市商汤科技开发有限公司 | Key point prediction method, network training method, image processing method, device and electronic equipment |
US11676296B2 (en) * | 2017-08-11 | 2023-06-13 | Sri International | Augmenting reality using semantic segmentation |
CN109872343B (en) * | 2019-02-01 | 2020-03-17 | 视辰信息科技(上海)有限公司 | Weak texture object posture tracking method, system and device |
- 2020-07-02 CN CN202010638037.2A patent/CN111783986B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN111783986A (en) | 2020-10-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111783986B (en) | Network training method and device, and gesture prediction method and device | |
CN111310616B (en) | Image processing method and device, electronic equipment and storage medium | |
CN109816611B (en) | Video repair method and device, electronic equipment and storage medium | |
CN110674719B (en) | Target object matching method and device, electronic equipment and storage medium | |
CN113822918B (en) | Scene depth and camera motion prediction method and device, electronic equipment and medium | |
CN109145970B (en) | Image-based question and answer processing method and device, electronic equipment and storage medium | |
CN113486765A (en) | Gesture interaction method and device, electronic equipment and storage medium | |
CN111401230B (en) | Gesture estimation method and device, electronic equipment and storage medium | |
CN114445562A (en) | Three-dimensional reconstruction method and device, electronic device and storage medium | |
CN112991381B (en) | Image processing method and device, electronic equipment and storage medium | |
CN113052874B (en) | Target tracking method and device, electronic equipment and storage medium | |
CN112785672B (en) | Image processing method and device, electronic equipment and storage medium | |
WO2023051356A1 (en) | Virtual object display method and apparatus, and electronic device and storage medium | |
CN113806054A (en) | Task processing method and device, electronic equipment and storage medium | |
CN114066856A (en) | Model training method and device, electronic equipment and storage medium | |
CN110706339A (en) | Three-dimensional face reconstruction method and device, electronic equipment and storage medium | |
CN109903252B (en) | Image processing method and device, electronic equipment and storage medium | |
CN112529846A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109635926B (en) | Attention feature acquisition method and device for neural network and storage medium | |
CN114581525A (en) | Attitude determination method and device, electronic device and storage medium | |
CN114463212A (en) | Image processing method and device, electronic device and storage medium | |
CN111882558B (en) | Image processing method and device, electronic device and storage medium | |
CN111311588B (en) | Repositioning method and device, electronic equipment and storage medium | |
CN106875446A (en) | Camera method for relocating and device | |
CN110929616B (en) | Human hand identification method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||