
CN111862241A - Human body alignment method and device - Google Patents


Info

Publication number: CN111862241A
Authority: CN (China)
Prior art keywords: human body, image, coordinate information, projection
Legal status: Granted; Active
Application number: CN202010739570.8A (filed 2020-07-28 by Hangzhou Youchain Times Technology Co Ltd)
Priority date: 2020-07-28
Other languages: Chinese (zh)
Other versions: CN111862241B (en)
Inventors: 蒋亚洪, 潘永路
Current and original assignee: Hangzhou Youchain Times Technology Co Ltd
Publication of CN111862241A: 2020-10-30
Application granted; publication of CN111862241B: 2024-04-12


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30204: Marker
    • G06T 2207/30208: Marker matrix

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a human body alignment method and device comprising the following steps: acquiring human body image information shot by a cloud camera; acquiring projected human body image information; generating a difference image from the pre-projection and post-projection human body images; decolorizing the difference image to obtain a binary difference image; decoding the binary difference image to obtain a feature point set; acquiring first coordinate information; acquiring second coordinate information; obtaining the internal reference matrix of the cloud camera by using the first coordinate information and the second coordinate information; obtaining the external reference matrix of the cloud camera by using the first coordinate information and the second coordinate information; and storing the internal reference matrix and the external reference matrix. The invention obtains camera parameters with higher accuracy and improves the efficiency of human body alignment.

Description

Human body alignment method and device
[Technical Field]
The invention relates to the technical field of machine vision, in particular to a human body alignment method and device.
[Background of the Invention]
Human body alignment is the process of establishing the positional relationship between pixel points on the camera image plane and scene points on the human body, and of obtaining the camera parameters from that relationship, so that subsequent human body image processing can be performed more effectively. In practice, however, because of the camera's own limitations and various environmental factors, the captured image may not provide enough information: an accurate correspondence between the two-dimensional image information and the real human body target information cannot be established, so the camera parameters cannot be computed accurately. To obtain camera parameters correctly and effectively from a two-dimensional image of a human body, aligning the human body images is a problem that must be solved. Prior-art methods suffer from complex computation, low alignment efficiency, poor performance in specific environments, and low accuracy of the obtained camera parameters.
[Summary of the Invention]
The invention aims to solve the problems of prior-art human body alignment methods: complex computation, low alignment efficiency, poor performance in specific environments, and low accuracy of the obtained camera parameters.
To this end, the invention provides a human body alignment method and a human body alignment device.
The human body alignment method includes:
acquiring human body image information shot by a cloud camera, wherein the human body image information comprises a pre-projection human body image;
acquiring projected human body image information, wherein the projected human body image information comprises a post-projection human body image shot by a cloud camera after a coding pattern is projected onto the surface of the human body by four projectors, the four projectors are located on the four vertices of a regular quadrangle centered on the human body, the projection planes of the two projectors located on a diagonal of the regular quadrangle are parallel to each other and perpendicular to that diagonal, and the coding pattern completely covers the human body and makes the texture of each region of the human body different;
generating a difference image M_diff from the pre-projection and post-projection human body images, where M_diff = M_post - M_pre, the pre-projection and post-projection human body images are acquired by the same cloud camera, M_post denotes the post-projection human body image, and M_pre denotes the pre-projection human body image;
decolorizing the difference image to obtain a binary difference image;
decoding the binary difference image to obtain a feature point set, wherein the feature point set comprises a plurality of feature points;
acquiring first coordinate information, wherein the first coordinate information is the coordinate information of a feature point on the binary difference image;
acquiring second coordinate information, wherein the second coordinate information is the coordinate information of the feature point on the image plane of the projector;
obtaining the internal reference matrix of the cloud camera by using the first coordinate information and the second coordinate information;
obtaining the external reference matrix of the cloud camera by using the first coordinate information and the second coordinate information, wherein the external reference matrix comprises a rotation matrix R and a translation matrix T;
and storing the internal reference matrix and the external reference matrix.
Furthermore, the cloud cameras are fixed on fixing frames at equal vertical spacing; the fixing frames are located on the eight vertices of a regular octagon, and the image plane of each cloud camera is perpendicular to a center line, where the center line connects the octagon vertex on which the cloud camera is located to the center of the regular octagon.
Further, decolorizing the difference image to obtain the binary difference image specifically comprises: applying a random number R to every feature point of the difference image, where if R is less than 5 the feature point is made black, and if R is greater than 5 the feature point is made white.
Further, the coding pattern is randomly distributed, and the minimum units of the coding pattern differ in both the horizontal and vertical directions.
Further, obtaining the external reference matrix of the camera by using the first coordinate information and the second coordinate information specifically comprises:
obtaining third coordinate information from the first coordinate information and the second coordinate information by using the triangle relationship, wherein the third coordinate information is the three-dimensional coordinate information of the feature point in the scene;
and obtaining the external reference matrix of the camera from the first coordinate information, the second coordinate information, and the third coordinate information.
Further, the light projected by the projector can only illuminate one surface of the human body.
Further, the human body stands in a posture with both arms open at 30 degrees and both legs open at 15 degrees and remains still for one second.
A human body alignment device, the device comprising:
a first image information acquisition module configured to acquire human body image information shot by a cloud camera, the human body image information comprising a pre-projection human body image;
a second image information acquisition module configured to acquire projected human body image information, the projected human body image information comprising a post-projection human body image shot by a cloud camera after a coding pattern is projected onto the surface of the human body by four projectors, the four projectors being located on the four vertices of a regular quadrangle centered on the human body, the projection planes of the two projectors located on a diagonal of the regular quadrangle being parallel to each other and perpendicular to that diagonal, and the coding pattern completely covering the human body and making the texture of each region of the human body different;
a difference image generation module configured to generate a difference image M_diff from the pre-projection and post-projection human body images, where M_diff = M_post - M_pre, the pre-projection and post-projection human body images are acquired by the same cloud camera, M_post denotes the post-projection human body image, and M_pre denotes the pre-projection human body image;
a decolorizing module configured to decolorize the difference image to obtain a binary difference image;
a decoding module configured to decode the binary difference image to obtain a feature point set, the feature point set comprising a plurality of feature points;
a first coordinate acquisition module configured to acquire first coordinate information, the first coordinate information being the coordinate information of a feature point on the binary difference image;
a second coordinate acquisition module configured to acquire second coordinate information, the second coordinate information being the coordinate information of the feature point on the projector image plane;
a first calculation module configured to calculate the internal reference matrix of the cloud camera by using the first coordinate information and the second coordinate information;
a second calculation module configured to obtain the external reference matrix of the cloud camera by using the first coordinate information and the second coordinate information, the external reference matrix comprising a rotation matrix R and a translation matrix T;
a storage module configured to store the internal reference matrix and the external reference matrix.
Further, the decolorizing module further comprises: a random number module configured to apply a random number R to every feature point of the difference image, making the feature point black if R is less than 5 and white if R is greater than 5.
The invention has the beneficial effects that:
according to the human body alignment method and the human body alignment device, the cloud camera is used for shooting the human body which is not projected and the human body which is projected in different coding patterns and randomly changes in the horizontal and vertical directions to carry out human body alignment and acquisition of camera parameters. Generating the cross image according to the human body image before projection and the human body image after projection only needs to calculate the matrix represented by the human body image after projection and the matrix represented by the human body image before projection to generate cross information, the whole calculation process is simple and quick, the ground color of the human body image is removed, the interference of the ground color of the human body image on the characteristic point extraction matching process is eliminated, and the speed and the efficiency in human body alignment are improved.
In order to obtain camera parameters with higher accuracy, global binarization operation is carried out on an intersection image generated by a human body image before projection and a human body image after projection by using a random number R, so that each pixel point in the intersection image can be correctly decoded, the alignment calculation efficiency is improved, enough information can be extracted under the condition that the characteristics of the human body surface are not obvious, and the camera parameters can be accurately calculated by using the information.
The cloud camera arrays are regularly arranged, the image plane of each cloud camera is perpendicular to the connecting line of the projection point of the cloud camera on the horizontal plane and the center of the regular octagon, operation of camera parameters in human body alignment is simplified through the setting, and human body alignment efficiency is greatly improved.
The features and advantages of the present invention will be described in detail by embodiments in conjunction with the accompanying drawings.
[Description of the Drawings]
FIG. 1 is a flow chart of a human body alignment method according to an embodiment of the present invention;
FIG. 2 is a pre-projection human image taken from one perspective by a cloud camera in an embodiment of the invention;
FIG. 3 is a projected human image taken from one perspective by a cloud camera in an embodiment of the invention;
FIG. 4 is a schematic diagram of a binary difference image according to an embodiment of the present invention;
FIG. 5 is a flowchart of obtaining the external reference matrix of a camera using the first coordinate information and the second coordinate information according to an embodiment of the present invention;
FIG. 6 is a block diagram of the human body alignment device according to an embodiment of the present invention;
FIG. 7 is a block diagram of the random number module in the decolorizing module according to an embodiment of the present invention.
[Detailed Description of Embodiments]
Camera alignment refers to establishing the positional relationship between pixel points on the camera image plane and scene points. According to the camera imaging model, the parameters of the model are obtained from the correspondence of feature points between image coordinates and scene coordinates. The parameters of the camera imaging model include internal parameters and external parameters.
In one embodiment, a human body alignment method is provided. Referring to fig. 1, the human body alignment method specifically includes the steps of:
s101, human body image information shot by the cloud camera is obtained.
Wherein the human body image information comprises a human body image before projection.
In the present embodiment, fig. 2 shows a pre-projection human body image captured from one perspective by a cloud camera; no coding pattern is projected onto the surface of the human body. Human body images shot by different cloud cameras at different angles differ slightly in detail, mainly in that any given three-dimensional human body feature point (i.e., scene point) appears at a different position, with different two-dimensional coordinates, in the pictures shot by different cloud cameras.
In other embodiments, the person stands in a posture with both arms open at 30 degrees and both legs open at 15 degrees and remains still for one second. The captured pre-projection human body image therefore contains no motion ghosting, and every feature point is sharp and correctly positioned, which benefits the subsequent processing of the human body image.
In other embodiments, the plurality of cloud cameras are fixed on fixing frames at equal vertical spacing, with the same number of cloud cameras on each fixing frame; the fixing frames are located on the eight vertices of a regular octagon, and the image plane of each cloud camera is perpendicular to a center line, where the center line connects the octagon vertex on which the cloud camera is located to the center of the regular octagon. The cloud camera array is thus arranged regularly, with each camera's image plane perpendicular to the line connecting the camera's projection on the horizontal plane to the center of the regular octagon; this arrangement simplifies the computation of camera parameters during human body alignment and greatly improves alignment efficiency.
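To make the rig geometry described above concrete, the following is a minimal sketch in Python of placing fixing frames on the eight vertices of a regular octagon and aiming each cloud camera's optical axis at the center; the radius and mounting heights are illustrative assumptions, not values from the patent.

```python
import numpy as np

def octagon_rig(radius=2.5, heights=(0.6, 1.2, 1.8)):
    """Place fixing frames on the 8 vertices of a regular octagon and aim
    every cloud camera's optical axis at the octagon center, so that each
    image plane is perpendicular to the vertex-to-center line.
    radius (m) and mounting heights (m) are illustrative assumptions."""
    cameras = []
    for k in range(8):
        theta = 2 * np.pi * k / 8                    # vertex angle
        post = np.array([radius * np.cos(theta), radius * np.sin(theta)])
        axis = -post / np.linalg.norm(post)          # toward the center
        for h in heights:                            # equal vertical spacing
            cameras.append({"position": (*post, h),
                            "optical_axis": (*axis, 0.0)})
    return cameras
```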
In other embodiments, the plurality of cloud cameras acquire images of the human body from multiple angles, and the pre-projection human body images are stored by camera serial number, which facilitates the subsequent processing of the human body images.
S102, acquiring projected human body image information.
The projected human body image information comprises a post-projection human body image shot by a cloud camera after a coding pattern is projected onto the surface of the human body by four projectors. The four projectors are located on the four vertices of a regular quadrangle centered on the human body; the projection planes of the two projectors located on a diagonal of the regular quadrangle are parallel to each other and perpendicular to that diagonal; and the coding pattern completely covers the human body and makes the texture of each region of the human body different.
In the present embodiment, fig. 3 shows a post-projection human body image shot from one perspective by a cloud camera; the coding pattern completely covers the human body and makes the texture of each region of the human body different. The coding pattern is used to help determine the correspondence of feature points between the camera and the projector.
In other embodiments, the human body can be regarded approximately as a cuboid, and the four projectors project coding patterns onto the opposing faces of this cuboid, with the light from each projector illuminating only one face of the human body. The coding patterns on the projection planes (DMDs) of the four projectors are different, so the patterns projected onto the four faces of the human body are different. This avoids confusion in the one-to-one correspondence among the three kinds of feature points, namely a three-dimensional scene point (human body feature point), the feature point on the camera image plane, and the corresponding feature point of the coding pattern on the projection plane, and it facilitates the subsequent extraction of feature points.
S103, generating a difference image M_diff from the pre-projection and post-projection human body images.
Here M_diff = M_post - M_pre, where the pre-projection and post-projection human body images are acquired by the same cloud camera, M_post denotes the post-projection human body image, and M_pre denotes the pre-projection human body image. The difference image is the post-projection human body image minus the pre-projection human body image.
In the present embodiment, the pre-projection human body image, the post-projection human body image, and the difference image can each be represented as a matrix, for example an m x n matrix M = (m_ij) whose element m_ij is the pixel in row i and column j. A pixel of the pre-projection image, the post-projection image, or the difference image is thus an element m of the corresponding matrix. Generating the difference image from the two human body images only requires subtracting the matrix representing the pre-projection image from the matrix representing the post-projection image; the whole computation is simple and fast, the base color of the human body image is removed, and interference of the base color with feature point extraction and matching is eliminated.
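As a hedged sketch of this subtraction, assuming the two images are grayscale arrays of identical size captured by the same cloud camera (the function and variable names are illustrative):

```python
import numpy as np

def difference_image(m_post: np.ndarray, m_pre: np.ndarray) -> np.ndarray:
    """M_diff = M_post - M_pre: subtracting the pre-projection image removes
    the base color of the body, leaving mainly the projected pattern.
    Signed arithmetic avoids uint8 wraparound; negatives are clipped to 0."""
    diff = m_post.astype(np.int16) - m_pre.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```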
S104, decolorizing the difference image to obtain a binary difference image.
In this embodiment, to obtain more accurate, higher-resolution feature point extraction and matching results, a binarization operation is applied to the difference image generated from the pre-projection and post-projection human body images, so that every pixel in the difference image can be decoded correctly. A random number R is applied to every pixel of the difference image: if R is less than 5 the pixel is made black, and if R is greater than 5 the pixel is made white.
The environment in which a human body is photographed with a cloud camera is often unknown and complex. For example, the same projected light illuminates a black surface with lower brightness than a white surface, which means the difference image generated from the pre-projection and post-projection human body images has different gray values at different parts of the body. Because the surface information of the human body in the three-dimensional scene environment cannot be predicted in advance, decoding the coding pattern and extracting and matching feature points is often difficult, leading to low resolution and accuracy. Applying the random number function to all pixels of the difference image makes the local texture of the human body surface vary sharply and appear random globally, which greatly improves the decoding accuracy and greatly reduces the computational complexity and the time required for decoding.
In other embodiments, a global gray threshold is set: pixels of the difference image whose gray value is above the threshold are set to 1 (displayed as white), and pixels whose gray value is below the threshold are set to 0 (displayed as black).
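Both binarization variants can be sketched as follows; the threshold value and the sampling range of the random number R are assumptions, since the text does not fix them:

```python
import numpy as np

def binarize_threshold(diff: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Global-threshold variant: gray values above the threshold become 1
    (white), values below become 0 (black). Threshold 128 is illustrative."""
    return (diff > threshold).astype(np.uint8)

def binarize_random(diff: np.ndarray, rng=None) -> np.ndarray:
    """Literal reading of the random-number rule: draw R per pixel and map
    R < 5 to black and R > 5 to white. The draw range [0, 10) is an
    assumption; the rule as stated does not depend on the gray value."""
    rng = np.random.default_rng() if rng is None else rng
    r = rng.uniform(0.0, 10.0, size=diff.shape)
    return (r > 5.0).astype(np.uint8)
```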
S105, decoding the binary difference image to obtain a feature point set.
The feature point set comprises a plurality of feature points. Fig. 4 is a schematic diagram of a binary difference image; binarizing the difference image greatly reduces the computational complexity and the time required for decoding, and feature points are obtained more easily. The feature points on the binary difference image and the coding pattern on the projector plane (DMD) are in one-to-one correspondence. By decoding the binary difference image, the human body alignment device can establish the one-to-one correspondence between the feature points on the binary difference image and the feature points on the projector image plane.
In this embodiment, the coding pattern within each grid cell is random and differs from cell to cell; that is, the minimum units of the coding pattern differ in both the horizontal and vertical directions, so each minimum unit has a unique feature value horizontally and vertically. The coding pattern is projected onto the human body by the projector, and the difference image is the projection of the coding pattern on the DMD onto the camera image plane; decoding the binary difference image therefore means extracting the feature value of each minimum unit of the binary difference image. When the coding pattern is projected onto the human body, the features of every part are guaranteed to differ, so the feature points are distinct and easy to extract, which reduces the computational complexity and the time required for feature point extraction.
In other embodiments, the detail of the coding pattern differs at different positions on the human body; for example, the coding pattern projected onto a certain feature point on the left shoulder differs from the coding pattern on every other part of the body. The purpose is to establish a one-to-one correspondence between the coding pattern and the feature points of the binary difference image, so as to better determine and match the feature points in different images. By decoding the binary difference image, one can know which pixel of the projector DMD emitted each feature point of the binary difference image, and hence the imaging position of the human body surface on the virtual projector image.
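The decoding step itself is not specified in detail; the sketch below only illustrates the bookkeeping that follows it, assuming a hypothetical decoder has already reduced each minimum unit of the binary difference image to its unique code:

```python
def match_features(camera_cells, projector_codebook):
    """camera_cells: iterable of (u_C, v_C, code) tuples decoded from the
    binary difference image. projector_codebook: dict mapping each unique
    code to its (u_P, v_P) center on the DMD, built from the known pattern.
    Returns matched (first, second) coordinate pairs."""
    matches = []
    for u_c, v_c, code in camera_cells:
        if code in projector_codebook:     # codes are unique by construction
            matches.append(((u_c, v_c), projector_codebook[code]))
    return matches
```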
S106, acquiring the first coordinate information.
The first coordinate information is the coordinate information of a feature point on the binary difference image.
In this embodiment, [u_C, v_C] denotes the coordinates of a feature point on the binary difference image; these are also the coordinates of the center point of a minimum unit of the binary difference image. The coordinate origin can be the top-left vertex of the binary difference image, with the X axis extending horizontally from the origin and the Y axis extending vertically from the origin. Each minimum unit of the binary difference image has a uniquely determined coordinate value.
S107, acquiring the second coordinate information.
The second coordinate information is the coordinate information of the feature point on the projector image plane; the feature points of the projector image plane are matched with the feature points of each binary difference image.
In this embodiment, [u_P, v_P] denotes the coordinates of the feature point on the projector image plane (DMD); these are also the coordinates of the center point of the corresponding minimum unit of the coding pattern on the DMD. The coordinate origin can be the top-left vertex of the projector image plane, with the X axis extending horizontally from the origin and the Y axis extending vertically from the origin. Each minimum unit of the coding pattern on the DMD has a uniquely determined coordinate value.
S108, calculating the internal reference matrix of the cloud camera by using the first coordinate information and the second coordinate information.
The internal reference matrix transforms camera coordinates into image coordinates and can be written as

A = [ fu   y   u0
      0    fv  v0
      0    0   1  ]

where fu and fv are the focal lengths, in pixels, along the u and v coordinate directions. The focal length is the distance from the optical center of the lens to the point where parallel incident light converges, and it reflects the light-gathering ability of the lens; in the camera projection model, the focal length is the distance from the aperture (lens) to the image plane. y is a magnification (skew) factor, and (u0, v0) is the principal point, i.e., the image coordinates of the optical axis.
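A minimal sketch of assembling this matrix (parameter names follow the text; the skew term defaulting to zero is a common assumption):

```python
import numpy as np

def internal_reference_matrix(fu, fv, u0, v0, y=0.0):
    """Pinhole intrinsic matrix A: fu, fv are the focal lengths in pixels
    along u and v, (u0, v0) is the principal point, y the skew term."""
    return np.array([[fu,  y,   u0],
                     [0.0, fv,  v0],
                     [0.0, 0.0, 1.0]])
```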
S109, obtaining the external reference matrix of the cloud camera by using the first coordinate information and the second coordinate information.
The external reference matrix comprises a rotation matrix

R = [ r11  r12  r13
      r21  r22  r23
      r31  r32  r33 ]

and a translation matrix

T = [ t1
      t2
      t3 ]

so the external reference matrix can be written as [R|T]: on the left a 3 x 3 rotation matrix, on the right a 3 x 1 translation column vector.
In the present embodiment, the external reference matrix of the cloud camera describes the position of the cloud camera in world coordinates and the direction in which it points. The external reference matrix transforms three-dimensional world coordinates into cloud camera coordinates: the translation vector T gives the position of the world coordinate origin expressed in cloud camera coordinates, and the columns of R give the directions of the three-dimensional world coordinate axes expressed in cloud camera coordinates.
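A short sketch of what [R|T] does under the convention just described (the camera-center formula is the standard consequence of that convention):

```python
import numpy as np

def world_to_camera(R: np.ndarray, T: np.ndarray, x_world: np.ndarray) -> np.ndarray:
    """Apply the external reference matrix [R|T]: rotate a world point into
    the cloud-camera frame, then translate by T."""
    return R @ x_world + T

def camera_center_in_world(R: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Position of the cloud camera in world coordinates: C = -R^T T."""
    return -R.T @ T
```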
S110, storing the internal reference matrix and the external reference matrix.
In this embodiment, the internal reference matrix and the external reference matrix of each cloud camera are stored in the storage unit corresponding to that cloud camera.
In an embodiment, as shown in fig. 5, obtaining the external reference matrix of the camera by using the first coordinate information and the second coordinate information may specifically comprise:
S510, obtaining third coordinate information from the first coordinate information and the second coordinate information by using the triangle relationship.
The third coordinate information is the three-dimensional coordinate information of the feature point in the scene.
In this embodiment, a three-dimensional scene point, its projected point on the two-dimensional camera image plane (CCD), and its projected pixel on the projector image plane (DMD) form a triangle. Oc and Op are, respectively, the focal point of the cloud camera lens and the focal point of the projector image plane, i.e., the point where all rays converge. For a given three-dimensional scene point, its point on the cloud camera image plane and the two focal points Oc, Op form a plane; this plane intersects the image plane of the cloud camera and the image plane of the projector at Ec and Ep, respectively. From the coordinates [u_C, v_C] of the feature point on the binary difference image, the coordinate information [u_P, v_P] on the projector image plane, and the triangular geometric relationship between them and the three-dimensional scene point, the coordinates of the three-dimensional scene point are obtained. The coordinates of a three-dimensional scene point can be written as [x_W, y_W, z_W].
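One common way to realize this triangle relationship is linear (DLT) triangulation; the sketch below assumes the 3x4 projection matrices A[R|T] of the cloud camera and the projector are available (for example, the projector's from a prior calibration), which the text does not spell out:

```python
import numpy as np

def triangulate(P_cam, P_proj, uv_cam, uv_proj):
    """Intersect the camera ray through [u_C, v_C] with the projector ray
    through [u_P, v_P]. P_cam, P_proj: 3x4 projection matrices A[R|T].
    Returns the scene point [x_W, y_W, z_W]."""
    def constraint_rows(P, uv):
        u, v = uv
        return np.array([u * P[2] - P[0],    # u * (row3 . X) = row1 . X
                         v * P[2] - P[1]])   # v * (row3 . X) = row2 . X
    A = np.vstack([constraint_rows(P_cam, uv_cam),
                   constraint_rows(P_proj, uv_proj)])
    _, _, Vt = np.linalg.svd(A)              # null vector of the 4x4 system
    X = Vt[-1]
    return X[:3] / X[3]                      # dehomogenize
```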
S520, obtaining the external reference matrix of the camera from the first coordinate information, the second coordinate information, and the third coordinate information.
In this embodiment, the following formulas are used:

s_C [u_C, v_C, 1]^T = A_C [R_C|T_C] [x_W, y_W, z_W, 1]^T
s_P [u_P, v_P, 1]^T = A_P [R_P|T_P] [x_W, y_W, z_W, 1]^T

where the subscripts C and P denote the cloud camera and the projector, respectively. From the known first coordinate information [u_C, v_C] and second coordinate information [u_P, v_P], the simultaneous equations can be solved for the internal reference matrix A, giving the external reference matrix [R_C|T_C] of the camera and the external reference matrix [R_P|T_P] of the projector.
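In practice such simultaneous projection equations are usually solved with an off-the-shelf routine; the following is a hedged sketch using OpenCV's calibrateCamera rather than the patent's own solver (note that OpenCV needs an initial intrinsic guess for non-planar point sets):

```python
import cv2
import numpy as np

def recover_camera_parameters(world_pts, image_pts, image_size):
    """world_pts: list of Nx3 float32 arrays of third coordinates
    [x_W, y_W, z_W] (one array per view); image_pts: matching Nx2 float32
    arrays of first coordinates [u_C, v_C]. Solves the projection equation
    s [u, v, 1]^T = A [R|T] [x_W, y_W, z_W, 1]^T for A and per-view [R|T]."""
    rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
        world_pts, image_pts, image_size, None, None)
    R, _ = cv2.Rodrigues(rvecs[0])   # rotation vector -> 3x3 rotation matrix
    T = tvecs[0]                     # 3x1 translation vector
    return A, R, T
```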
As shown in fig. 6, in one embodiment, there is provided a human body alignment device, including:
a first image information obtaining module 600 configured to obtain human body image information photographed by a cloud camera, the human body image information including a pre-projection human body image;
a second image information obtaining module 610 configured to obtain projected human body image information including a projected human body image photographed by a cloud camera after a coded pattern is projected onto a surface of a human body by four projectors, the four projectors being located on four vertices of a regular quadrangle centering on the human body, projection planes of the two projectors located on a diagonal of the regular quadrangle being parallel to each other and perpendicular to the diagonal, the coded pattern completely covering the human body and making textures of each region of the human body different;
a difference image generation module 620 configured to generate a difference image M_diff from the pre-projection and post-projection human body images, where M_diff = M_post - M_pre, the pre-projection and post-projection human body images are acquired by the same cloud camera, M_post denotes the post-projection human body image, and M_pre denotes the pre-projection human body image;
a decolorizing module 630 configured to decolorize the difference image to obtain a binary difference image;
a decoding module 640 configured to decode the binary difference image to obtain a feature point set, the feature point set including a plurality of feature points;
a first coordinate obtaining module 650 configured to obtain first coordinate information, the first coordinate information being the coordinate information of a feature point on the binary difference image;
a second coordinate obtaining module 660 configured to obtain second coordinate information, which is coordinate information of the feature point on the projector image plane;
a first computing module 670 configured to compute a cloud camera internal reference matrix using the first coordinate information and the second coordinate information;
a second calculation module 680 configured to obtain the external reference matrix of the cloud camera using the first coordinate information and the second coordinate information, the external reference matrix including a rotation matrix R and a translation matrix T;
a storage module 690 configured to store the internal reference matrix and the external reference matrix.
In one embodiment, as shown in FIG. 7, the decolorizing module 630 further comprises:
a random number module 710 configured to apply a random number R to every pixel of the difference image: if R is less than 5 the pixel is made black, and if R is greater than 5 the pixel is made white.
In this embodiment, the human body alignment device provided by the present application may be implemented in the form of a program that runs on an intelligent terminal device. The memory of the intelligent terminal may store the program modules constituting the human body alignment device, for example the first image information obtaining module 600, the second image information obtaining module 610, the difference image generation module 620, the decolorizing module 630, the decoding module 640, the first coordinate obtaining module 650, the second coordinate obtaining module 660, the first computing module 670, the second calculation module 680, and the storage module 690. The program constituted by these program modules causes the processor to execute the steps of the human body alignment method of the embodiments of the present application described in this specification.
For example, the intelligent terminal may perform S101 through the first image information obtaining module 600 of the human body alignment device shown in fig. 6. S102 is performed by the second image information obtaining module 610, S103 by the difference image generation module 620, S104 by the decolorizing module 630, S105 by the decoding module 640, S106 by the first coordinate obtaining module 650, S107 by the second coordinate obtaining module 660, S108 by the first computing module 670, S109 by the second calculation module 680, and S110 by the storage module 690.
It should be noted that the features of the embodiments and examples of the present application may be combined with each other without conflict. The described embodiments are merely some embodiments of the invention, not all of them; other embodiments obtained by those of ordinary skill in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention. The terms "first", "second", and the like in the description, the claims, and the drawings are used to distinguish between similar elements and not necessarily to describe a particular sequential or chronological order; data so termed may be interchanged under appropriate circumstances in describing the embodiments of the invention herein.

Claims (9)

1. A human body alignment method, comprising:
acquiring human body image information shot by a cloud camera, wherein the human body image information comprises a pre-projection human body image;
acquiring projected human body image information, wherein the projected human body image information comprises a post-projection human body image shot by a cloud camera after a coding pattern is projected onto the surface of the human body by four projectors, the four projectors are located on the four vertices of a regular quadrangle centered on the human body, the projection planes of the two projectors located on a diagonal of the regular quadrangle are parallel to each other and perpendicular to that diagonal, and the coding pattern completely covers the human body and makes the texture of each region of the human body different;
generating a difference image M_diff from the pre-projection and post-projection human body images, where M_diff = M_post - M_pre, the pre-projection and post-projection human body images are acquired by the same cloud camera, M_post denotes the post-projection human body image, and M_pre denotes the pre-projection human body image;
decolorizing the difference image to obtain a binary difference image;
decoding the binary difference image to obtain a feature point set, wherein the feature point set comprises a plurality of feature points;
acquiring first coordinate information, wherein the first coordinate information is the coordinate information of a feature point on the binary difference image;
acquiring second coordinate information, wherein the second coordinate information is the coordinate information of the feature point on the image plane of the projector;
obtaining the internal reference matrix of the cloud camera by using the first coordinate information and the second coordinate information;
obtaining the external reference matrix of the cloud camera by using the first coordinate information and the second coordinate information, wherein the external reference matrix comprises a rotation matrix R and a translation matrix T;
and storing the internal reference matrix and the external reference matrix.
2. The human body alignment method according to claim 1, wherein the cloud cameras are fixed on fixing frames at equal vertical spacing, the fixing frames are located on the eight vertices of a regular octagon, the image plane of each cloud camera is perpendicular to a center line, and the center line connects the octagon vertex on which the cloud camera is located to the center of the regular octagon.
3. The human body alignment method according to claim 1, wherein decolorizing the difference image to obtain the binary difference image specifically comprises:
applying a random number R to every pixel of the difference image: if R is less than 5 the pixel is made black, and if R is greater than 5 the pixel is made white.
4. The human body alignment method according to claim 1, wherein the coding pattern is randomly distributed, and the minimum units of the coding pattern differ in both the horizontal and vertical directions.
5. The human body alignment method according to claim 1, wherein obtaining the external reference matrix of the cloud camera by using the first coordinate information and the second coordinate information specifically comprises:
obtaining third coordinate information from the first coordinate information and the second coordinate information by using the triangle relationship, wherein the third coordinate information is the three-dimensional coordinate information of the feature point in the scene;
and obtaining the external reference matrix of the cloud camera from the first coordinate information, the second coordinate information, and the third coordinate information.
6. The human body alignment method as claimed in claim 1, wherein the light projected by the projector irradiates only one surface of the human body.
7. The human body alignment method as claimed in claim 1, wherein the human body stands still for one second in a posture where both arms are opened at 30 degrees and both legs are opened at 15 degrees.
8. A human body alignment device, comprising:
a first image information acquisition module configured to acquire human body image information shot by a cloud camera, the human body image information comprising a pre-projection human body image;
a second image information acquisition module configured to acquire projected human body image information, the projected human body image information comprising a post-projection human body image shot by a cloud camera after a coding pattern is projected onto the surface of the human body by four projectors, the four projectors being located on the four vertices of a regular quadrangle centered on the human body, the projection planes of the two projectors located on a diagonal of the regular quadrangle being parallel to each other and perpendicular to that diagonal, and the coding pattern completely covering the human body and making the texture of each region of the human body different;
a difference image generation module configured to generate a difference image M_diff from the pre-projection and post-projection human body images, where M_diff = M_post - M_pre, the pre-projection and post-projection human body images are acquired by the same cloud camera, M_post denotes the post-projection human body image, and M_pre denotes the pre-projection human body image;
a decolorizing module configured to decolorize the difference image to obtain a binary difference image;
a decoding module configured to decode the binary difference image to obtain a feature point set, the feature point set comprising a plurality of feature points;
a first coordinate acquisition module configured to acquire first coordinate information, the first coordinate information being the coordinate information of a feature point on the binary difference image;
a second coordinate acquisition module configured to acquire second coordinate information, the second coordinate information being the coordinate information of the feature point on the projector image plane;
a first calculation module configured to calculate the internal reference matrix of the cloud camera by using the first coordinate information and the second coordinate information;
a second calculation module configured to obtain the external reference matrix of the cloud camera by using the first coordinate information and the second coordinate information, the external reference matrix comprising a rotation matrix R and a translation matrix T;
a storage module configured to store the internal reference matrix and the external reference matrix.
9. The human body alignment device of claim 8, wherein the decolorizing module further comprises:
a random number module configured to apply a random number R to every pixel of the difference image: if R is less than 5 the pixel is made black, and if R is greater than 5 the pixel is made white.
CN202010739570.8A (priority date 2020-07-28, filing date 2020-07-28): Human body alignment method and device. Status: Active. Granted publication: CN111862241B (en).

Priority Applications (1)

CN202010739570.8A (priority date 2020-07-28, filing date 2020-07-28): Human body alignment method and device, CN111862241B (en)

Applications Claiming Priority (1)

CN202010739570.8A (priority date 2020-07-28, filing date 2020-07-28): Human body alignment method and device, CN111862241B (en)

Publications (2)

CN111862241A, published 2020-10-30
CN111862241B, published 2024-04-12

Family ID: 72948173

Family Applications (1)

CN202010739570.8A (granted): Human body alignment method and device, priority date 2020-07-28, filing date 2020-07-28

Country Status (1)

CN: CN111862241B (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party

US20160300383A1 * (Shenzhen University), priority date 2014-09-10, published 2016-10-13: Human body three-dimensional imaging method and system
CN110060304A * (南京航空航天大学), priority date 2019-03-31, published 2019-07-26: A kind of organism three-dimensional information acquisition method
CN111028295A * (武汉纺织大学), priority date 2019-10-23, published 2020-04-17: A 3D imaging method based on encoded structured light and binocular
CN111275776A * (北京淳中科技股份有限公司), priority date 2020-02-11, published 2020-06-12: Projection augmented reality method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party

V. L. Tran et al., "Accurate RGB-D camera based on structured light techniques", 2017 International Conference on System Science and Engineering (ICSSE), pages 235-238
林嘉鑫, "基于面结构光的机械工件三维扫描系统研究与设计" (Research and design of a 3D scanning system for mechanical workpieces based on surface structured light), pages 25-62

Also Published As

CN111862241B, published 2024-04-12

Similar Documents

Publication Publication Date Title
US11354840B2 (en) Three dimensional acquisition and rendering
JPWO2018235163A1 (en) Calibration apparatus, calibration chart, chart pattern generation apparatus, and calibration method
CN110827392B (en) Monocular image three-dimensional reconstruction method, system and device
CN110910431A (en) A multi-view 3D point set restoration method based on monocular camera
KR102222290B1 (en) Method for gaining 3D model video sequence
CN111009030A (en) A multi-view high-resolution texture image and binocular 3D point cloud mapping method
CN106170086B (en) Method and device thereof, the system of drawing three-dimensional image
JPH11175762A (en) Light environment measuring instrument and device and method for shading virtual image using same
JP2024537798A (en) Photographing and measuring method, device, equipment and storage medium
Pagani et al. Dense 3D Point Cloud Generation from Multiple High-resolution Spherical Images.
CN113902781B (en) Three-dimensional face reconstruction method, device, equipment and medium
CN114359406A (en) Calibration of auto-focusing binocular camera, 3D vision and depth point cloud calculation method
Yu et al. Calibration for camera–projector pairs using spheres
Krutikova et al. Creation of a depth map from stereo images of faces for 3D model reconstruction
CN110059537B (en) Three-dimensional face data acquisition method and device based on Kinect sensor
CN112446926B (en) Relative position calibration method and device for laser radar and multi-eye fish-eye camera
CN113624159A (en) Micro-laser three-dimensional model reconstruction system and method
CN116147534A (en) A mirror-assisted multi-view 3D laser scanning system and complex surface panorama measurement method
CN108322730A (en) A kind of panorama depth camera system acquiring 360 degree of scene structures
CN113763480B (en) Combined calibration method for multi-lens panoramic camera
CN113781305A (en) Point cloud fusion method of double-monocular three-dimensional imaging system
CN116524022B (en) Offset data calculation method, image fusion device and electronic equipment
CN111862241B (en) Human body alignment method and device
Yamazaki et al. Coplanar shadowgrams for acquiring visual hulls of intricate objects
WO2022175688A1 (en) Image processing

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant