
CN111862241B - Human body alignment method and device - Google Patents

Human body alignment method and device

Info

Publication number
CN111862241B
CN111862241B CN202010739570.8A
Authority
CN
China
Prior art keywords
human body
image
coordinate information
projection
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010739570.8A
Other languages
Chinese (zh)
Other versions
CN111862241A (en)
Inventor
蒋亚洪
潘永路
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Youchain Times Technology Co ltd
Original Assignee
Hangzhou Youchain Times Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Youchain Times Technology Co ltd filed Critical Hangzhou Youchain Times Technology Co ltd
Priority to CN202010739570.8A
Publication of CN111862241A
Application granted granted Critical
Publication of CN111862241B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30204 Marker
    • G06T2207/30208 Marker matrix

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a human body alignment method and device comprising the following steps: acquiring human body image information shot by a cloud camera; acquiring projected human body image information; generating an intersection image from the pre-projection human body image and the post-projection human body image; processing and de-coloring the intersection image to obtain a binary intersection image; acquiring a feature point set by decoding the binary intersection image; acquiring first coordinate information; acquiring second coordinate information; obtaining a cloud camera internal reference matrix using the first coordinate information and the second coordinate information; obtaining an external reference matrix of the cloud camera using the first coordinate information and the second coordinate information; and storing the internal reference matrix and the external reference matrix. The invention can obtain camera parameters with higher accuracy and improve human body alignment efficiency.

Description

Human body alignment method and device
Technical Field
The invention relates to the technical field of machine vision, in particular to a human body alignment method and device.
Background
Human body alignment refers to the process of establishing a positional relationship between camera image plane pixels and human body scene points and acquiring camera parameters from that relationship, so that subsequent human body image processing can be performed better. However, a camera is affected by its own characteristics and by various environmental factors during use, so the images it captures may not provide enough information; no accurate correspondence can then be established between the obtained two-dimensional image information and the real human body target information, and the camera parameters cannot be accurately calculated. To correctly and effectively use two-dimensional images of the human body to acquire camera parameters, alignment of the human body images is a problem that must be solved. Prior-art methods suffer from complex calculation, low alignment efficiency, poor performance in specific environments, and low accuracy of the acquired camera parameters.
Disclosure of Invention
The invention aims to solve the problems of prior-art human body alignment methods: complex calculation, low alignment efficiency, poor performance in specific environments, and low accuracy of the acquired camera parameters.
In order to achieve the above purpose, the present invention provides a human body alignment method and apparatus.
The human body alignment method comprises the following steps:
acquiring human body image information shot by a cloud camera, wherein the human body image information comprises a human body image before projection;
the method comprises the steps of obtaining projection human body image information, wherein the projection human body image information comprises a projected human body image shot by a cloud camera after a coding pattern is projected onto the surface of a human body through four projectors, the four projectors are positioned on four vertexes of a regular quadrangle taking the human body as a center, projection planes of two projectors positioned on a diagonal line of the regular quadrangle are parallel to each other and perpendicular to the diagonal line, and the coding pattern completely covers the human body and enables textures of each area of the human body to be different;
generating an intersection image M_diff according to the pre-projection human body image and the post-projection human body image, M_diff = M_post - M_pre, wherein the pre-projection human body image and the post-projection human body image are human body images acquired by the same cloud camera, M_post represents the post-projection human body image, and M_pre represents the pre-projection human body image;
the intersection image is processed and de-colored to obtain a binary intersection image;
obtaining a characteristic point set by decoding the binary intersection image, wherein the characteristic point set comprises a plurality of characteristic points;
acquiring first coordinate information, wherein the first coordinate information is coordinate information of a feature point on the binary intersection image;
acquiring second coordinate information, wherein the second coordinate information is coordinate information of the feature points on the image plane of the projector;
obtaining a cloud camera internal reference matrix by utilizing the first coordinate information and the second coordinate information;
obtaining an external parameter matrix of the cloud camera by utilizing the first coordinate information and the second coordinate information, wherein the external parameter matrix comprises a rotation matrix R and a translation matrix T;
and storing the internal reference matrix and the external reference matrix.
Further, the cloud cameras are fixed on fixing frames at equal vertical spacing; the fixing frames are located on the eight vertices of a regular octagon, and the image plane of each cloud camera is perpendicular to a central connecting line, the central connecting line being the line joining the octagon vertex where the cloud camera is located and the center of the regular octagon.
Further, the step of processing and de-coloring the intersection image to obtain the binary intersection image specifically includes: applying a random number R to all feature points of the intersection image; if R is smaller than 5 the feature point is set to black, and if R is larger than 5 it is set to white.
Further, the coding patterns are randomly distributed, and the minimum units of the coding patterns are different in both the horizontal direction and the vertical direction.
Further, the obtaining the external parameter matrix of the camera by using the first coordinate information and the second coordinate information specifically includes:
obtaining third coordinate information by utilizing a triangle relation through the first coordinate information and the second coordinate information, wherein the third coordinate information is three-dimensional coordinate information of the feature point in the scene;
and obtaining an external parameter matrix of the camera through the first coordinate information, the second coordinate information and the third coordinate information.
Further, the light projected by the projector can only illuminate one face of the human body.
Further, the human body stands in a posture in which both arms are opened by 30 degrees and both legs are opened by 15 degrees and remains stationary for one second.
A body alignment device, the device comprising:
a first image information acquisition module configured to acquire human body image information shot by a cloud camera, wherein the human body image information comprises a pre-projection human body image;
the second image information acquisition module is configured to acquire projection human body image information, the projection human body image information comprises a projected human body image shot by a cloud camera after a coding pattern is projected onto the surface of a human body through four projectors, the four projectors are positioned on four vertexes of a regular quadrangle taking the human body as a center, projection planes of two projectors positioned on a diagonal line of the regular quadrangle are parallel to each other and perpendicular to the diagonal line, and the coding pattern completely covers the human body and enables textures of each area of the human body to be different;
an intersection image generation module configured to generate an intersection image M_diff from the pre-projection human body image and the post-projection human body image, M_diff = M_post - M_pre, wherein the pre-projection human body image and the post-projection human body image are human body images acquired by the same cloud camera, M_post represents the post-projection human body image, and M_pre represents the pre-projection human body image;
the processing and de-coloring module is configured to process and de-color the intersection image to obtain a binary intersection image;
the decoding module is configured to obtain a characteristic point set by decoding the binary intersection image, wherein the characteristic point set comprises a plurality of characteristic points;
the first coordinate acquisition module is configured to acquire first coordinate information, wherein the first coordinate information is coordinate information of the feature points on the binary intersection image;
the second coordinate acquisition module is configured to acquire second coordinate information, wherein the second coordinate information is coordinate information of the feature points on the image plane of the projector;
the first computing module is configured to compute a cloud camera internal reference matrix by utilizing the first coordinate information and the second coordinate information;
the second computing module is configured to obtain an external parameter matrix of the camera by using the first coordinate information and the second coordinate information, wherein the external parameter matrix comprises a rotation matrix R and a translation matrix T;
a storage module configured to store the internal reference matrix and the external reference matrix.
Further, the processing and de-coloring module further comprises: a random number module configured to apply a random number R to all feature points of the intersection image; if R is smaller than 5 the feature point is set to black, and if R is larger than 5 it is set to white.
The invention has the beneficial effects that:
according to the human body alignment method and device, the cloud camera is used for respectively shooting an un-projected human body and the human body projected with different coding patterns which are randomly changed in the horizontal direction and the vertical direction, so that human body alignment and camera parameters are obtained. The method has the advantages that the intersection image is generated according to the human body image before projection and the human body image after projection, only the matrix represented by the human body image after projection and the matrix represented by the human body image before projection are needed to be operated to generate the intersection information, the whole operation process is simple and quick, the ground color of the human body image is removed, the interference of the ground color of the human body image on the characteristic point extraction and matching process is eliminated, and the speed and efficiency in human body alignment are improved.
In order to obtain camera parameters with higher accuracy, a random number R is utilized to carry out global binarization operation on an intersection image generated by a human body image before projection and a human body image after projection, so that each pixel point in the intersection image can be correctly decoded, the alignment calculation efficiency is improved, enough information can be extracted under the condition that the characteristics of the human body surface are not obvious, and the camera parameters can be accurately calculated by utilizing the information.
The cloud camera array is regularly arranged, and the image plane of each cloud camera is perpendicular to the line connecting that camera's projection point on the horizontal plane with the center of the regular octagon. This arrangement simplifies the calculation of camera parameters during human body alignment and greatly improves alignment efficiency.
The features and advantages of the present invention will be described in detail by way of example with reference to the accompanying drawings.
Drawings
FIG. 1 is a flow chart of a human alignment method according to an embodiment of the present invention;
FIG. 2 is a pre-projection human body image taken by a cloud camera from one perspective in an embodiment of the present invention;
FIG. 3 is a projected human body image taken by a cloud camera from one perspective in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a binary intersection image in an embodiment of the present invention;
FIG. 5 is a flowchart of obtaining an extrinsic matrix of a camera using first coordinate information and second coordinate information in an embodiment of the present invention;
FIG. 6 is a block diagram of a human alignment device in accordance with an embodiment of the present invention;
fig. 7 is a block diagram of an apparatus for processing a random number module in a color removal module according to an embodiment of the present invention.
Detailed Description
Camera alignment refers to establishing a positional relationship between camera image plane pixels and scene points. According to the camera imaging model, the parameters of the model are obtained by using the correspondence of feature points between image coordinates and scene coordinates. The parameters of the camera imaging model include internal parameters and external parameters.
In one embodiment, a human alignment method is provided. Referring to fig. 1, the human alignment method specifically includes the steps of:
s101, acquiring human body image information shot by a cloud camera.
Wherein the human body image information includes a pre-projection human body image.
In this embodiment, fig. 2 is a pre-projection human body image captured by a cloud camera from a single viewing angle; no coding pattern is projected onto the human body surface in this image. Human body images shot by different cloud cameras from different angles differ slightly in detail: any three-dimensional human body feature point (i.e., scene point) lies at a different position in the pictures taken by different cloud cameras, so its corresponding two-dimensional coordinates differ.
In other embodiments, the human body stands in a posture with both arms opened 30 degrees and both legs opened 15 degrees and remains stationary for one second. The captured pre-projection human body image therefore shows no ghosting caused by movement, and every feature point is clear and non-overlapping, which benefits subsequent processing.
In other embodiments, a plurality of cloud cameras are fixed on each fixing frame at equal vertical spacing, with the same number of cloud cameras on every frame. The fixing frames stand on the eight vertices of a regular octagon, and the image plane of each cloud camera is perpendicular to a central connecting line, i.e., the line joining the octagon vertex where the camera stands and the center of the regular octagon. This regular arrangement of the cloud camera array, with each image plane perpendicular to the line between the camera's projection point on the horizontal plane and the octagon center, simplifies the calculation of camera parameters during human body alignment and greatly improves alignment efficiency.
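To illustrate this layout, the following sketch (not part of the patent; the 2.5 m radius and the use of 2D frame positions are assumptions made for the example) computes where each fixing frame stands on a regular octagon and the direction each camera's optical axis must face:

```python
import numpy as np

def octagon_camera_poses(radius: float, num_frames: int = 8):
    """Place one fixing frame on each vertex of a regular octagon and
    aim every camera on it at the octagon's center (hypothetical helper)."""
    poses = []
    for k in range(num_frames):
        angle = 2 * np.pi * k / num_frames
        position = np.array([radius * np.cos(angle), radius * np.sin(angle)])
        # The image plane is perpendicular to the central connecting line,
        # so the optical axis points from the vertex toward the center.
        view_dir = -position / np.linalg.norm(position)
        poses.append((position, view_dir))
    return poses

# Example with an assumed 2.5 m distance from each frame to the center.
for pos, axis in octagon_camera_poses(2.5):
    print(f"frame at {pos.round(2)}, optical axis {axis.round(2)}")
```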
In other embodiments, the plurality of cloud cameras acquire pre-projection human body images from multiple angles and store them by sequence number, further facilitating subsequent processing of the human body images.
S102, acquiring projection human body image information.
The projected human body image information comprises projected human body images shot by a cloud camera after coding patterns are projected onto the surface of a human body through four projectors, the four projectors are positioned on four vertexes of a regular quadrangle taking the human body as a center, projection planes of two projectors positioned on diagonal lines of the regular quadrangle are parallel to each other and perpendicular to the diagonal lines, and the coding patterns completely cover the human body and enable textures of each area of the human body to be different.
In this embodiment, as shown in fig. 3, fig. 3 is a projected human body image captured by a cloud camera from a single viewing angle, where the coding pattern completely covers the human body and makes textures of each region of the human body different. The coding pattern is used to help determine the correspondence of the feature points between camera-projector.
In other embodiments, the human body can be regarded as a cuboid. The four projectors each project a coding pattern onto the cuboid surface facing them, the light projected by any one projector illuminates only one surface of the human body, and the coding patterns on the projection planes (DMDs) of the four projectors differ from one another, so the coding patterns projected onto the four surfaces of the human body are all different. This avoids confusion in the one-to-one correspondence among the three kinds of feature points (feature points of the human body, feature points on the camera image plane, and the corresponding feature points of the coding pattern on the projection plane) and facilitates subsequent feature point extraction.
S103, generating an intersection image M_diff according to the pre-projection human body image and the post-projection human body image.
Here M_diff = M_post - M_pre, where the pre-projection human body image and the post-projection human body image are acquired by the same cloud camera, M_post represents the post-projection human body image, and M_pre represents the pre-projection human body image. The intersection image is the post-projection human body image minus the pre-projection human body image.
In this embodiment, the pre-projection human body image, the post-projection human body image, and the intersection image may each be expressed as a matrix; for example, a pixel in any of these images can be expressed as an element m of the matrix. The intersection image is generated from the pre-projection and post-projection human body images: only the matrix represented by the post-projection image and the matrix represented by the pre-projection image need to be operated on, the whole operation is simple and fast, the ground color of the human body image is removed, and the interference of the ground color with feature point extraction and matching is eliminated.
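A minimal NumPy sketch of this per-pixel subtraction (illustrative only; the uint8 image format and the clipping behavior are assumptions, since the patent does not prescribe an implementation):

```python
import numpy as np

def intersection_image(m_post: np.ndarray, m_pre: np.ndarray) -> np.ndarray:
    """Compute M_diff = M_post - M_pre element-wise.

    Both images must come from the same cloud camera so pixels align.
    Casting to a signed type avoids uint8 wrap-around before clipping."""
    diff = m_post.astype(np.int16) - m_pre.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```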
S104, the intersection image is processed and de-colored, and a binary intersection image is obtained.
In this embodiment, to obtain more accurate, higher-resolution feature point extraction and matching results, a precise binarization operation is performed on the intersection image generated from the pre-projection and post-projection human body images so that every pixel point in the intersection image can be correctly decoded. A random number R is applied to all pixel points of the intersection image; if R is smaller than 5 the pixel point is set to black, and if R is larger than 5 it is set to white.
The environment in which the human body is photographed with a cloud camera is often unknown and complex. For example, the same projected light appears darker on a black surface than on a white surface, which means the gray values of the intersection image differ across different parts of the human body. Because the surface information of the human body in the three-dimensional scene cannot be predicted in advance, decoding of the coding patterns and extraction and matching of feature points often become difficult, leading to low resolution and low accuracy. Applying a random-number function to all pixel points in the intersection image makes the local texture of the human body surface vary sharply while appearing globally random, which greatly improves decoding accuracy and substantially reduces computational complexity and decoding time.
In other embodiments, a global gray threshold is set: intersection image pixels whose gray value is above the threshold are set to 1 (displayed as white), and pixels whose gray value is below the threshold are set to 0 (displayed as black).
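A sketch of this global-threshold variant (the threshold value of 128 is an assumption; the patent does not fix a specific value):

```python
import numpy as np

def binarize(intersection: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Map intersection-image pixels above the global gray threshold to 1
    (white) and all remaining pixels to 0 (black)."""
    return (intersection > threshold).astype(np.uint8)
```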
S105, decoding the binary intersection image to obtain a feature point set.
The feature point set includes a plurality of feature points. Fig. 4 is a schematic diagram of a binary intersection image. Binarizing the intersection image greatly reduces the computational complexity and time required for decoding, making feature points easier to obtain. The feature points on the binary intersection image correspond one-to-one with the coding pattern on the projector image plane (DMD), and the human body alignment device establishes this one-to-one correspondence between feature points on the binary intersection image and feature points on the projector image plane by decoding the binary intersection image.
In this embodiment, the coding pattern is random in each lattice and differs from lattice to lattice; that is, the minimal units of the coding pattern differ in both the horizontal and the vertical direction, so each minimal unit has a unique feature value in both directions. The coding pattern on the DMD is projected onto the human body, and the intersection image is the projection of that pattern onto the camera image plane; decoding the binary intersection image therefore means extracting the feature value of each minimal unit. Since the pattern guarantees that every part of the body carries different features, the feature points are salient and easy to extract, which reduces the computational complexity and the time required for feature point extraction.
In other embodiments, the details of the coding pattern differ at different locations on the body; for example, the coding pattern projected onto a feature point on the left shoulder differs from the pattern on the rest of the body. The purpose is to determine and match feature points across different images more reliably by establishing a one-to-one correspondence between the coding pattern and the feature points of the binary intersection image. By decoding the binary intersection image, one can know which pixel of the projector DMD emitted a given feature point of the binary intersection image, and hence the imaging position of the human body surface on the virtual projector image.
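A sketch of one way such decoding could work, under the assumption that a minimal unit is a small binary window whose content is unique across the whole coding pattern (the 4x4 window size and exact matching are assumptions, not the patent's specification):

```python
import numpy as np

def build_codebook(pattern: np.ndarray, win: int = 4) -> dict:
    """Index every win x win window of the binary coding pattern by its
    content; uniqueness lets a window identify its (u_P, v_P) DMD position."""
    codebook = {}
    h, w = pattern.shape
    for v in range(h - win + 1):
        for u in range(w - win + 1):
            codebook[pattern[v:v + win, u:u + win].tobytes()] = (u, v)
    return codebook

def decode(binary_img: np.ndarray, codebook: dict, win: int = 4):
    """Yield matched pairs: the window position on the binary intersection
    image (first coordinates) and the DMD position (second coordinates)."""
    h, w = binary_img.shape
    for v in range(h - win + 1):
        for u in range(w - win + 1):
            key = binary_img[v:v + win, u:u + win].tobytes()
            if key in codebook:
                yield (u, v), codebook[key]
```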
S106 acquires first coordinate information.
The first coordinate information is coordinate information of the feature points on the binary intersection image.
In the present embodiment, [u_C, v_C] represents the coordinates of a feature point on the binary intersection image, which are also the coordinates of the center point of the corresponding minimal unit of the binary intersection image. The coordinate origin may be taken as the top-left vertex of the binary intersection image, with the X axis extending horizontally and the Y axis extending vertically from the origin. Every minimal unit of the binary intersection image thus has a uniquely determined coordinate value.
S107 acquires second coordinate information.
The second coordinate information is coordinate information of the feature points on the projector image plane, and the feature points of the projector image plane are matched with the feature points of each binary intersection image.
In the present embodiment, [u_P, v_P] represents the coordinates of the feature point on the projector image plane (DMD), which are also the coordinates of the center point of the corresponding minimal unit of the coding pattern on the DMD.
The coordinate origin may be an upper left corner vertex of the projector image plane, the X axis extends in a horizontal direction with the coordinate origin, and the Y axis extends in a vertical direction with the coordinate origin. The minimum unit of the coding pattern on the DMD has a uniquely determined coordinate value.
S108, calculating a cloud camera internal reference matrix by using the first coordinate information and the second coordinate information.
The internal reference matrix transforms camera coordinates into image coordinates. In the standard pinhole model it can be written as

A = [ f_u  γ    u_0 ;
      0    f_v  v_0 ;
      0    0    1   ]

where f_u and f_v are the focal lengths along the u and v coordinate directions; the focal length is the distance from the optical center of the lens to the point where parallel incident light converges, reflects the focusing capability of the lens, and in camera projection equals the distance from the aperture (lens) to the image plane. γ is the skew (magnification) factor, and (u_0, v_0) is the principal point, the projection of the optical center onto the image plane.
And S109, obtaining an external parameter matrix of the cloud camera by using the first coordinate information and the second coordinate information.
Wherein the extrinsic matrix comprises a rotation matrix R and a translation matrix T. The extrinsic matrix can be expressed as [R|T]: the left block is a 3x3 rotation matrix and the right block is a 3x1 translation column vector.
In this embodiment, the external reference matrix of the cloud camera describes the position of the cloud camera in world coordinates and the direction in which it points. The extrinsic matrix transforms three-dimensional world coordinates into cloud camera coordinates: the translation vector T gives the position of the world coordinate origin in cloud camera coordinates, and the columns of R give the directions of the world coordinate axes in cloud camera coordinates.
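A sketch of applying the extrinsic transform to a world point (the names are illustrative):

```python
import numpy as np

def to_camera_coords(R: np.ndarray, T: np.ndarray,
                     x_world: np.ndarray) -> np.ndarray:
    """x_cam = R @ x_world + T, i.e. the action of the extrinsic
    matrix [R|T] on a 3D point given in world coordinates."""
    return R @ x_world + T
```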
S110 stores the internal reference matrix and the external reference matrix.
In this embodiment, the internal reference matrix and the external reference matrix of each cloud camera are stored in the corresponding storage unit according to the corresponding cloud camera.
In one embodiment, as shown in fig. 5, the step of obtaining the extrinsic matrix of the camera by using the first coordinate information and the second coordinate information may specifically be:
s510, obtaining third coordinate information through the first coordinate information and the second coordinate information by utilizing a triangle relation.
Wherein the third coordinate information is three-dimensional coordinate information of the feature point in the scene.
In this embodiment, a three-dimensional scene point, its projection point on the two-dimensional camera image plane (CCD), and the corresponding projection pixel on the projector image plane (DMD) form a triangle. O_c and O_p are the focal points (optical centers) of the cloud camera lens and of the projector image plane, respectively, where all light rays converge. For a given three-dimensional scene point, its point on the cloud camera image plane and the two focal points O_c and O_p define a plane; this plane intersects the image plane of the cloud camera and the image plane of the projector at E_c and E_p, respectively. From the coordinates [u_C, v_C] of a feature point on the binary intersection image, its coordinate information [u_P, v_P] on the projector image plane, and the triangular geometric relationship between them, the coordinates of the three-dimensional scene point are obtained and can be expressed as [x_W, y_W, z_W].
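In practice, one way to realize this triangle relation is to treat the projector as an inverse camera and triangulate with OpenCV; the sketch below assumes the 3x4 projection matrices A[R|T] of the cloud camera and of the projector are already available, which the patent does not mandate:

```python
import numpy as np
import cv2

def triangulate(P_cam: np.ndarray, P_proj: np.ndarray,
                pts_cam: np.ndarray, pts_proj: np.ndarray) -> np.ndarray:
    """Recover [x_W, y_W, z_W] for matched feature points.

    P_cam, P_proj: 3x4 projection matrices of camera and projector.
    pts_cam, pts_proj: 2xN arrays of [u_C, v_C] and [u_P, v_P]."""
    pts_h = cv2.triangulatePoints(P_cam, P_proj,
                                  pts_cam.astype(np.float64),
                                  pts_proj.astype(np.float64))
    return (pts_h[:3] / pts_h[3]).T  # N x 3 Euclidean coordinates
```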
S520, obtaining an external parameter matrix of the camera through the first coordinate information, the second coordinate information and the third coordinate information.
In this embodiment, the following projection formulas are used:

s_C [u_C, v_C, 1]^T = A_C [R_C | T_C] [x_W, y_W, z_W, 1]^T

s_P [u_P, v_P, 1]^T = A_P [R_P | T_P] [x_W, y_W, z_W, 1]^T

where the subscripts C and P denote the cloud camera and the projector, respectively. Given the known first coordinate information [u_C, v_C] and second coordinate information [u_P, v_P], the simultaneous equations can be solved for the internal reference matrix A, the external reference matrix [R_C | T_C] of the camera, and the external reference matrix [R_P | T_P] of the projector.
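Once the internal reference matrix A from S108 is available, these simultaneous equations can in practice be solved with a PnP solver; the sketch below uses OpenCV's solvePnP and assumes negligible lens distortion (the patent does not name a particular solver):

```python
import numpy as np
import cv2

def solve_extrinsics(A: np.ndarray, obj_pts: np.ndarray,
                     img_pts: np.ndarray):
    """Solve s [u, v, 1]^T = A [R|T] [x_W, y_W, z_W, 1]^T for [R|T].

    obj_pts: N x 3 third coordinate information [x_W, y_W, z_W].
    img_pts: N x 2 first (or second) coordinate information."""
    ok, rvec, tvec = cv2.solvePnP(obj_pts.astype(np.float32),
                                  img_pts.astype(np.float32),
                                  A, distCoeffs=None)
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec
```

The same call with the projector's intrinsic matrix A_P and the second coordinate information would yield the projector's external reference matrix [R_P|T_P].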
As shown in fig. 6, in one embodiment, a body alignment device is provided, the device comprising:
a first image information acquisition module 600 configured to acquire human body image information captured by a cloud camera, the human body image information including a pre-projection human body image;
a second image information acquisition module 610 configured to acquire projected human body image information including a projected human body image photographed by a cloud camera after a coding pattern is projected onto a human body surface by four projectors located on four vertices of a regular quadrangle centering on a human body, projection planes of two projectors located on a diagonal line of the regular quadrangle being parallel to each other and perpendicular to the diagonal line, the coding pattern completely covering the human body and making textures of each region of the human body different;
an intersection image generation module 620 configured to generate an intersection image M_diff from the pre-projection human body image and the post-projection human body image, M_diff = M_post - M_pre, wherein the pre-projection human body image and the post-projection human body image are human body images acquired by the same cloud camera, M_post represents the post-projection human body image, and M_pre represents the pre-projection human body image;
a processing and de-coloring module 630 configured to process and de-color the intersection image to obtain a binary intersection image;
a decoding module 640 configured to obtain a feature point set by decoding the binary cross image, the feature point set including a plurality of feature points;
a first coordinate acquisition module 650 configured to acquire first coordinate information, which is coordinate information of a feature point on the binary intersection image;
a second coordinate acquisition module 660 configured to acquire second coordinate information, which is coordinate information of the feature point on the projector image plane;
a first calculation module 670 configured to calculate a cloud camera reference matrix using the first coordinate information and the second coordinate information;
a second calculation module 680 configured to obtain an extrinsic matrix of the camera using the first coordinate information and the second coordinate information, the extrinsic matrix including a rotation matrix R and a translation matrix T;
a storage module 690 configured to store the internal reference matrix and the external reference matrix.
In one embodiment, as shown in FIG. 7, the processing and de-coloring module 630 further comprises:
a random number module 710 configured to apply a random number R to all pixels of the intersection image; if R is smaller than 5 the pixel is set to black, and if R is larger than 5 it is set to white.
In this embodiment, the human body alignment device provided in the present application may be implemented in the form of a program that runs on an intelligent terminal device. The memory of the intelligent terminal may store the program modules constituting the human body alignment device, such as the first image information acquisition module 600, the second image information acquisition module 610, the intersection image generation module 620, the processing and de-coloring module 630, the decoding module 640, the first coordinate acquisition module 650, the second coordinate acquisition module 660, the first calculation module 670, the second calculation module 680, and the storage module 690. The program of each module causes the processor to carry out the steps of the human body alignment method of each embodiment described in this specification.
For example, the smart terminal may perform S101 through the first image information acquisition module 600 in the human body alignment device shown in fig. 6. S102 is performed by the second image information acquisition module 610. S103 is performed by the intersection image generation module 620. S104 is performed by the processing and de-coloring module 630. S105 is performed by the decoding module 640. S106 is performed by the first coordinate acquisition module 650. S107 is performed by the second coordinate acquisition module 660. S108 is performed by the first calculation module 670. S109 is performed by the second calculation module 680.
S110 is performed by the storage module 690.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments of the present application may be combined with each other. It will be apparent that the described embodiments are merely some, but not all embodiments of the invention. Based on the embodiments of the present invention, other embodiments that may be obtained by those of ordinary skill in the art without making any inventive effort should fall within the scope of the present invention. It should be noted that the terms "first," "second," and the like in the description and the claims and drawings of the present invention are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe the embodiments of the invention herein.

Claims (8)

1. A method of human alignment, comprising:
acquiring human body image information shot by a cloud camera, wherein the human body image information comprises a human body image before projection;
the method comprises the steps of obtaining projection human body image information, wherein the projection human body image information comprises a projected human body image shot by a cloud camera after a coding pattern is projected onto the surface of a human body through four projectors, the four projectors are positioned on four vertexes of a regular quadrangle taking the human body as a center, projection planes of two projectors positioned on a diagonal line of the regular quadrangle are parallel to each other and perpendicular to the diagonal line, and the coding pattern completely covers the human body and enables textures of each area of the human body to be different;
generating an intersection image M_diff from the pre-projection human body image and the post-projection human body image, M_diff = M_post - M_pre, wherein the pre-projection human body image and the post-projection human body image are human body images acquired by the same cloud camera, M_post represents the post-projection human body image, and M_pre represents the pre-projection human body image;
the intersection image is processed and de-colored to obtain a binary intersection image;
obtaining a characteristic point set by decoding the binary intersection image, wherein the characteristic point set comprises a plurality of characteristic points;
acquiring first coordinate information, wherein the first coordinate information is coordinate information of a feature point on the binary intersection image;
acquiring second coordinate information, wherein the second coordinate information is coordinate information of the feature points on the image plane of the projector;
obtaining a cloud camera internal reference matrix by utilizing the first coordinate information and the second coordinate information;
obtaining an external reference matrix of the cloud camera by utilizing the first coordinate information and the second coordinate information, wherein the external reference matrix comprises a rotation matrix R and a translation matrix T, and the step of obtaining the external reference matrix of the cloud camera comprises the following steps: obtaining third coordinate information through the first coordinate information and the second coordinate information by utilizing a triangle relation, wherein the third coordinate information is three-dimensional coordinate information of the feature points in the scene; obtaining an external parameter matrix of the camera through the first coordinate information, the second coordinate information and the third coordinate information;
and storing the internal reference matrix and the external reference matrix.
2. The method of aligning human bodies according to claim 1, wherein the cloud cameras are fixed on a fixing frame at the same vertical distance, the fixing frame is located on eight vertexes of a regular octagon, an image plane of each cloud camera is perpendicular to a central connecting line, and the central connecting line is a connecting line of the vertexes of the octagon where the cloud cameras are located and the center of the regular octagon.
3. The method for aligning a human body according to claim 1, wherein the step of processing the cross image to remove color comprises:
by setting a global gray threshold, setting 1 for the intersection image pixels with gray values higher than the threshold, and setting 0 for the intersection image pixels with gray values lower than the threshold, wherein the pixel setting 1 is displayed as white, and the pixel setting 0 is displayed as black.
4. The human alignment method of claim 1, wherein the coding patterns are randomly distributed, and minimum units of the coding patterns are different in both horizontal and vertical directions.
5. The body alignment method of claim 1, wherein the light projected by the projector can only illuminate one face of the body.
6. The method of aligning a human body according to claim 1, wherein the human body stands in a posture in which both arms are opened by 30 degrees and both legs are opened by 15 degrees and remains stationary for one second.
7. A body alignment device, comprising:
a first image information acquisition module configured to acquire human body image information shot by a cloud camera, wherein the human body image information comprises a pre-projection human body image;
the second image information acquisition module is configured to acquire projection human body image information, the projection human body image information comprises a projected human body image shot by a cloud camera after a coding pattern is projected onto the surface of a human body through four projectors, the four projectors are positioned on four vertexes of a regular quadrangle taking the human body as a center, projection planes of two projectors positioned on a diagonal line of the regular quadrangle are parallel to each other and perpendicular to the diagonal line, and the coding pattern completely covers the human body and enables textures of each area of the human body to be different;
an intersection image generation module configured to generate an intersection image M_diff from the pre-projection human body image and the post-projection human body image, M_diff = M_post - M_pre, wherein the pre-projection human body image and the post-projection human body image are human body images acquired by the same cloud camera, M_post represents the post-projection human body image, and M_pre represents the pre-projection human body image;
the processing and de-coloring module is configured to process and de-color the intersection image to obtain a binary intersection image;
the decoding module is configured to obtain a characteristic point set by decoding the binary intersection image, wherein the characteristic point set comprises a plurality of characteristic points;
the first coordinate acquisition module is configured to acquire first coordinate information, wherein the first coordinate information is coordinate information of the feature points on the binary intersection image;
the second coordinate acquisition module is configured to acquire second coordinate information, wherein the second coordinate information is coordinate information of the feature points on the image plane of the projector;
the first computing module is configured to compute a cloud camera internal reference matrix by utilizing the first coordinate information and the second coordinate information;
the second computing module is configured to obtain an external parameter matrix of the camera by using the first coordinate information and the second coordinate information, wherein the external parameter matrix comprises a rotation matrix R and a translation matrix T, and the step of obtaining the external parameter matrix of the cloud camera comprises the following steps: obtaining third coordinate information through the first coordinate information and the second coordinate information by utilizing a triangle relation, wherein the third coordinate information is three-dimensional coordinate information of the feature points in the scene; obtaining an external parameter matrix of the camera through the first coordinate information, the second coordinate information and the third coordinate information;
a storage module configured to store the internal reference matrix and the external reference matrix.
8. The body alignment device of claim 7, wherein the process decolouring module further comprises:
by setting a global gray threshold, setting 1 for the intersection image pixels with gray values higher than the threshold, and setting 0 for the intersection image pixels with gray values lower than the threshold, wherein the pixel setting 1 is displayed as white, and the pixel setting 0 is displayed as black.
CN202010739570.8A 2020-07-28 2020-07-28 Human body alignment method and device Active CN111862241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010739570.8A CN111862241B (en) 2020-07-28 2020-07-28 Human body alignment method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010739570.8A CN111862241B (en) 2020-07-28 2020-07-28 Human body alignment method and device

Publications (2)

Publication Number Publication Date
CN111862241A CN111862241A (en) 2020-10-30
CN111862241B (en) 2024-04-12

Family

ID=72948173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010739570.8A Active CN111862241B (en) 2020-07-28 2020-07-28 Human body alignment method and device

Country Status (1)

Country Link
CN (1) CN111862241B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060304A (en) * 2019-03-31 2019-07-26 南京航空航天大学 A kind of organism three-dimensional information acquisition method
CN111028295A (en) * 2019-10-23 2020-04-17 武汉纺织大学 A 3D imaging method based on encoded structured light and binocular
CN111275776A (en) * 2020-02-11 2020-06-12 北京淳中科技股份有限公司 Projection augmented reality method and device and electronic equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299261B (en) * 2014-09-10 2017-01-25 深圳大学 Three-dimensional imaging method and system for human body

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110060304A (en) * 2019-03-31 2019-07-26 南京航空航天大学 A kind of organism three-dimensional information acquisition method
CN111028295A (en) * 2019-10-23 2020-04-17 武汉纺织大学 A 3D imaging method based on encoded structured light and binocular
CN111275776A (en) * 2020-02-11 2020-06-12 北京淳中科技股份有限公司 Projection augmented reality method and device and electronic equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Accurate RGB-D Camera Based on Structured Light Techniques; V. L. Tran et al.; 2017 International Conference on System Science and Engineering (ICSSE); pp. 235-238 *
Research and Design of a 3D Scanning System for Mechanical Workpieces Based on Surface Structured Light (基于面结构光的机械工件三维扫描系统研究与设计); 林嘉鑫; pp. 25-62 *

Also Published As

Publication number Publication date
CN111862241A (en) 2020-10-30

Similar Documents

Publication Publication Date Title
CN110782394A (en) Panoramic video rapid splicing method and system
JP4245963B2 (en) Method and system for calibrating multiple cameras using a calibration object
CN110490916A (en) Three dimensional object modeling method and equipment, image processing apparatus and medium
WO2018235163A1 (en) Calibration device, calibration chart, chart pattern generation device, and calibration method
JP7657308B2 (en) Method, apparatus and system for generating a three-dimensional model of a scene - Patents.com
CN110910431A (en) A multi-view 3D point set restoration method based on monocular camera
CN111009030A (en) A multi-view high-resolution texture image and binocular 3D point cloud mapping method
CN106203429A (en) Based on the shelter target detection method under binocular stereo vision complex background
JP2024537798A (en) Photographing and measuring method, device, equipment and storage medium
CN114549651B (en) Calibration method and device for multiple 3D cameras based on polyhedral geometric constraint
Zhang et al. Development of an omni-directional 3D camera for robot navigation
KR20200129657A (en) Method for gaining 3D model video sequence
CN112446926B (en) Relative position calibration method and device for laser radar and multi-eye fish-eye camera
Lanman et al. Surround structured lighting for full object scanning
Ringaby et al. Scan rectification for structured light range sensors with rolling shutters
CN111862241B (en) Human body alignment method and device
Zhao et al. Novel optical-markers-assisted point clouds registration for panoramic 3d shape measurement
CN116524022B (en) Offset data calculation method, image fusion device and electronic equipment
CN116433769B (en) Space calibration method, device, electronic equipment and storage medium
Kawasaki et al. Calibration technique for underwater active oneshot scanning system with static pattern projector and multiple cameras
Yamazaki et al. Coplanar shadowgrams for acquiring visual hulls of intricate objects
CN116381712A (en) Measurement method based on linear array camera and ground laser radar combined device
CN112995641B (en) 3D module imaging device and method and electronic equipment
CN111860544B (en) Projection auxiliary clothing feature extraction method and system
Tai et al. A fully automatic approach for fisheye camera calibration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant