
CN112509055A - Acupuncture point positioning system and method based on combination of binocular vision and coded structured light - Google Patents

Acupuncture point positioning system and method based on combination of binocular vision and coded structured light Download PDF

Info

Publication number
CN112509055A
CN112509055A (application CN202011308196.2A)
Authority
CN
China
Prior art keywords
structured light
coordinate system
image
camera
point
Prior art date
Legal status
Granted
Application number
CN202011308196.2A
Other languages
Chinese (zh)
Other versions
CN112509055B (en)
Inventor
刘军
郭剑峰
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202011308196.2A priority Critical patent/CN112509055B/en
Publication of CN112509055A publication Critical patent/CN112509055A/en
Application granted granted Critical
Publication of CN112509055B publication Critical patent/CN112509055B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose


Abstract



The invention discloses an acupoint positioning system and method based on the combination of binocular vision and coded structured light. The human back model is reconstructed in three dimensions: the two-dimensional coordinates captured by the cameras are restored to three-dimensional coordinates in the world coordinate system, the back model is segmented from the image, and the corresponding acupoint information on the back is found in combination with traditional Chinese medicine. The invention uses two cameras and structured light to reconstruct the three-dimensional information of the back, further improving positioning accuracy. Combining coded structured light with binocular vision effectively addresses the difficulty of reconstructing texture-poor scenes such as indoor white walls and untextured objects, strengthens resistance to environmental interference, improves reliability, and leaves greater headroom for depth-map quality.


Description

Acupuncture point positioning system and method based on combination of binocular vision and coded structured light
Technical Field
The invention belongs to the field of medical technology application, and particularly relates to an acupuncture point positioning method based on combination of binocular vision and coded structured light.
Background
In traditional Chinese medicine, acupuncture therapy has a long history and a distinctive curative effect. The treatment, however, places high demands on the practitioner's experience and technique, particularly in the accuracy of acupuncture point location. The body-based acupoint location methods given in medical texts mainly use a particular organ or characteristic body part as a reference from which the relative position of an acupoint is found. At present, automatic acupoint-selection technology at home and abroad is immature: most work remains at the research stage, and examples of clinical application are almost nonexistent. Accurately determining acupoint positions requires years of clinical practice, and doctors with little clinical experience have great difficulty locating the points precisely. There is therefore an urgent need for an apparatus that locates acupuncture points automatically.
To solve this problem, an acupuncture point positioning method combining binocular vision and structured light is developed: the human back model is reconstructed in three dimensions, the two-dimensional coordinates captured by the cameras are restored to three-dimensional coordinates in the world coordinate system, the back model is segmented from the images, and the corresponding acupoint information of the back is found in combination with traditional Chinese medicine.
Disclosure of Invention
The invention aims to provide an acupuncture point positioning system and method combining binocular vision and coded structured light, addressing the deficiencies of the prior art: a three-dimensional reconstruction of the human back model is performed, the two-dimensional coordinates captured by the cameras are restored to three-dimensional coordinates in the world coordinate system, the back model is segmented from the images, and the corresponding acupoint information of the back is found in combination with traditional Chinese medicine.
In order to achieve the purpose, the technical scheme of the invention is as follows:
the acupuncture point positioning system based on the combination of binocular vision and coded structured light comprises two identical cameras, a structured light generator, a support frame, a calibration board, and a main controller; the two cameras and the structured light generator are mounted above the support frame, with the structured light generator in front of the two cameras;
the structured light generator is used for projecting structured light onto the back of a human body, wherein the structured light pattern is a stripe pattern encoded with Gray codes;
the camera is used for capturing an image of the back of the human body projected with the structured light pattern;
the calibration plate is used for calibrating the camera;
the main controller is used for controlling the start of the structured light generator, encoding the structured light pattern for the generator, receiving the image information transmitted by the cameras, and transmitting it to a computer for data analysis.
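The stripe pattern mentioned above is encoded with Gray codes. As an illustration only, the sketch below generates a binary-reflected Gray code sequence and the corresponding bit-plane stripe rows; the bit count and pattern width are arbitrary choices for the example, not values specified by the patent.

```python
def gray_code(n_bits):
    """Binary-reflected Gray code sequence: i XOR (i >> 1)."""
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

def stripe_patterns(n_bits, width):
    """One black/white stripe row per Gray-code bit plane (MSB first).

    Column c belongs to stripe index c * 2**n_bits // width; bit k of
    that stripe's Gray code decides black (0) or white (1) there.
    """
    codes = gray_code(n_bits)
    return [[(codes[c * (2 ** n_bits) // width] >> bit) & 1
             for c in range(width)]
            for bit in range(n_bits - 1, -1, -1)]

patterns = stripe_patterns(3, 8)  # 3 bit planes over an 8-pixel row
```

Adjacent Gray codes differ in exactly one bit, which is why a single mis-thresholded bit at a stripe boundary shifts the decoded index by at most one stripe.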
An acupuncture point positioning method based on combination of binocular vision and coded structured light comprises the following steps:
step (1), camera calibration:
and calibrating by adjusting the direction of the calibration board or the camera by adopting a Zhang Zhengyou calibration method.
During calibration, corner points are extracted from the camera images as feature points; the distortion parameters are estimated by least squares under the actual radial-distortion condition, and maximum-likelihood optimization finally refines the result, yielding the rotation and translation parameters and the camera distortion parameters.
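The least-squares sub-step can be illustrated in isolation: under the radial distortion model x_d = x_u(1 + k1·r² + k2·r⁴), the residual x_d − x_u is linear in (k1, k2), so the coefficients can be recovered by ordinary least squares. The point set and true coefficients below are synthetic; this is a sketch of one sub-step only, not the full Zhang calibration pipeline.

```python
import numpy as np

# Synthetic ideal (undistorted) normalized image points.
rng = np.random.default_rng(0)
ideal = rng.uniform(-0.5, 0.5, size=(100, 2))

# Apply a known radial distortion: x_d = x_u * (1 + k1*r^2 + k2*r^4).
k1_true, k2_true = -0.20, 0.05
r2 = np.sum(ideal ** 2, axis=1)
distorted = ideal * (1 + k1_true * r2 + k2_true * r2 ** 2)[:, None]

# The residual x_d - x_u = x_u*(k1*r^2 + k2*r^4) is linear in (k1, k2):
# stack one equation per coordinate and solve by least squares.
A = np.vstack([
    np.column_stack([ideal[:, 0] * r2, ideal[:, 0] * r2 ** 2]),
    np.column_stack([ideal[:, 1] * r2, ideal[:, 1] * r2 ** 2]),
])
b = np.concatenate([distorted[:, 0] - ideal[:, 0],
                    distorted[:, 1] - ideal[:, 1]])
k1_est, k2_est = np.linalg.lstsq(A, b, rcond=None)[0]
```

With noiseless synthetic data the estimate matches the true coefficients to machine precision; with real corner detections the same system is solved in a least-squares sense before maximum-likelihood refinement.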
Step (2), starting the structured light generator, and enabling the structured light pattern to fall on the back of the human body;
step (3), two cameras acquire a left image and a right image of the back of the human body, which are projected with structured light patterns;
step (4) image stereo correction
The left and right images are stereo-rectified using standard techniques; stereo rectification aligns two images that are not strictly coplanar and row-aligned so that they become coplanar and row-aligned.
Step (5) obtaining matching points
5-1 Gray code values
Gray code pattern structured light is projected onto the back of the human body and encoded/decoded, so that every pixel captured by the two cameras obtains a Gray code value.
For decoding, the structured light of the encoded original Gray code pattern is first projected onto the back; the structured light of the inverted Gray code pattern is then projected onto the back in turn, and decoding is finally performed from the two captured patterns.
The inverted Gray code pattern is obtained by inverting the original Gray code pattern.
In the decoding stage, a simple dual-threshold segmentation is used: let I(x, y) be the gray value of the image at point (x, y), I+(x, y) the gray value when the original Gray code pattern is projected, and I-(x, y) the gray value when the inverted pattern is projected. If I+(x, y) < I-(x, y), the Gray code value at that coordinate is taken to be 0; otherwise it is 1.
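The dual-threshold rule, together with the Gray-to-binary conversion it feeds, can be sketched as follows. Image data here is plain nested lists and the helper names are illustrative, not from the patent.

```python
def decode_bit(img_pos, img_neg):
    """Dual-threshold rule per pixel: bit = 0 where I+ < I-, else 1.

    img_pos: gray values with the original Gray code pattern projected;
    img_neg: gray values with the inverted pattern projected.
    """
    return [[0 if p < n else 1 for p, n in zip(row_p, row_n)]
            for row_p, row_n in zip(img_pos, img_neg)]

def gray_to_index(bits):
    """Gray-code bit list (MSB first) -> stripe index (plain binary)."""
    value = 0
    for b in bits:
        # next binary bit = previous binary bit XOR current Gray bit
        value = (value << 1) | (b ^ (value & 1))
    return value
```

Running `decode_bit` once per projected bit plane gives each pixel its Gray code bits, and `gray_to_index` turns them into the stripe index used for matching.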
5-2 phase value
Structured light with N patterns is projected onto the back of the human body; each pattern corresponds to one phase value, and the phase period is N. The boundaries of the black and white stripes are extracted from each phase map: if pixel (x, y) lies on a boundary in the n-th phase map, the phase value corresponding to pixel (x, y) is n, with n ≤ N.
5-3 search for matching points
For each left-image point P(x, y), the pixels on the same row of the right image are traversed; the right-image point with the same Gray code value and phase value is the matching point.
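A minimal sketch of this row-wise traversal search, assuming per-pixel Gray code and phase values have already been decoded (the row data below is toy data):

```python
def find_match(code_left, phase_left, codes_row, phases_row):
    """Traverse one rectified right-image row and return the column of
    the first pixel whose (Gray code, phase) pair matches the left pixel."""
    for x_r, (c, p) in enumerate(zip(codes_row, phases_row)):
        if c == code_left and p == phase_left:
            return x_r
    return None  # no matching point: the left pixel is invalid

codes_row = [3, 3, 5, 5, 6]   # Gray code value per right-image column
phases_row = [1, 2, 1, 2, 1]  # phase value per right-image column
x_r = find_match(5, 2, codes_row, phases_row)
```

Because the images are rectified in step (4), the search can be restricted to the single row with the same y coordinate.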
Difference between structured light and ordinary binocular vision: for scenes with few texture features, plain binocular reconstruction performs poorly. Projecting structured light adds texture to the image, strengthens resistance to environmental interference, improves reliability, and markedly improves depth-map quality.
Step (6), acquiring a disparity map and a depth map:
6-1 Assume the left-image point P_L(x_l, y_l) has the matching point P_r(x_r, y_r) in the right image; the disparity of P_L is x_l - x_r. The disparity of every valid pixel in the left image is computed to obtain the disparity map.
The valid pixels are those that have a matching point in the right image.
6-2 Invalid pixels in the disparity map, i.e. pixels without a matching point, are eliminated by median filtering.
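Steps 6-1 and 6-2 can be sketched together on one image row; `matches[x_l]` is the right-image column found in step (5), or `None` for an invalid pixel (toy data, illustrative names):

```python
import statistics

def disparity_row(matches):
    """Disparity x_l - x_r per left pixel; None marks an invalid pixel."""
    return [x_l - x_r if x_r is not None else None
            for x_l, x_r in enumerate(matches)]

def median_fill(row, half_window=1):
    """Replace invalid disparities by the median of valid neighbours,
    mimicking median filtering of the disparity map."""
    out = list(row)
    for i, d in enumerate(row):
        if d is None:
            lo, hi = max(0, i - half_window), i + half_window + 1
            neighbours = [v for v in row[lo:hi] if v is not None]
            if neighbours:
                out[i] = statistics.median(neighbours)
    return out

row = disparity_row([0, None, 1])  # middle pixel has no match
filled = median_fill(row)
```

A production implementation would filter the full 2-D map with a square window, but the per-row version shows the mechanism.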
6-3, converting the disparity map into a depth map by a triangulation formula, specifically:
a) A pixel coordinate system O_0-uv is established on the left image.
b) An image coordinate system O-XY is established with the intersection of the camera optical axis and the image plane as origin.
c) A camera coordinate system is established with the camera optical center as origin, the camera optical axis as the Z axis, and the X and Y axes coincident with those of the image coordinate system.
d) The relation between the pixel coordinate system and the image coordinate system is constructed as:

\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\tag{1}
\]

where u and v denote the u axis and v axis of the pixel coordinate system; (u_0, v_0) is the principal point in pixel coordinates; (x, y) is a coordinate point in the image coordinate system; (X_c, Y_c, Z_c) is a coordinate point in the camera coordinate system; and dx, dy are the physical sizes of a pixel in the image coordinate system along the x axis and y axis.
e) The relation between the camera coordinate system and the image coordinate system is constructed through the projective perspective transformation as:

\[
Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
\tag{2}
\]

where f denotes the focal length of the left camera.
f) The relation between the camera coordinate system and the world coordinate system can be described by the rotation parameter R and the translation parameter T determined from the camera extrinsic parameters:

\[
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
=
\begin{bmatrix} R & T \\ 0^{\mathsf{T}} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\tag{3}
\]

where (X_w, Y_w, Z_w) is a coordinate point in the world coordinate system.
g) Chaining the four coordinate-system conversions yields the relation between the world coordinate system and the pixel coordinate system for a point imaged by a single camera:

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & T \\ 0^{\mathsf{T}} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\tag{4}
\]

where f_x, f_y are the calibrated intrinsic focal-length parameters of the left camera.
h) According to the matching-point information obtained in step (5), combined with formula (4), the three-dimensional coordinates of all valid left-image pixels in the world coordinate system are obtained.
i) Three-dimensional measurement of the back of the human body is carried out according to the parallax principle; the three-dimensional coordinates of a space point are:

\[
X_w = \frac{B\,x_i}{x_i - x_r}, \qquad
Y_w = \frac{B\,y_i}{x_i - x_r}, \qquad
Z_w = \frac{B\,f}{x_i - x_r}
\tag{5}
\]

where B is the baseline distance of the binocular camera, f is the camera focal length, (x_i, y_i) are the image coordinates of a valid left-camera pixel, and (x_r, y_r) are the image coordinates of its matching point in the right camera.
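The parallax computation above is a direct formula once a match is known. A sketch, with illustrative numbers (units are whatever the baseline and focal length are expressed in):

```python
def triangulate(x_i, y_i, x_r, baseline, focal):
    """World coordinates from the parallax formula:
    X = B*x_i/d, Y = B*y_i/d, Z = B*f/d, with disparity d = x_i - x_r."""
    d = x_i - x_r
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    return (baseline * x_i / d, baseline * y_i / d, baseline * focal / d)

point = triangulate(x_i=2.0, y_i=1.0, x_r=1.0, baseline=2.0, focal=4.0)
```

Note that depth is inversely proportional to disparity: halving the disparity doubles the recovered Z.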
Step (7) of obtaining the position information of the acupuncture points
7-1 obtaining a Back image Profile
The left image is preprocessed, and edge detection is then performed with the Canny operator to obtain the back-image contour map.
The preprocessing segments the left image with a watershed algorithm, which merges spatially adjacent pixels with similar gray values into one region; the contour is then segmented and extracted from the resulting regions.
7-2 Based on the back contour map, two distinctive feature points are located and their two-dimensional pixel coordinates obtained; the feature points are the widest and the narrowest points on the middle ridge line.
7-3 From the pixel coordinates of the two feature points, combined with formula (4), their three-dimensional coordinates are obtained.
7-4 From the three-dimensional coordinates of the feature points, the acupoint position information is obtained using the traditional Chinese medicine bone-length proportional measurement (bone cun) method.
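Once the two 3-D landmarks are known, the bone-cun method reduces to proportional interpolation along the segment between them. The sketch below uses linear interpolation with a made-up proportion; the actual cun proportions for specific acupoints come from TCM references, not from this code.

```python
def locate_acupoint(p1, p2, cun_offset, total_cun):
    """Point at cun_offset (out of total_cun) along the segment p1 -> p2.

    p1, p2: 3-D landmark coordinates (tuples); the cun values are the
    proportional bone-length measures of traditional Chinese medicine.
    """
    t = cun_offset / total_cun
    return tuple(a + t * (b - a) for a, b in zip(p1, p2))

# Hypothetical example: a point 3 cun along a 12-cun segment.
pt = locate_acupoint((0.0, 0.0, 0.0), (0.0, 12.0, 0.0), 3, 12)
```

Points lying off the landmark segment would additionally need a lateral offset in the local surface frame, which the reconstructed back model provides.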
The invention has the beneficial effects that:
1) according to the invention, the three-dimensional information of the back is reconstructed by adopting the two cameras and the structured light, so that the positioning accuracy is further improved.
2) The system serves as a medical instrument for automatically detecting and locating specific acupuncture points of the body, such as those of the head and back, assisting clinicians in accurate point location and reducing the probability of subjective misjudgment.
3) The invention combines coded structured light and binocular vision, effectively addresses the difficulty of reconstructing indoor white walls and untextured objects, strengthens resistance to environmental interference, improves reliability, and greatly improves depth-map quality.
Drawings
FIG. 1 is a schematic structural view of the present invention;
the labels in the figure are: 1. the system comprises a left camera, a 2-structure light generator, a 3-right camera and a 4-support frame;
FIG. 2 is a stripe pattern with Gray code encoding;
fig. 3(a) is a left image, and fig. 3(b) is a back image contour diagram.
Detailed Description
The present invention is further analyzed with reference to the following specific examples.
As shown in fig. 1, the acupuncture point positioning system based on the combination of binocular vision and coded structured light comprises a left camera 1 and a right camera 3 with identical parameters, a structured light generator 2, a support frame 4, a calibration board, and a main controller; the two cameras and the structured light generator are mounted above the support frame, with the structured light generator in front of the two cameras;
the structured light generator is used for projecting structured light onto the back of a human body, wherein the structured light pattern is a stripe pattern encoded with Gray codes, as in fig. 2;
the camera is used for capturing an image of the back of the human body projected with the structured light pattern;
the calibration plate is used for calibrating the camera;
the main controller is used for controlling the start of the structured light generator, encoding the structured light pattern for the generator, receiving the image information transmitted by the cameras, and transmitting it to a computer for data analysis.
An acupuncture point positioning method based on combination of binocular vision and coded structured light comprises the following steps:
step (1), camera calibration:
and calibrating by adjusting the direction of the calibration board or the camera by adopting a Zhang Zhengyou calibration method.
During calibration, corner points are extracted from the camera images as feature points; the distortion parameters are estimated by least squares under the actual radial-distortion condition, and maximum-likelihood optimization finally refines the result, yielding the rotation and translation parameters and the camera distortion parameters.
Step (2), starting the structured light generator, and enabling the structured light pattern to fall on the back of the human body;
step (3), two cameras acquire a left image and a right image of the back of the human body, which are projected with structured light patterns;
step (4) image stereo correction
The left and right images are stereo-rectified using standard techniques; stereo rectification aligns two images that are not strictly coplanar and row-aligned so that they become coplanar and row-aligned.
Step (5) obtaining matching points
5-1 Gray code values
Gray code pattern structured light is projected onto the back of the human body and encoded/decoded, so that every pixel captured by the two cameras obtains a Gray code value.
For decoding, the structured light of the encoded original Gray code pattern is first projected onto the back; the original pattern is then inverted to form the inverted Gray code pattern, whose structured light is projected onto the back in turn; decoding is finally performed from the two captured patterns.
In the decoding stage, a simple dual-threshold segmentation is used: let I(x, y) be the gray value of the image at point (x, y), I+(x, y) the gray value when the original Gray code pattern is projected, and I-(x, y) the gray value when the inverted pattern is projected. If I+(x, y) < I-(x, y), the Gray code value at that coordinate is taken to be 0; otherwise it is 1.
5-2 phase value
Structured light with N patterns is projected onto the back of the human body; each pattern corresponds to one phase value, and the phase period is N. The boundaries of the black and white stripes are extracted from each phase map: if pixel (x, y) lies on a boundary in the n-th phase map, the phase value corresponding to pixel (x, y) is n, with n ≤ N.
5-3 search for matching points
For each left-image point P(x, y), the pixels on the same row of the right image are traversed; the right-image point with the same Gray code value and phase value is the matching point.
Difference between structured light and ordinary binocular vision: for scenes with few texture features, plain binocular reconstruction performs poorly. Projecting structured light adds texture to the image, strengthens resistance to environmental interference, improves reliability, and markedly improves depth-map quality.
Step (6), acquiring a disparity map and a depth map:
6-1 Assume the left-image point P_L(x_l, y_l) has the matching point P_r(x_r, y_r) in the right image; the disparity of P_L is x_l - x_r. The disparity of every valid pixel in the left image is computed to obtain the disparity map.
The valid pixels are those that have a matching point in the right image.
6-2 Invalid pixels in the disparity map, i.e. pixels without a matching point, are eliminated by median filtering.
6-3, converting the disparity map into a depth map by a triangulation formula, specifically:
a) A pixel coordinate system O_0-uv is established on the left image.
b) An image coordinate system O-XY is established with the intersection of the camera optical axis and the image plane as origin.
c) A camera coordinate system is established with the camera optical center as origin, the camera optical axis as the Z axis, and the X and Y axes coincident with those of the image coordinate system.
d) The relation between the pixel coordinate system and the image coordinate system is constructed as:

\[
\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
\tag{1}
\]

where u and v denote the u axis and v axis of the pixel coordinate system; (u_0, v_0) is the principal point in pixel coordinates; (x, y) is a coordinate point in the image coordinate system; (X_c, Y_c, Z_c) is a coordinate point in the camera coordinate system; and dx, dy are the physical sizes of a pixel in the image coordinate system along the x axis and y axis.
e) The relation between the camera coordinate system and the image coordinate system is constructed through the projective perspective transformation as:

\[
Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
=
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
\tag{2}
\]

where f denotes the focal length of the left camera.
f) The relation between the camera coordinate system and the world coordinate system can be described by the rotation parameter R and the translation parameter T determined from the camera extrinsic parameters:

\[
\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix}
=
\begin{bmatrix} R & T \\ 0^{\mathsf{T}} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\tag{3}
\]

where (X_w, Y_w, Z_w) is a coordinate point in the world coordinate system.
g) Chaining the four coordinate-system conversions yields the relation between the world coordinate system and the pixel coordinate system for a point imaged by a single camera:

\[
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
=
\begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & T \\ 0^{\mathsf{T}} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
\tag{4}
\]

where f_x, f_y are the calibrated intrinsic focal-length parameters of the left camera.
h) According to the matching-point information obtained in step (5), combined with formula (4), the three-dimensional coordinates of all valid left-image pixels in the world coordinate system are obtained.
i) Three-dimensional measurement of the back of the human body is carried out according to the parallax principle; the three-dimensional coordinates of a space point are:

\[
X_w = \frac{B\,x_i}{x_i - x_r}, \qquad
Y_w = \frac{B\,y_i}{x_i - x_r}, \qquad
Z_w = \frac{B\,f}{x_i - x_r}
\tag{5}
\]

where B is the baseline distance of the binocular camera, f is the camera focal length, (x_i, y_i) are the image coordinates of a valid left-camera pixel, and (x_r, y_r) are the image coordinates of its matching point in the right camera.
Step (7) of obtaining the position information of the acupuncture points
7-1 obtaining a Back image Profile
The left image in fig. 3(a) is preprocessed, and edge detection is then performed with the Canny operator to obtain the back-image contour map of fig. 3(b).
The preprocessing segments the left image with a watershed algorithm, which merges spatially adjacent pixels with similar gray values into one region; the contour is then segmented and extracted from the resulting regions.
7-2 Based on the back contour map, two distinctive feature points are located and their two-dimensional pixel coordinates obtained; the feature points are the widest and the narrowest points on the middle ridge line.
7-3 From the pixel coordinates of the two feature points, combined with formula (4), their three-dimensional coordinates are obtained.
7-4 From the three-dimensional coordinates of the feature points, the acupoint position information is obtained using the traditional Chinese medicine bone-length proportional measurement (bone cun) method.
The above embodiments do not limit the present invention; all embodiments that satisfy the requirements of the present invention fall within its scope.

Claims (8)

1.基于双目视觉和编码结构光相结合的穴位定位方法,其特征在于包括以下步骤:1. the acupoint location method based on the combination of binocular vision and encoded structured light, is characterized in that comprising the following steps: 步骤(1)、两个相机进行标定;Step (1), two cameras are calibrated; 步骤(2)、开启结构光发生器,并使得结构光图案落在人体背部;Step (2), turn on the structured light generator, and make the structured light pattern fall on the back of the human body; 步骤(3)、两个相机获取左右两幅投射有结构光图案的人体背部图像;Step (3), the two cameras acquire the left and right images of the back of the human body projected with the structured light pattern; 步骤(4)、图像立体校正Step (4), image stereo correction 步骤(5)、获取匹配点Step (5), get matching points 5-1格雷码值5-1 Gray code value 向人体背部投射格雷码图案结构光进行编解码,使两个相机采集到的每个像素点获得格雷码值;The Gray code pattern structured light is projected to the back of the human body to encode and decode, so that each pixel point collected by the two cameras can obtain the Gray code value; 5-2相位值5-2 Phase Value 投射N幅图案的结构光在人体背部,每幅图案对应一个相位值,相位周期为N;对每一幅相位图提取黑白条纹的边界,如果在第n张相位图中像素点(x,y)提取到边界,即像素点(x,y)对应的相位值为n,n≤N;Project N patterns of structured light on the back of the human body, each pattern corresponds to a phase value, and the phase period is N; extract the boundaries of black and white stripes for each phase image, if the pixel point (x, y in the nth phase image) ) is extracted to the boundary, that is, the phase value corresponding to the pixel point (x, y) is n, n≤N; 5-3搜索匹配点5-3 Search for matching points 通过遍历搜寻法遍历右图像同一行的像素点,寻找左图像P(x,y)在右图像对应格雷码值和相位值相同的点,该点即为匹配点;Traverse the pixels in the same row of the right image by traversing the search method, and find the point where the left image P(x, y) corresponds to the same Gray code value and phase value in the right image, and this point is the matching point; 步骤(6)、获取视差图和深度图:Step (6), obtain disparity map and depth map: 6-1假设左图中点PL(xl,yl)在右图中的匹配点为Pr(xr,yr),点PL的对应的视差值为xl-xr;对左图中的每个有效像素点求视差值,获得视差图;6-1 Suppose that the matching point of the point PL (x l , y l ) in the left picture in the right 
picture is P r (x r , y r ), and the corresponding disparity value of the point PL is x l -x r ; Calculate the disparity value for each valid pixel in the left image to obtain the disparity map; 6-2对视差图中无效像素点,采用中值滤波的方式进行消除;6-2 Use median filtering to eliminate invalid pixels in the disparity map; 6-3通过三角测量公式将视差图转化为深度图;6-3 Convert the disparity map to a depth map through the triangulation formula; 步骤(7)、获取穴位位置信息Step (7), obtain acupoint location information 7-1对左图像获取背部图像轮廓图7-1 Obtain the contour map of the back image for the left image 7-2基于背部图像轮廓图找寻背部两个明显特征点,得到两个特征点的二维像素坐标;明显特征点为中脊线上最宽和最窄两点位置处;7-2 Find two obvious feature points on the back based on the contour map of the back image, and obtain the two-dimensional pixel coordinates of the two feature points; the obvious feature points are the positions of the widest and narrowest points on the middle ridge line; 7-3基于两个特征点的像素坐标,结合公式(4),进一步得到特征点的三维坐标;7-3 Based on the pixel coordinates of the two feature points, combined with formula (4), the three-dimensional coordinates of the feature points are further obtained; 7-4根据特征点的三维坐标,结合中医骨度分寸法,得到穴位位置信息。7-4 According to the three-dimensional coordinates of the feature points, combined with the traditional Chinese medicine bone degree method, the position information of the acupoints is obtained. 2.如权利要求1所述的基于双目视觉和编码结构光相结合的穴位定位方法,其特征在于步骤(2)结构光的图案是带格雷码编码的条纹图案。2 . The acupoint location method based on the combination of binocular vision and coded structured light as claimed in claim 1 , wherein the pattern of the structured light in step (2) is a striped pattern with Gray code coding. 3 . 3.如权利要求1所述的基于双目视觉和编码结构光相结合的穴位定位方法,其特征在于步骤(4)所述的立体校正是将两幅不严格共面对齐的两幅图像校准成共面且行对齐。3. the acupoint location method based on the combination of binocular vision and coded structured light as claimed in claim 1, it is characterized in that the stereo correction described in step (4) is to align two images that are not strictly coplanar Calibrated to be coplanar and row aligned. 
4. The acupoint positioning method based on the combination of binocular vision and coded structured light according to claim 1, characterized in that the decoding in step (5-1) is performed by projecting structured light carrying the original encoded Gray code pattern onto the back of the human body, then projecting structured light carrying the inverse Gray code pattern onto the back of the human body, and finally decoding by means of the two pattern images.
5. The acupoint positioning method based on the combination of binocular vision and coded structured light according to claim 4, characterized in that the inverse Gray code pattern is obtained by inverting the original Gray code pattern.
6. The acupoint positioning method based on the combination of binocular vision and coded structured light according to claim 4, characterized in that a simple dual-threshold segmentation method is adopted in the decoding stage: let I(x, y) be the gray value of the image at the point (x, y), I+(x, y) the gray value when the original Gray code pattern is projected, and I-(x, y) the gray value when the inverse Gray code pattern is projected; if I+(x, y) < I-(x, y), the Gray code value at that coordinate is taken to be 0, and otherwise 1.
7. The acupoint positioning method based on the combination of binocular vision and coded structured light according to claim 1, characterized in that step (6-3) is specifically:
a) establish the pixel coordinate system O0-uv on the left image;
b) establish the image coordinate system O-XY, taking the intersection of the camera optical axis and the image plane as the origin;
c) establish the camera coordinate system, taking the camera optical center as the origin and the camera optical axis as the Z axis, with the x axis and y axis identical to the X axis and Y axis of the image coordinate system;
d) the relationship between the pixel coordinate system and the image coordinate system is constructed as:
$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/dx & 0 & u_0 \\ 0 & 1/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{1}$$
where u and v denote the u axis and v axis of the pixel coordinate system, (u0, v0) is the pixel coordinate of the principal point (the origin of the image coordinate system), (x, y) is a coordinate point in the image coordinate system, (Xc, Yc, Zc) is a coordinate point in the camera coordinate system, and dx, dy are the physical sizes of one pixel along the x axis and y axis of the image coordinate system;
e) the relationship between the camera coordinate system and the image coordinate system is constructed through the perspective projection transformation as follows:
$$Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \tag{2}$$
where f denotes the focal length of the left camera;
f) the relationship between the camera coordinate system and the world coordinate system can be described by the rotation matrix R and the translation vector T determined by the camera extrinsic parameters, as follows:
$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{3}$$
where (Xw, Yw, Zw) denotes a coordinate point in the world coordinate system;
g) through the above four coordinate-system transformations, the conversion relationship between the world coordinate system and the pixel coordinate system of a point imaged by a single camera is obtained as follows:
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^{\mathrm{T}} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{4}$$
where fx and fy are the focal lengths (in pixel units) among the intrinsic parameters calibrated for the left camera;
h) according to the matching-point information obtained in step (5), combined with formula (4), obtain the three-dimensional coordinates in the world coordinate system of all valid pixels of the left image;
i) perform the three-dimensional measurement of the human back according to the disparity principle; the three-dimensional coordinates of a space point are:
$$X_w = \frac{B x_i}{x_i - x_r},\qquad Y_w = \frac{B y_i}{x_i - x_r},\qquad Z_w = \frac{B f}{x_i - x_r} \tag{5}$$
where B is the baseline distance of the binocular cameras, f is the camera focal length, (xi, yi) are the coordinates of a valid left-camera pixel point in the image coordinate system, and (xr, yr) are the coordinates of the right-camera matching point corresponding to (xi, yi).
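Formula (5) is direct to implement. A minimal numpy sketch, assuming rectified images and image-plane coordinates expressed in the same metric units as the focal length; the numbers below are arbitrary illustration values, not data from the patent:

```python
import numpy as np

def triangulate(xi, yi, xr, B, f):
    """3-D point from a rectified stereo match via formula (5):
    X = B*xi/d, Y = B*yi/d, Z = B*f/d, with disparity d = xi - xr.
    Image coordinates and f must share the same units."""
    d = xi - xr
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    return np.array([B * xi / d, B * yi / d, B * f / d])

# Example: baseline 60 mm, f = 8 mm, left point (2, 1) mm, right x = 1.6 mm
P = triangulate(2.0, 1.0, 1.6, B=60.0, f=8.0)   # disparity d = 0.4 mm
```

If the coordinates are kept in pixels instead, the same formulas apply with f replaced by the pixel focal length, assuming square pixels.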
8. An acupoint positioning system using the method according to any one of claims 1-7, characterized by comprising two identical cameras, a structured light generator, a support frame, a calibration board, and a main controller; the two cameras and the structured light generator are mounted on the top of the support frame, with the structured light generator in front of the two cameras;
the structured light generator is used to project structured light onto the back of the human body, the pattern of the structured light being a stripe pattern with Gray code encoding;
the cameras are used to capture images of the back of the human body onto which the structured light pattern is projected;
the calibration board is used for camera calibration;
the main controller is used to control the switching-on of the structured light generator, to encode and set the structured light pattern of the generator, and to receive the image information transmitted by the cameras and forward it to a computer for data analysis.
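The projector-camera round trip described by claims 2 and 4-6 — project each Gray-code bit plane and its inverse, then decode by per-pixel comparison — can be exercised end to end in a loop-back sketch where the projector and cameras are idealized away and the "captured" images are exactly the projected stripes. The width of 16 columns and the 4 bit planes are arbitrary example values, not parameters from the patent:

```python
import numpy as np

def gray_code_patterns(n_bits, width):
    """Gray-code stripe bit planes and their inverses (claims 2, 4-5)."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                       # Gray code of each column index
    planes = [((gray >> b) & 1).astype(np.uint8) * 255
              for b in range(n_bits - 1, -1, -1)]   # MSB plane first
    return [(p, 255 - p) for p in planes]           # (original, inverse) pairs

def decode_columns(pattern_pairs):
    """Dual-threshold decoding (claim 6): bit = 0 where I+ < I-, else 1,
    followed by Gray-to-binary conversion to recover each stripe index."""
    value = None
    acc = None
    for pos, neg in pattern_pairs:
        bit = (pos >= neg).astype(int)              # claim 6 comparison
        acc = bit if acc is None else acc ^ bit     # Gray -> binary prefix XOR
        value = acc if value is None else (value << 1) | acc
    return value

# Loop-back check: decoding the projected patterns must recover columns 0..15.
index = decode_columns(gray_code_patterns(4, 16))
```

In the real system the two inputs of each comparison come from camera frames of the back under the original and inverted projections, so the comparison cancels out the surface albedo instead of needing a fixed global threshold.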
CN202011308196.2A 2020-11-20 2020-11-20 Acupuncture point positioning system and method based on combination of binocular vision and coded structured light Active CN112509055B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011308196.2A CN112509055B (en) 2020-11-20 2020-11-20 Acupuncture point positioning system and method based on combination of binocular vision and coded structured light


Publications (2)

Publication Number Publication Date
CN112509055A true CN112509055A (en) 2021-03-16
CN112509055B CN112509055B (en) 2022-05-03

Family

ID=74959064

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011308196.2A Active CN112509055B (en) 2020-11-20 2020-11-20 Acupuncture point positioning system and method based on combination of binocular vision and coded structured light

Country Status (1)

Country Link
CN (1) CN112509055B (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991437A (en) * 2021-04-08 2021-06-18 上海盛益精密机械有限公司 Full-automatic acupuncture point positioning method based on image expansion and contraction technology
CN113129430A (en) * 2021-04-02 2021-07-16 中国海洋大学 Underwater three-dimensional reconstruction method based on binocular structured light
CN113538548A (en) * 2021-06-24 2021-10-22 七海测量技术(深圳)有限公司 A 3D inspection system and method for semiconductor solder balls
CN113689326A (en) * 2021-08-06 2021-11-23 西南科技大学 Three-dimensional positioning method based on two-dimensional image segmentation guidance
CN114519674A (en) * 2022-01-18 2022-05-20 贵州省质安交通工程监控检测中心有限责任公司 Slope stability analysis system and method based on machine vision
CN114812429A (en) * 2022-03-06 2022-07-29 南京理工大学 Binocular vision metal gear three-dimensional appearance measuring device and method based on Gray code structured light

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016037486A1 (en) * 2014-09-10 2016-03-17 深圳大学 Three-dimensional imaging method and system for human body
CN108020175A (en) * 2017-12-06 2018-05-11 天津中医药大学 A kind of more optical grating projection binocular vision tongue body surface three dimension entirety imaging methods
CN108340371A (en) * 2018-01-29 2018-07-31 珠海市俊凯机械科技有限公司 Target follows localization method and system a little
CN109191509A (en) * 2018-07-25 2019-01-11 广东工业大学 A kind of virtual binocular three-dimensional reconstruction method based on structure light
CN111028295A (en) * 2019-10-23 2020-04-17 武汉纺织大学 A 3D imaging method based on encoded structured light and binocular


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
QI ZHOU: "Combing structured light measurement technology with binocular stereo vision", 《2017 IEEE 2ND INTERNATIONAL CONFERENCE ON OPTO-ELECTRONIC INFORMATION PROCESSING (ICOIP)》 *
戴红芬等: "基于增强现实和双目视觉技术的针灸辅助系统", 《自动化技术与应用》 *
王兵等: "基于格雷码和多步相移法的双目立体视觉三维测量技术研究", 《计算机测量与控制》 *


Also Published As

Publication number Publication date
CN112509055B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
CN112509055B (en) Acupuncture point positioning system and method based on combination of binocular vision and coded structured light
US7953271B2 (en) Enhanced object reconstruction
CN108564041B (en) Face detection and restoration method based on RGBD camera
CN111028295A (en) A 3D imaging method based on encoded structured light and binocular
CN110487216A (en) A kind of fringe projection 3-D scanning method based on convolutional neural networks
CN113129430B (en) Underwater three-dimensional reconstruction method based on binocular structured light
CN110458874B (en) Image non-rigid registration method and system
CN103106688A (en) Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN107154014A (en) A kind of real-time color and depth Panorama Mosaic method
CN109724537B (en) Binocular three-dimensional imaging method and system
CN110633005A (en) An Optical Marker-Free 3D Human Motion Capture Method
CN109579695A (en) A kind of parts measurement method based on isomery stereoscopic vision
CN111160232A (en) Front face reconstruction method, device and system
Wang et al. Robust motion estimation and structure recovery from endoscopic image sequences with an adaptive scale kernel consensus estimator
CN107374638A (en) A kind of height measuring system and method based on binocular vision module
CN110060304A (en) A kind of organism three-dimensional information acquisition method
CN116958419A (en) A binocular stereo vision three-dimensional reconstruction system and method based on wavefront coding
CN116309829B (en) Cuboid scanning body group decoding and pose measuring method based on multi-view vision
CN116883471A (en) Line structured light contactless point cloud registration method for percutaneous puncture of chest and abdomen
CN113409242A (en) Intelligent monitoring method for point cloud of rail intersection bow net
CN106447734A (en) Intelligent mobile phone camera calibration algorithm adopting human face calibration object
CN113781305A (en) Point cloud fusion method of double-monocular three-dimensional imaging system
CN116597488A (en) Face recognition method based on Kinect database
Lacher et al. Low-cost surface reconstruction for aesthetic results assessment and prediction in breast cancer surgery
WO2023165451A1 (en) Three-dimensional model creation method, endoscope, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant