
CN108537834B - A depth image-based volume measurement method, system and depth camera - Google Patents


Info

Publication number
CN108537834B
CN108537834B (application CN201810225912.7A)
Authority
CN
China
Prior art keywords
measured
point cloud
depth camera
axis
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810225912.7A
Other languages
Chinese (zh)
Other versions
CN108537834A (en)
Inventor
侯方超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Aixin Intelligent Technology Co ltd
Original Assignee
Hangzhou Aixin Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Aixin Intelligent Technology Co ltd filed Critical Hangzhou Aixin Intelligent Technology Co ltd
Priority to CN201810225912.7A
Publication of CN108537834A
Application granted
Publication of CN108537834B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10028: Range image; Depth image; 3D point clouds

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention belongs to the technical field of logistics and volume measurement, and particularly relates to a depth image-based volume measurement method and system and a depth camera. The method comprises the following steps: S1, acquiring a scene depth map containing the object to be measured to obtain scene point cloud coordinates; S2, transforming the scene point cloud coordinates to obtain scene point cloud coordinates under a depth camera coordinate system; S3, processing the scene point cloud coordinates under the depth camera coordinate system to obtain a coordinate set of the object to be measured; and S4, calculating the length, width and height of the object to be measured from the coordinate set and multiplying them to obtain its volume. Compared with existing logistics volume measurement schemes, the method can be implemented in hardware with an ordinary off-the-shelf depth camera at low cost, and can accurately measure the volume of the object to be measured in real time even when the camera is tilted.

Description

Volume measurement method and system based on depth image and depth camera
Technical Field
The invention belongs to the technical field of logistics and volume measurement, and particularly relates to a depth image-based volume measurement method and system and a depth camera.
Background
In recent years, with the rapid development of economic globalization, large quantities of goods flow frequently between regions. In particular, with the rise of electronic commerce brought about by the information technology revolution, the logistics industry has developed rapidly and competition among logistics enterprises has intensified; reducing labor costs and delivering express parcels to their destinations efficiently are key to gaining a competitive advantage.
In logistics and warehouse management, the volume of an item is of great importance for optimizing receiving, warehousing, picking, packaging and shipping at a logistics center. Automatic, accurate measurement of item size and volume can therefore greatly improve the efficiency of warehouse logistics and the intelligence and automation level of a logistics system.
Most existing volume measuring devices are based on light-curtain or line-laser scanning and can calculate volume only in combination with a conveyor-belt encoder. Although mature, this technology is expensive and its system complexity is high.
Disclosure of Invention
To address the defects of the prior art, the invention provides a depth image-based volume measurement method, a depth image-based volume measurement system, and a depth camera. Compared with existing logistics volume measurement schemes, the method can be implemented in hardware with an ordinary off-the-shelf depth camera at low cost, and can accurately measure the volume of the object to be measured in real time even when the camera is tilted.
In a first aspect, the present invention provides a depth image-based volume measurement method, including the following steps:
S1, acquiring a scene depth map containing the object to be measured to obtain scene point cloud coordinates;
S2, transforming the scene point cloud coordinates to obtain scene point cloud coordinates under a depth camera coordinate system;
S3, processing the scene point cloud coordinates under the depth camera coordinate system to obtain a coordinate set of the object to be measured;
and S4, calculating the length, width and height of the object to be measured according to the coordinate set of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured.
Preferably, step S2 is specifically:
S21, setting a reference plane in the scene depth map;
S22, calculating the tilt attitude data of the depth camera according to the reference plane;
S23, transforming the scene point cloud coordinates according to the tilt attitude data to obtain the scene point cloud coordinates under the depth camera coordinate system.
Preferably, S22 is specifically:
S221, setting a range of angles between the X axis and the Y axis of the depth camera and the normal of the reference plane, the range containing a number of X-axis angles θ_x and Y-axis angles θ_y;
S222, traversing each X-axis angle and each Y-axis angle, and transforming the Z_CK coordinates of the reference plane using a coordinate transformation formula to obtain a number of transformed Z_CK coordinates, where the transformation formula is:
Z' = Y_0*sin θ_x + Z_0*cos θ_x
Z_CK = Z'*cos θ_y - X_0*sin θ_y
where X_0, Y_0, Z_0 are the original coordinate points of the reference plane and Z_CK are the transformed coordinates;
S223, calculating the mean Zmean and the minimum variance Zsigma over all transformed Z_CK coordinates;
S224, taking the X-axis angle θ_x corresponding to the minimum variance Zsigma as the X-axis tilt angle α_x of the depth camera and the corresponding Y-axis angle θ_y as the Y-axis tilt angle α_y, thereby obtaining the tilt attitude data: the Z_CK coordinate mean Zmean, the minimum variance Zsigma, the X-axis tilt angle α_x and the Y-axis tilt angle α_y.
Preferably, S23 is specifically:
transforming the scene point cloud coordinates according to the X-axis tilt angle α_x and the Y-axis tilt angle α_y using a transformation formula, to obtain the scene point cloud coordinates under the depth camera coordinate system, where the transformation formula is:
Z'_i = Y_io*sin α_x + Z_io*cos α_x
X_i = Z'_i*sin α_y + X_io*cos α_y
Y_i = Y_io*cos α_y - Z_io*sin α_y
Z_i = Z'_i*cos α_y - X_io*sin α_y
where X_io, Y_io, Z_io are the original scene point cloud coordinates and X_i, Y_i, Z_i are the scene point cloud coordinates under the depth camera coordinate system.
Preferably, S3 is specifically:
screening, according to a screening formula, the X_i, Y_i, Z_i coordinate point set of the object to be measured that satisfies the condition from the scene point cloud coordinates under the depth camera coordinate system, where the screening formula is:
Z_i - Zmean > N*Zsigma, where N is a positive number.
Preferably, S4 is specifically:
S41, according to a preset grid precision, projecting the X_i, Y_i coordinate points of the object to be measured onto the corresponding grid cells in the reference plane, labeling the connected regions of the grid, and counting the size of each connected region;
S42, selecting the X_i, Y_i coordinate point set corresponding to the connected region with the largest area, and computing by principal component analysis the minimum circumscribed rectangle of the selected X_i, Y_i coordinate points, to obtain the length and width of the projection of the object to be measured in the reference plane;
S43, computing the maximum difference between Z_i and Zmean to obtain the height of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured.
In a second aspect, the present invention provides a depth image-based volume measurement system suitable for the depth image-based volume measurement method of the first aspect, including:
a scene acquisition unit, used for acquiring a scene depth map containing the object to be measured to obtain scene point cloud coordinates;
a coordinate transformation unit, used for transforming the scene point cloud coordinates to obtain the scene point cloud coordinates under the depth camera coordinate system;
an object extraction unit, used for processing the scene point cloud coordinates under the depth camera coordinate system to obtain a coordinate set of the object to be measured;
and a volume calculation unit, used for calculating the length, width and height of the object to be measured according to the coordinate set of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured.
In a third aspect, the present invention provides a depth camera comprising a processor, an input device, an output device and a memory, the processor, the input device, the output device and the memory being interconnected, the memory being configured to store a computer program comprising program instructions, the processor being configured to invoke the program instructions to perform the method according to the first aspect.
The beneficial effects of the invention are: compared with existing logistics volume measurement schemes, the method can be implemented in hardware with an ordinary off-the-shelf depth camera at low cost, and can accurately measure the volume of the object to be measured in real time even when the camera is tilted.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
FIG. 1 is a flow chart of a depth image-based volume measurement method according to the present embodiment;
fig. 2 is a structural diagram of the depth image-based volume measurement system according to the present embodiment.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
The first embodiment is as follows:
the present embodiment provides a depth image-based volume measurement method, as shown in fig. 1, including the following four steps S1, S2, S3, and S4:
and S1, acquiring a scene depth map containing the object to be detected to obtain scene point cloud coordinates. The scene depth map acquired by the embodiment can adopt an optical flight time principle, a structured light principle, a binocular distance measurement principle and the like. A depth map, i.e. a coordinate set of the Z-axis of depth, also called a range image, is an image in which the distance (depth) from an image grabber to each point in a scene is taken as a pixel value, and directly reflects the geometry of the visible surface of each object in the scene. The depth map can be calculated into scene point cloud data through coordinate conversion, the image collector of the embodiment is a depth camera, and the depth camera can be used for collecting the depth map and can apply an optical flight time principle, a structured light principle, a binocular distance measurement principle and the like.
And S2, transforming the scene point cloud coordinates to obtain the scene point cloud coordinates under the depth camera coordinate system.
The step S2 specifically includes three steps S21, S22, and S23:
and S21, setting a reference plane in the scene depth map.
S22, calculating the tilt posture data of the depth camera according to the reference plane. The S22 specifically includes four steps of S221, S222, S223, and S224:
S221, setting a range of angles between the X axis and the Y axis of the depth camera and the normal of the reference plane, the range containing a number of X-axis angles θ_x and Y-axis angles θ_y;
S222, traversing each X-axis angle and each Y-axis angle, and transforming the Z_CK coordinates of the reference plane using a coordinate transformation formula to obtain a number of transformed Z_CK coordinates, where the transformation formula is:
Z' = Y_0*sin θ_x + Z_0*cos θ_x
Z_CK = Z'*cos θ_y - X_0*sin θ_y
where X_0, Y_0, Z_0 are the original coordinate points of the reference plane and Z_CK are the transformed coordinates;
S223, calculating the mean Zmean and the minimum variance Zsigma over all transformed Z_CK coordinates;
S224, taking the X-axis angle θ_x corresponding to the minimum variance Zsigma as the X-axis tilt angle α_x of the depth camera and the corresponding Y-axis angle θ_y as the Y-axis tilt angle α_y, thereby obtaining the tilt attitude data: the Z_CK coordinate mean Zmean, the minimum variance Zsigma, the X-axis tilt angle α_x and the Y-axis tilt angle α_y.
In this embodiment, the X-axis angle, the Y-axis angle, and the distance between the depth camera and the reference plane are obtained according to step S22.
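Steps S221 to S224 can be sketched as a brute-force search over candidate angle pairs, keeping the pair whose transformed reference-plane Z_CK coordinates have minimum variance. The search range of plus or minus 30 degrees and the 1-degree step are illustrative assumptions, not values from the text:

```python
import numpy as np

def estimate_tilt(plane_pts, angles_deg=np.arange(-30.0, 31.0)):
    """Grid-search the depth camera tilt (steps S221-S224).

    plane_pts: (N, 3) array of reference-plane points (X0, Y0, Z0).
    angles_deg: candidate angles for both axes (assumed range/step).
    Returns (alpha_x, alpha_y, Zmean, Zsigma), angles in radians."""
    X0, Y0, Z0 = plane_pts.T
    best = None
    for tx in np.deg2rad(angles_deg):
        zp = Y0 * np.sin(tx) + Z0 * np.cos(tx)         # Z' = Y0*sin(th_x) + Z0*cos(th_x)
        for ty in np.deg2rad(angles_deg):
            zck = zp * np.cos(ty) - X0 * np.sin(ty)    # Zck = Z'*cos(th_y) - X0*sin(th_y)
            var = float(zck.var())
            if best is None or var < best[0]:
                best = (var, tx, ty, float(zck.mean()))
    zsigma, alpha_x, alpha_y, zmean = best
    return alpha_x, alpha_y, zmean, zsigma
```

The idea is that at the correct tilt the reference plane becomes level, so its transformed Z_CK values collapse to a near-constant (minimum variance); the surviving mean Zmean is then the camera-to-plane distance mentioned above.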
And S23, transforming the scene point cloud coordinates according to the tilt attitude data to obtain the scene point cloud coordinates under the depth camera coordinate system. The method comprises the following specific steps:
transforming the scene point cloud coordinates according to the X-axis tilt angle α_x and the Y-axis tilt angle α_y using a transformation formula, to obtain the scene point cloud coordinates under the depth camera coordinate system, where the transformation formula is:
Z'_i = Y_io*sin α_x + Z_io*cos α_x
X_i = Z'_i*sin α_y + X_io*cos α_y
Y_i = Y_io*cos α_y - Z_io*sin α_y
Z_i = Z'_i*cos α_y - X_io*sin α_y
where X_io, Y_io, Z_io are the original scene point cloud coordinates and X_i, Y_i, Z_i are the scene point cloud coordinates under the depth camera coordinate system.
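The S23 transformation above can be applied directly to the whole cloud; a vectorized sketch (the function name is an assumption, and the four formulas are transcribed exactly as given in the text):

```python
import numpy as np

def transform_cloud(pts, alpha_x, alpha_y):
    """Apply the S23 transformation to the scene point cloud.

    pts: (N, 3) original coordinates (Xio, Yio, Zio); angles in radians."""
    Xo, Yo, Zo = pts.T
    Zp = Yo * np.sin(alpha_x) + Zo * np.cos(alpha_x)   # Z'_i
    Xi = Zp * np.sin(alpha_y) + Xo * np.cos(alpha_y)
    Yi = Yo * np.cos(alpha_y) - Zo * np.sin(alpha_y)
    Zi = Zp * np.cos(alpha_y) - Xo * np.sin(alpha_y)
    return np.stack([Xi, Yi, Zi], axis=1)
```

With α_x = α_y = 0 the transformation reduces to the identity, which gives a quick sanity check.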
And S3, processing the scene point cloud coordinates under the depth camera coordinate system to obtain a coordinate set of the object to be measured. The method comprises the following specific steps:
screening, according to a screening formula, the X_i, Y_i, Z_i coordinate point set of the object to be measured that satisfies the condition from the scene point cloud coordinates under the depth camera coordinate system, where the screening formula is:
Z_i - Zmean > N*Zsigma, where N is a positive number.
In this embodiment, in step S3, the coordinate data of the object to be measured is extracted to remove the relevant information of other objects in the scene.
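The screening step amounts to a one-line mask over the transformed cloud. A minimal sketch, where the function name and the default N = 3 are assumptions (the text only requires N to be positive):

```python
import numpy as np

def extract_object(cloud, zmean, zsigma, n=3.0):
    """Keep the points satisfying the S3 screening formula
    Zi - Zmean > N * Zsigma; reference-plane and background points,
    whose Z stays within the plane's statistics, are discarded."""
    mask = (cloud[:, 2] - zmean) > n * zsigma
    return cloud[mask]
```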
And S4, calculating the length, width and height of the object to be measured according to the coordinate set of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured. S4 specifically includes three steps S41, S42 and S43:
S41, according to the preset grid precision, projecting the X_i, Y_i coordinate points of the object to be measured onto the corresponding grid cells in the reference plane, labeling the connected regions of the grid, and counting the size of each connected region;
S42, selecting the X_i, Y_i coordinate point set corresponding to the connected region with the largest area, and computing by principal component analysis the minimum circumscribed rectangle of the selected X_i, Y_i coordinate points, to obtain the length and width of the projection of the object to be measured in the reference plane;
S43, computing the maximum difference between Z_i and Zmean to obtain the height of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured.
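The three sub-steps can be sketched end to end: rasterize the object's X_i, Y_i points onto a grid, flood-fill to label connected regions, bound the largest region's points with a PCA-aligned rectangle, and take the height from the maximum Z deviation. The grid resolution, the 4-connectivity, and the function name are illustrative assumptions:

```python
import numpy as np

def measure_volume(obj_pts, zmean, cell=0.01):
    """Sketch of S41-S43 on the screened object points (N, 3).

    cell is the preset grid precision (an assumed default)."""
    xy = obj_pts[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)   # S41: grid cells
    occ = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    occ[ij[:, 0], ij[:, 1]] = True
    labels = np.zeros(occ.shape, dtype=int)                   # 4-connected labeling
    n_labels = 0
    for seed in zip(*np.nonzero(occ)):
        if labels[seed]:
            continue
        n_labels += 1
        stack = [seed]
        while stack:
            a, b = stack.pop()
            if not (0 <= a < occ.shape[0] and 0 <= b < occ.shape[1]):
                continue
            if not occ[a, b] or labels[a, b]:
                continue
            labels[a, b] = n_labels
            stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    sizes = np.bincount(labels.ravel())[1:]                   # S42: largest region
    keep = labels[ij[:, 0], ij[:, 1]] == (1 + sizes.argmax())
    sel = xy[keep] - xy[keep].mean(axis=0)
    _, _, vt = np.linalg.svd(sel, full_matrices=False)        # PCA axes of projection
    proj = sel @ vt.T                                         # rotate into principal frame
    length, width = proj.max(axis=0) - proj.min(axis=0)       # bounding rectangle sides
    height = np.abs(obj_pts[keep][:, 2] - zmean).max()        # S43: max |Zi - Zmean|
    return length * width * height, (length, width, height)
```

Rotating the centered points into their principal frame and taking the axis-aligned extents gives the PCA circumscribed rectangle; for roughly box-shaped parcels this closely matches the minimum circumscribed rectangle the text describes.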
In summary, compared with existing logistics volume measurement schemes, the method can be implemented in hardware with an ordinary off-the-shelf depth camera at low cost, and can accurately measure the volume of the object to be measured in real time even when the camera is tilted. Because the camera tilt attitude is calibrated in advance, only a few key steps (coordinate multiplication, connected-region labeling, principal component analysis and the like) are needed at run time to compute the length, width and height of the object to be measured and hence its volume, so very good real-time performance can be achieved. This embodiment tolerates the tilt introduced when the camera is installed, which makes installation easier and extends the measurement range. The point clouds output by many high-precision depth cameras on the market are unstructured; since the method does not require the point cloud data (point cloud coordinates) to be structured, selecting measurement equipment is easier.
Example two:
the present embodiment provides a depth image-based volume measurement system, as shown in fig. 2, including:
the scene acquisition unit is used for acquiring a scene depth map containing the object to be detected to obtain a scene point cloud coordinate;
the coordinate transformation unit is used for transforming the scene point cloud coordinates to obtain the scene point cloud coordinates under the depth camera coordinate system;
the device comprises a to-be-detected object extracting unit, a depth camera coordinate system acquiring unit and a depth camera coordinate system acquiring unit, wherein the to-be-detected object extracting unit is used for processing scene point cloud coordinates under the depth camera coordinate system to obtain a coordinate set of the to-be-detected object;
and the volume calculation unit is used for calculating the length, the width and the height of the object to be measured according to the coordinate set of the object to be measured, and multiplying the length, the width and the height to obtain the volume of the object to be measured.
The system is suitable for the depth image-based volume measurement method described in the first embodiment, and as shown in fig. 1, includes the following four steps S1, S2, S3, and S4:
and S1, acquiring a scene depth map containing the object to be detected to obtain scene point cloud coordinates. The scene depth map acquired by the embodiment can adopt an optical flight time principle, a structured light principle, a binocular distance measurement principle and the like. A depth map, i.e. a coordinate set of the Z-axis of depth, also called a range image, is an image in which the distance (depth) from an image grabber to each point in a scene is taken as a pixel value, and directly reflects the geometry of the visible surface of each object in the scene. The depth map can be calculated into scene point cloud data through coordinate conversion, the image collector of the embodiment is a depth camera, and the depth camera can be used for collecting the depth map and can apply an optical flight time principle, a structured light principle, a binocular distance measurement principle and the like.
And S2, transforming the scene point cloud coordinates to obtain the scene point cloud coordinates under the depth camera coordinate system.
The step S2 specifically includes three steps S21, S22, and S23:
and S21, setting a reference plane in the scene depth map.
S22, calculating the tilt posture data of the depth camera according to the reference plane. The S22 specifically includes four steps of S221, S222, S223, and S224:
S221, setting a range of angles between the X axis and the Y axis of the depth camera and the normal of the reference plane, the range containing a number of X-axis angles θ_x and Y-axis angles θ_y;
S222, traversing each X-axis angle and each Y-axis angle, and transforming the Z_CK coordinates of the reference plane using a coordinate transformation formula to obtain a number of transformed Z_CK coordinates, where the transformation formula is:
Z' = Y_0*sin θ_x + Z_0*cos θ_x
Z_CK = Z'*cos θ_y - X_0*sin θ_y
where X_0, Y_0, Z_0 are the original coordinate points of the reference plane and Z_CK are the transformed coordinates;
S223, calculating the mean Zmean and the minimum variance Zsigma over all transformed Z_CK coordinates;
S224, taking the X-axis angle θ_x corresponding to the minimum variance Zsigma as the X-axis tilt angle α_x of the depth camera and the corresponding Y-axis angle θ_y as the Y-axis tilt angle α_y, thereby obtaining the tilt attitude data: the Z_CK coordinate mean Zmean, the minimum variance Zsigma, the X-axis tilt angle α_x and the Y-axis tilt angle α_y.
In this embodiment, the X-axis angle, the Y-axis angle, and the distance between the depth camera and the reference plane are obtained according to step S22.
And S23, transforming the scene point cloud coordinates according to the tilt attitude data to obtain the scene point cloud coordinates under the depth camera coordinate system. The method comprises the following specific steps:
transforming the scene point cloud coordinates according to the X-axis tilt angle α_x and the Y-axis tilt angle α_y using a transformation formula, to obtain the scene point cloud coordinates under the depth camera coordinate system, where the transformation formula is:
Z'_i = Y_io*sin α_x + Z_io*cos α_x
X_i = Z'_i*sin α_y + X_io*cos α_y
Y_i = Y_io*cos α_y - Z_io*sin α_y
Z_i = Z'_i*cos α_y - X_io*sin α_y
where X_io, Y_io, Z_io are the original scene point cloud coordinates and X_i, Y_i, Z_i are the scene point cloud coordinates under the depth camera coordinate system.
And S3, processing the scene point cloud coordinates under the depth camera coordinate system to obtain a coordinate set of the object to be measured. The method comprises the following specific steps:
screening, according to a screening formula, the X_i, Y_i, Z_i coordinate point set of the object to be measured that satisfies the condition from the scene point cloud coordinates under the depth camera coordinate system, where the screening formula is:
Z_i - Zmean > N*Zsigma, where N is a positive number.
In this embodiment, in step S3, the coordinate data of the object to be measured is extracted to remove the relevant information of other objects in the scene.
And S4, calculating the length, width and height of the object to be measured according to the coordinate set of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured. S4 specifically includes three steps S41, S42 and S43:
S41, according to the preset grid precision, projecting the X_i, Y_i coordinate points of the object to be measured onto the corresponding grid cells in the reference plane, labeling the connected regions of the grid, and counting the size of each connected region;
S42, selecting the X_i, Y_i coordinate point set corresponding to the connected region with the largest area, and computing by principal component analysis the minimum circumscribed rectangle of the selected X_i, Y_i coordinate points, to obtain the length and width of the projection of the object to be measured in the reference plane;
S43, computing the maximum difference between Z_i and Zmean to obtain the height of the object to be measured, and multiplying the length, width and height to obtain the volume of the object to be measured.
In summary, compared with existing logistics volume measurement schemes, the method can be implemented in hardware with an ordinary off-the-shelf depth camera at low cost, and can accurately measure the volume of the object to be measured in real time even when the camera is tilted. Because the camera tilt attitude is calibrated in advance, only a few key steps (coordinate multiplication, connected-region labeling, principal component analysis and the like) are needed at run time to compute the length, width and height of the object to be measured and hence its volume, so very good real-time performance can be achieved. This embodiment tolerates the tilt introduced when the camera is installed, which makes installation easier and extends the measurement range. The point clouds output by many high-precision depth cameras on the market are unstructured; since the method does not require the point cloud data (point cloud coordinates) to be structured, selecting measurement equipment is easier.
Those of ordinary skill in the art will appreciate that the units and method steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of both. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above in general functional terms. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints of the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the several embodiments provided in the present application, it should be understood that the disclosed method and system may be implemented in other ways. For example, the above division of elements is merely a logical division, and other divisions may be realized, for example, multiple elements or components may be combined or integrated into another system, or some features may be omitted, or not executed. The units may or may not be physically separate, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment of the present invention.
Example three:
the embodiment provides a depth camera, which comprises a processor, an input device, an output device and a memory, wherein the processor, the input device, the output device and the memory are connected with each other, the memory is used for storing a computer program, the computer program comprises program instructions, and the processor is configured to call the program instructions and execute the method of the first embodiment.
Compared with existing logistics volume measurement schemes, the method can be implemented in hardware with an ordinary off-the-shelf depth camera at low cost, and can accurately measure the volume of the object to be measured in real time even when the camera is tilted. Because the camera tilt attitude is calibrated in advance, only a few key steps (coordinate multiplication, connected-region labeling, principal component analysis and the like) are needed at run time to compute the length, width and height of the object to be measured and hence its volume, so very good real-time performance can be achieved. This embodiment tolerates the tilt introduced when the camera is installed, which makes installation easier and extends the measurement range. The point clouds output by many high-precision depth cameras on the market are unstructured; since the method does not require the point cloud data (point cloud coordinates) to be structured, selecting measurement equipment is easier.
It should be understood that in this embodiment the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device may include an image capture device, and the output device may include a display (an LCD or the like), speakers, and so on.
The memory may include read-only memory and random access memory, and it provides instructions and data to the processor. A portion of the memory may also include non-volatile random access memory; for example, the memory may also store device type information.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced with equivalents; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should be construed as falling within the scope of the claims and description.

Claims (7)

1. A depth-image-based volume measurement method, characterized by comprising the following steps:
S1, acquiring a scene depth map containing an object to be measured, and obtaining scene point cloud coordinates;
S2, transforming the scene point cloud coordinates to obtain the scene point cloud coordinates in the depth camera coordinate system;
S3, processing the scene point cloud coordinates in the depth camera coordinate system to obtain a coordinate set of the object to be measured;
S4, calculating the length, width, and height of the object to be measured according to its coordinate set, and multiplying the length, width, and height to obtain the volume of the object to be measured;
wherein step S2 is specifically:
S21, setting a reference plane in the scene depth map;
S22, calculating the tilt attitude data of the depth camera according to the reference plane;
S23, transforming the scene point cloud coordinates according to the tilt attitude data to obtain the scene point cloud coordinates in the depth camera coordinate system;
and S22 is specifically:
S221, setting a range of included angles between the X axis and Y axis of the depth camera and the normal of the reference plane, the range containing a number of X-axis angles θx and Y-axis angles θy;
S222, traversing every X-axis angle and every Y-axis angle, and transforming the ZCK coordinates in the reference plane with the coordinate transformation formula to obtain a number of transformed ZCK coordinates;
S223, calculating the mean value Zmean and the minimum variance Zsigma of all the transformed ZCK coordinates;
S224, taking the X-axis angle θx corresponding to the minimum variance Zsigma as the X-axis tilt angle αx of the depth camera, and the corresponding Y-axis angle θy as the Y-axis tilt angle αy of the depth camera, thereby obtaining the tilt attitude data: the mean value Zmean of the ZCK coordinates, the minimum variance Zsigma, the X-axis tilt angle αx, and the Y-axis tilt angle αy.

2. The depth-image-based volume measurement method according to claim 1, characterized in that the transformation formula is:
Z' = Y0*sinθx + Z0*cosθx;
ZCK = Z'*cosθy - X0*sinθy;
where X0, Y0, Z0 are the original coordinate points of the reference plane, and ZCK is the transformed ZCK coordinate.

3. The depth-image-based volume measurement method according to claim 2, characterized in that S23 is specifically:
transforming the scene point cloud coordinates with the transformation formula according to the X-axis tilt angle αx and the Y-axis tilt angle αy to obtain the scene point cloud coordinates in the depth camera coordinate system, the transformation formula being:
Z'i = Yio*sinαx + Zio*cosαx;
Xi = Z'i*sinαy + Xio*cosαy;
Yi = Yio*cosαy - Zio*sinαy;
Zi = Z'i*cosαy - Xio*sinαy;
where Xio, Yio, Zio are the original scene point cloud coordinates, and Xi, Yi, Zi are the scene point cloud coordinates in the depth camera coordinate system.

4. The depth-image-based volume measurement method according to claim 3, characterized in that S3 is specifically:
screening out, from the scene point cloud coordinates in the depth camera coordinate system, the set of Xi, Yi, Zi coordinate points of the object to be measured that satisfy the screening formula:
|Zi - Zmean| > N*Zsigma, where N is a positive number.

5. The depth-image-based volume measurement method according to claim 4, characterized in that S4 is specifically:
S41, projecting, at a preset grid resolution, the Xi, Yi coordinate points of the object to be measured onto the corresponding grid regions in the reference plane, labeling the connected regions of the grid, and counting the size of each connected region;
S42, selecting the set of Xi, Yi coordinate points corresponding to the connected region with the largest area, and calculating the minimum circumscribed rectangle of the selected Xi, Yi coordinate points by principal component analysis to obtain the length and width of the object to be measured as projected onto the reference plane;
S43, calculating the maximum difference between Zi and Zmean to obtain the height of the object to be measured, and multiplying the length, width, and height to obtain the volume of the object to be measured.

6. A depth-image-based volume measurement system, suitable for the depth-image-based volume measurement method according to any one of claims 1-5, comprising:
a scene acquisition unit, configured to acquire a scene depth map containing the object to be measured and obtain scene point cloud coordinates;
a coordinate transformation unit, configured to transform the scene point cloud coordinates to obtain the scene point cloud coordinates in the depth camera coordinate system;
an object extraction unit, configured to process the scene point cloud coordinates in the depth camera coordinate system to obtain the coordinate set of the object to be measured;
a volume calculation unit, configured to calculate the length, width, and height of the object to be measured according to its coordinate set, and multiply the length, width, and height to obtain the volume of the object to be measured.

7. A depth camera, characterized by comprising a processor, an input device, an output device, and a memory that are connected to one another, the memory being used to store a computer program comprising program instructions, wherein the processor is configured to invoke the program instructions to perform the method according to any one of claims 1-5.
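The tilt-calibration search of steps S221-S224, combined with the transformation formula of claim 2, can be sketched as follows. This is a minimal illustration in Python assuming NumPy; the function name `calibrate_tilt`, the angle range, and the step size are assumptions for the example, not values fixed by the claims.

```python
import numpy as np

def calibrate_tilt(ref_plane_pts, angle_range_deg=10.0, step_deg=0.5):
    """Grid-search the depth camera's X- and Y-axis tilt angles (S221-S224).

    ref_plane_pts: (N, 3) array of X0, Y0, Z0 points sampled on the reference
    plane (e.g. the floor) in the raw camera frame. Each candidate angle pair
    (θx, θy) is scored by the variance of the transformed ZCK coordinates
    (claim 2); the pair that makes the reference plane flattest wins.
    Returns (alpha_x, alpha_y, z_mean, z_sigma), angles in radians.
    """
    x0, y0, z0 = ref_plane_pts[:, 0], ref_plane_pts[:, 1], ref_plane_pts[:, 2]
    angles = np.radians(np.arange(-angle_range_deg,
                                  angle_range_deg + step_deg, step_deg))
    best = None
    for tx in angles:                                  # candidate θx
        zp = y0 * np.sin(tx) + z0 * np.cos(tx)         # Z' = Y0*sinθx + Z0*cosθx
        for ty in angles:                              # candidate θy
            zck = zp * np.cos(ty) - x0 * np.sin(ty)    # ZCK = Z'*cosθy - X0*sinθy
            var = zck.var()
            if best is None or var < best[3]:
                best = (tx, ty, zck.mean(), var)
    tx, ty, z_mean, z_var = best
    return tx, ty, z_mean, np.sqrt(z_var)

# Synthetic check: build a level plane Z = 1, tilt it by known angles
# (3° about X, -2° about Y), and verify the search recovers those angles.
ax_t, ay_t = np.radians(3.0), np.radians(-2.0)
Rx = np.array([[1, 0, 0],
               [0, np.cos(ax_t), -np.sin(ax_t)],
               [0, np.sin(ax_t),  np.cos(ax_t)]])
Ry = np.array([[ np.cos(ay_t), 0, np.sin(ay_t)],
               [0, 1, 0],
               [-np.sin(ay_t), 0, np.cos(ay_t)]])
R = Ry @ Rx
xs, ys = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
level = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
raw = level @ R                        # plane points as seen by the tilted camera
alpha_x, alpha_y, z_mean, z_sigma = calibrate_tilt(raw)
# alpha_x ≈ 3° and alpha_y ≈ -2° (in radians), z_mean ≈ 1.0
```

Because the search is an exhaustive grid over both angles, its cost grows quadratically with the number of angle steps; since calibration runs once at installation time (the claims' run-time path only applies the fixed αx, αy), this is usually acceptable.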
CN201810225912.7A 2018-03-19 2018-03-19 A depth image-based volume measurement method, system and depth camera Expired - Fee Related CN108537834B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810225912.7A CN108537834B (en) 2018-03-19 2018-03-19 A depth image-based volume measurement method, system and depth camera

Publications (2)

Publication Number Publication Date
CN108537834A CN108537834A (en) 2018-09-14
CN108537834B true CN108537834B (en) 2020-05-01

Family

ID=63484983

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810225912.7A Expired - Fee Related CN108537834B (en) 2018-03-19 2018-03-19 A depth image-based volume measurement method, system and depth camera

Country Status (1)

Country Link
CN (1) CN108537834B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111095024A (en) * 2018-09-18 2020-05-01 深圳市大疆创新科技有限公司 Height determination method, height determination device, electronic equipment and computer-readable storage medium
CN109448045B (en) * 2018-10-23 2021-02-12 南京华捷艾米软件科技有限公司 SLAM-based planar polygon measurement method and machine-readable storage medium
CN109376791B (en) * 2018-11-05 2020-11-24 北京旷视科技有限公司 Depth algorithm precision calculation method and device, electronic equipment and readable storage medium
CN109587875A (en) * 2018-11-16 2019-04-05 厦门盈趣科技股份有限公司 A kind of intelligent desk lamp and its adjusting method
CN109631764B (en) * 2018-11-22 2020-12-04 南京理工大学 Dimensional measurement system and method based on RealSense camera
CN109886961B (en) * 2019-03-27 2023-04-11 重庆交通大学 Medium and large cargo volume measuring method based on depth image
CN109916302B (en) * 2019-03-27 2020-11-20 青岛小鸟看看科技有限公司 A method and system for measuring the volume of a cargo box
CN109993785B (en) * 2019-03-27 2020-11-17 青岛小鸟看看科技有限公司 Method for measuring volume of goods loaded in container and depth camera module
CN110310459A (en) * 2019-04-04 2019-10-08 桑尼环保(江苏)有限公司 Multi-parameter extract real-time system
CN110006343B (en) * 2019-04-15 2021-02-12 Oppo广东移动通信有限公司 Method and device for measuring geometric parameters of object and terminal
CN111986250B (en) * 2019-05-22 2024-08-20 顺丰科技有限公司 Object volume measuring method, device, measuring equipment and storage medium
CN110309561A (en) * 2019-06-14 2019-10-08 吉旗物联科技(上海)有限公司 Goods space volume measuring method and device
CN110349205B (en) * 2019-07-22 2021-05-28 浙江光珀智能科技有限公司 Method and device for measuring volume of object
CN110296747A (en) * 2019-08-12 2019-10-01 深圳市知维智能科技有限公司 The measurement method and system of the volume of storage content
CN110425980A (en) * 2019-08-12 2019-11-08 深圳市知维智能科技有限公司 The measurement method and system of the volume of storage facilities content
CN110766744B (en) * 2019-11-05 2022-06-10 北京华捷艾米科技有限公司 MR volume measurement method and device based on 3D depth camera
CN111561872B (en) * 2020-05-25 2022-05-13 中科微至智能制造科技江苏股份有限公司 Method, device and system for measuring package volume based on speckle coding structured light
CN111696152B (en) * 2020-06-12 2023-05-12 杭州海康机器人股份有限公司 Method, device, computing equipment, system and storage medium for detecting package stack
CN112254635B (en) * 2020-09-23 2022-06-28 洛伦兹(北京)科技有限公司 Volume measurement method, device and system
CN113418467A (en) * 2021-06-16 2021-09-21 厦门硅谷动能信息技术有限公司 Method for detecting general and black luggage size based on ToF point cloud data
CN114264277B (en) * 2021-12-31 2024-08-02 英特尔产品(成都)有限公司 Method and device for detecting abnormal flatness of chip substrate
CN114494404B (en) * 2022-02-14 2024-07-23 云从科技集团股份有限公司 Object volume measurement method, system, device and medium
CN115690062A (en) * 2022-11-08 2023-02-03 内蒙古中电物流路港有限责任公司赤峰铁路分公司 Rail surface damage state detection method and device and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7037263B2 (en) * 2003-08-20 2006-05-02 Siemens Medical Solutions Usa, Inc. Computing spatial derivatives for medical diagnostic imaging methods and systems
CN101266131A (en) * 2008-04-08 2008-09-17 长安大学 An image-based volume measuring device and its measuring method
CN106225678A (en) * 2016-09-27 2016-12-14 北京正安维视科技股份有限公司 Dynamic object based on 3D camera location and volume measuring method
CN106813568A (en) * 2015-11-27 2017-06-09 阿里巴巴集团控股有限公司 object measuring method and device
CN106839975A (en) * 2015-12-03 2017-06-13 杭州海康威视数字技术股份有限公司 Volume measuring method and its system based on depth camera
CN107067394A (en) * 2017-04-18 2017-08-18 中国电子科技集团公司电子科学研究院 A kind of oblique photograph obtains the method and device of point cloud coordinate

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
"Accuracy of scaling and DLT reconstruction techniques for planar motion analyses"; Brewin, M.A. et al.; Journal of Applied Biomechanics; 2003-02-28; Vol. 19 (No. 19); pp. 79-88 *
"Optical flow background estimation for real-time pan/tilt camera object tracking"; D. Doyle et al.; Measurement; 2014-02-28; Vol. 48 (No. 1); pp. 195-207 *
"Calculation of attitude-induced image motion of a frame remote-sensing camera based on image-plane rotation"; Ding Yalin; Optics and Precision Engineering; 2007-05-29; Vol. 9 (No. 15); pp. 1432-1438 *
"Research on surface texture reconstruction of cultural relics based on photographic principles"; Ding Lijun; China Master's Theses Full-text Database, Information Science and Technology; 2007-12-15 (No. 06); I138-763 *
"Calculating image-plane rotation of an aerial camera by mathematical coordinate transformation"; Ding Yalin; Optical Instruments; 2012-09-25; Vol. 29 (No. 1); pp. 22-26 *
"Calculation of image motion velocity of a tilted aerial camera considering aircraft attitude angles"; Zhai Linpei; Optics and Precision Engineering; 2006-07-31; Vol. 14 (No. 3); pp. 490-494 *

Also Published As

Publication number Publication date
CN108537834A (en) 2018-09-14

Similar Documents

Publication Publication Date Title
CN108537834B (en) A depth image-based volume measurement method, system and depth camera
JP2021184307A (en) System and method for detecting lines with vision system
CN109752003B (en) A method and device for locating point and line feature of robot visual inertia
CN110174056A (en) A kind of object volume measurement method, device and mobile terminal
KR20220025028A (en) Method and device for building beacon map based on visual beacon
CN112798811B (en) Speed measurement method, device and equipment
CN115060162B (en) Chamfer dimension measuring method and device, electronic equipment and storage medium
CN110146017A (en) Measuring method of repetitive positioning accuracy of industrial robots
CN117115233B (en) Dimension measurement method and device based on machine vision and electronic equipment
CN111385558A (en) TOF camera module accuracy measurement method and system
CN112991459A (en) Camera calibration method, device, equipment and storage medium
CN113379681B (en) Method, system, electronic device, and storage medium for acquiring tilt angle of LED chip
CN112802143B (en) Spherical map drawing method, device and storage medium
CN114972027A (en) An image stitching method, device, equipment, medium and computer product
CN111915666A (en) Volume measurement method and device based on mobile terminal
JP2020512536A (en) System and method for 3D profile determination using model-based peak selection
CN115719339A (en) Bolt size high-precision measurement method and device based on double-camera calibration
CN108332662B (en) Object measuring method and device
CN118196215A (en) Camera calibration method, device, electronic device and readable storage medium
CN116399306B (en) Tracking measurement method, device, equipment and medium based on visual recognition
US20230228883A1 (en) Z-plane identification and box dimensioning using three-dimensional time-of-flight imaging
CN109000560B (en) Method, Apparatus and Device for Detecting Package Size Based on 3D Camera
CN108680101B (en) Mechanical arm tail end space repetitive positioning accuracy measuring device and method
CN114719759B (en) Object surface perimeter and area measurement method based on SLAM algorithm and image instance segmentation technology
CN116993803A (en) Landslide deformation monitoring methods, devices and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A volume measurement method, system and depth camera based on depth image

Effective date of registration: 20220412

Granted publication date: 20200501

Pledgee: Zhejiang Mintai Commercial Bank Co.,Ltd. Hangzhou Binjiang small and micro enterprise franchise sub branch

Pledgor: HANGZHOU AIXIN INTELLIGENT TECHNOLOGY CO.,LTD.

Registration number: Y2022330000495

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200501