CN114862940A - Volume determination method and device, storage medium and electronic device
- Publication number
- CN114862940A (application CN202210535894.9A)
- Authority
- CN
- China
- Prior art keywords
- determining
- target
- pixel
- point
- target object
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30204—Marker
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30244—Camera pose
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the invention provides a volume determination method and device, a storage medium and an electronic device, wherein the method comprises: acquiring a first image obtained by a first device shooting a target object and a second image obtained by a second device shooting the target object, wherein the first device and the second device are arranged opposite each other; determining an outer contour point cloud of the target object based on first parameter information of each first pixel point included in the first image and second parameter information of each second pixel point included in the second image, wherein the first parameter information and the second parameter information each include three-dimensional coordinate information and color parameters of the pixel points; and determining the volume of the target object based on the outer contour point cloud. The method and device solve the problems of low efficiency and poor accuracy of volume determination in the related art, and achieve the effect of improving both the efficiency and the accuracy of volume determination.
Description
Technical Field
The embodiment of the invention relates to the field of computers, in particular to a volume determination method and device, a storage medium and an electronic device.
Background
In the related art, volume is usually measured either manually or by an approximate estimation mode; both modes are inefficient and, for irregular living bodies, also inaccurate.
Therefore, the problems of low efficiency and inaccuracy in volume determination exist in the related art.
In view of the above problems in the related art, no effective solution has been proposed.
Disclosure of Invention
The embodiment of the invention provides a method and a device for determining volume, a storage medium and an electronic device, which are used for at least solving the problems of low efficiency and inaccuracy in volume determination in the related art.
According to an embodiment of the present invention, there is provided a volume determination method including: acquiring a first image obtained by shooting a target object by first equipment and a second image obtained by shooting the target object by second equipment, wherein the first equipment and the second equipment are arranged oppositely; determining an outer contour point cloud of the target object based on first parameter information of each first pixel point included in the first image and second parameter information of each second pixel point included in the second image, wherein the first parameter information and the second parameter information both include three-dimensional coordinate information and color parameters of the pixel points; determining a volume of the target object based on the outer contour point cloud.
According to another embodiment of the present invention, there is provided a volume determination apparatus including: the device comprises an acquisition module and a display module, wherein the acquisition module is used for acquiring a first image obtained by shooting a target object by first equipment and a second image obtained by shooting the target object by second equipment, and the first equipment and the second equipment are arranged oppositely; the first determining module is used for determining the outline point cloud of the target object based on first parameter information of each first pixel point included in the first image and second parameter information of each second pixel point included in the second image, wherein the first parameter information and the second parameter information both comprise three-dimensional coordinate information and color parameters of the pixel points; a second determination module to determine a volume of the target object based on the outer contour point cloud.
According to yet another embodiment of the invention, there is also provided a computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the steps of the method as set forth in any of the above.
According to yet another embodiment of the present invention, there is also provided an electronic device, including a memory in which a computer program is stored and a processor configured to execute the computer program to perform the steps in any of the above method embodiments.
According to the invention, a first image obtained by a first device shooting the target object and a second image obtained by a second device, arranged opposite the first device, shooting the target object are acquired; the outer contour point cloud of the target object is determined from the first parameter information of each first pixel point included in the first image and the second parameter information of each second pixel point included in the second image, where both kinds of parameter information include three-dimensional coordinate information and color parameters; and the volume of the target object is determined from the outer contour point cloud. Because both the three-dimensional coordinates and the color of each pixel point are used when determining the outer contour point cloud, the position and color information are fully exploited, the object boundary is extracted more accurately, and the volume of the target object can therefore be determined more accurately. This solves the problems of low efficiency and poor accuracy of volume determination in the related art, and achieves the effect of improving both the efficiency and the accuracy of volume determination.
Drawings
Fig. 1 is a block diagram of a hardware configuration of a mobile terminal of a method for determining a volume according to an embodiment of the present invention;
FIG. 2 is a flow chart of a method of determining a volume according to an embodiment of the invention;
FIG. 3 is a schematic diagram of the locations of a first device, a second device, and a target object according to an exemplary embodiment of the invention;
FIG. 4 is a schematic diagram of a relative arrangement of a first device and a second device in accordance with an exemplary embodiment of the present invention;
FIG. 5 is a schematic diagram of another relative arrangement of a first device and a second device according to an exemplary embodiment of the present invention;
FIG. 6 is a schematic illustration of a first image or a second image according to an exemplary embodiment of the invention;
FIG. 7 is a schematic structural diagram of a target network model according to an exemplary embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating the effect of semantic segmentation performed by a target network model to determine pixel points belonging to a target object according to an exemplary embodiment of the present invention;
FIG. 9 is a normal vector diagram for each point included in the target outer contour point cloud in accordance with an exemplary embodiment of the present invention;
FIG. 10 is a schematic diagram of an envelope volume according to an exemplary embodiment of the present invention;
FIG. 11 is a schematic diagram of a tetrahedral subdivision model according to an exemplary embodiment of the present invention;
FIG. 12 is a schematic illustration of a cutaway view according to an exemplary embodiment of the present invention;
FIG. 13 is a flow chart of a method of determining a volume according to an embodiment of the present invention;
fig. 14 is a block diagram of a volume determination apparatus according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present invention and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
The method provided by the embodiment of the present application can be executed on a mobile terminal, a computer terminal or a similar computing device, and can also be executed on a server, in the cloud, or on a distributed server cluster, where the mobile terminal may include a smart camera, a network camera, and the like. Taking a mobile terminal as an example, fig. 1 is a block diagram of the hardware structure of a mobile terminal running a volume determination method according to an embodiment of the present invention. As shown in fig. 1, the mobile terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and may further include a transmission device 106 for communication functions and an input-output device 108. It will be understood by those skilled in the art that the structure shown in fig. 1 is only illustrative and does not limit the structure of the mobile terminal. For example, the mobile terminal may include more or fewer components than shown in fig. 1, or have a different configuration.
The memory 104 may be used to store a computer program, for example, a software program of an application software and a module, such as a computer program corresponding to the volume determination method in the embodiment of the present invention, and the processor 102 executes various functional applications and data processing by running the computer program stored in the memory 104, so as to implement the method described above. The memory 104 may include high speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory located remotely from the processor 102, which may be connected to the mobile terminal over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a Network adapter (NIC) that can be connected to other Network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is used to communicate with the internet in a wireless manner.
In the present embodiment, a method for determining a volume is provided, and fig. 2 is a flowchart of a method for determining a volume according to an embodiment of the present invention, as shown in fig. 2, the flowchart includes the following steps:
step S202, a first image obtained by shooting a target object by first equipment and a second image obtained by shooting the target object by second equipment are obtained, wherein the first equipment and the second equipment are arranged oppositely;
step S204, determining the outline point cloud of the target object based on the first parameter information of each first pixel point included in the first image and the second parameter information of each second pixel point included in the second image, wherein the first parameter information and the second parameter information both include three-dimensional coordinate information and color parameters of the pixel points;
step S206, determining the volume of the target object based on the outer contour point cloud.
In the above embodiments, the first device and the second device may each be a depth camera, such as an RGBD camera. The first device and the second device may be arranged opposite each other, on either side of the target object. Acquiring with two opposed cameras solves the problem that a single camera can hardly capture the full enveloping point cloud of an object in one shot. The target object may be a human, an animal, a regularly shaped item or an irregularly shaped item. The positions of the first device, the second device and the target object are shown schematically in fig. 3, where circle 1 represents the first device and circle 2 represents the second device. Because the first device and the second device are installed facing each other, the complete outer contour information of the target object can be acquired at one time. Extrinsic calibration can be carried out during installation, for example by a calibration-ball method. To prevent target objects from occluding each other, it may be stipulated that only one target exists in the defined area, i.e. only one target object is present in the shooting areas of the first device and the second device.
It should be noted that arranging the first device and the second device opposite each other means that their field-of-view directions are opposed: the angle between the field-of-view direction of the first device and that of the second device may be 180° (this value is only exemplary; the angle may also be approximately 180°, such as 180° ± 5°). If the field-of-view direction of the first device points to the left, then the field-of-view direction of the second device points to the right. A schematic diagram of this relative arrangement is shown in fig. 4.
The field-of-view directions of the first device and the second device may also face the target object from two opposite sides. For example, the first device is located on the left side of the target object and photographs it from the left, while the second device is located on the right side and photographs it from the right. Alternatively, the first device is located in front of the target object and photographs it from the front, while the second device is located behind it and photographs it from the rear. A schematic diagram of this relative arrangement is shown in fig. 5.
In determining the volume of the target object, the target object may be located in an overlapping region of the fields of view of the first and second devices. The heights of the first device and the second device may be the same or different.
In the above embodiment, both the first image and the second image may be RGBD images, the first parameter information of the first pixel point may include three-dimensional coordinate information (x, y, z) of the first pixel point, and may further include a color parameter (r, g, b) of the first pixel point, that is, the first parameter information may be represented as (x, y, z, r, g, b). Similarly, the second parameter information may also be expressed as (x, y, z, r, g, b). The outer contour point cloud of the target object can be determined according to the first parameter information of each first pixel point and the second parameter information of each second pixel point, the target object is restored according to the outer contour point cloud, and then the volume of the target object is determined. As shown in fig. 6, there are many outliers and interference point clouds such as the ground around the target object, so that semantic segmentation can be performed according to the first parameter information and the second parameter information, for example, foreground extraction of RGBD information is performed, and the interference points are removed to obtain an outline point cloud.
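To make this representation concrete, the following sketch back-projects a single RGBD frame into a list of (x, y, z, r, g, b) points using pinhole camera intrinsics. The function name, the intrinsic parameters fx, fy, cx, cy and the assumption of metric depth are illustrative only; the embodiment does not prescribe a particular implementation.

```python
import numpy as np

def rgbd_to_points(depth, rgb, fx, fy, cx, cy):
    """Back-project an RGBD frame into (x, y, z, r, g, b) points.

    depth: (H, W) array of depth values (assumed metric);
    rgb: (H, W, 3) color image; fx, fy, cx, cy: pinhole intrinsics.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid
    z = depth
    x = (u - cx) * z / fx                            # back-project along rays
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    cols = rgb.reshape(-1, 3).astype(np.float64) / 255.0
    mask = pts[:, 2] > 0                             # drop pixels without depth
    return np.hstack([pts[mask], cols[mask]])        # (M, 6): x, y, z, r, g, b
```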
Optionally, the execution subject of the above steps may be a background processor or another device with similar processing capability, or a machine integrating at least an image acquisition device and a data processing device, where the image acquisition device may include an image acquisition module such as a camera, and the data processing device may include a terminal such as a computer or a mobile phone, but is not limited thereto.
According to the invention, a first image obtained by a first device shooting the target object and a second image obtained by a second device, arranged opposite the first device, shooting the target object are acquired; the outer contour point cloud of the target object is determined from the first parameter information of each first pixel point included in the first image and the second parameter information of each second pixel point included in the second image, where both kinds of parameter information include three-dimensional coordinate information and color parameters; and the volume of the target object is determined from the outer contour point cloud. Because both the three-dimensional coordinates and the color of each pixel point are used when determining the outer contour point cloud, the position and color information are fully exploited, the object boundary is extracted more accurately, and the volume of the target object can therefore be determined more accurately. This solves the problems of low efficiency and poor accuracy of volume determination in the related art, and achieves the effect of improving both the efficiency and the accuracy of volume determination.
In an exemplary embodiment, determining the outer contour point cloud of the target object based on the first parameter information of each first pixel point included in the first image and the second parameter information of each second pixel point included in the second image includes: determining, based on the first parameter information of all the first pixel points, the pixel points belonging to the target object among the first pixel points as third pixel points; determining, based on the second parameter information of all the second pixel points, the pixel points belonging to the target object among the second pixel points as fourth pixel points; and determining the outer contour point cloud based on the third pixel points and the fourth pixel points. In this embodiment, the pixel points belonging to the target object among the first pixel points can be determined from the first parameter information of all the first pixel points and taken as the third pixel points; likewise, the fourth pixel points belonging to the target object among the second pixel points are determined from the second parameter information of all the second pixel points. The outer contour point cloud is then determined from the third and fourth pixel points. Determining the third and fourth pixel points eliminates the pixel points in the first image and the second image that do not belong to the target object, so that the outer contour point cloud is determined only from pixel points belonging to the target object, which improves the accuracy of the outer contour point cloud of the target object.
In an exemplary embodiment, determining, based on the first parameter information of all the first pixel points, the pixel points belonging to the target object among the first pixel points as third pixel points includes: determining a first global feature of the first image based on the first parameter information of all the first pixel points; determining a first local feature of each first pixel point based on the first parameter information of that pixel point; fusing the first global feature and the first local feature to obtain a first fusion feature; and determining the third pixel points included in the first pixel points based on the first fusion feature. In this embodiment, the third pixel points belonging to the target object among the first pixel points may be determined through a target network model, which may be a PointNet++ network model. Referring to fig. 7, the point cloud P1 formed by all the first pixel points, with each point represented as {x, y, z, r, g, b}, i.e. 3D coordinates plus color information, can be input into the PointNet++ network to obtain the global feature of the point cloud P1, denoted F_g ∈ R^{1×k}. The local feature of each first pixel point can be determined from its first parameter information and is denoted F_l ∈ R^{N×k}, where k = 1024 is the length of the feature vector. By performing dimension reduction on the global feature, the local features and the fusion feature, the amount of computation is reduced and the computation speed is improved while saving computing power.
In an exemplary embodiment, determining, according to the second parameter information of all the second pixel points, the pixel points belonging to the target object among the second pixel points as fourth pixel points includes: determining a second global feature of the second image based on the second parameter information of all the second pixel points; determining a second local feature of each second pixel point based on the second parameter information of that pixel point; fusing the second global feature and the second local feature to obtain a second fusion feature; and determining the fourth pixel points included in the second pixel points based on the second fusion feature. In this embodiment, the fourth pixel points may likewise be determined through the target network model, which may be a PointNet++ network model. The point cloud P2 formed by all the second pixel points, with each point represented as {x, y, z, r, g, b}, i.e. 3D coordinates plus color information, is input into the network to obtain the global feature F_g ∈ R^{1×k} of the point cloud P2 and the local feature F_l ∈ R^{N×k} of each second pixel point, where k = 1024 is the length of the feature vector. Through a fully connected layer, the global feature F_g and the local feature F_l of each second pixel point are reduced from 1024 dimensions to 256 dimensions; after dimension reduction, the global feature vector is concatenated with each local feature vector to form a 512-dimensional vector, giving the second fusion feature, which is then reduced to 128 dimensions through another fully connected layer. The prediction result M_i for each 3D point is obtained through a multilayer perceptron and a sigmoid function, thereby determining the fourth pixel points. Performing dimension reduction on the global, local and fusion features reduces the amount of computation and improves the computation speed while saving computing power.
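As an illustration of the fusion described above, the following PyTorch sketch compresses a 1024-dimensional global feature and per-point 1024-dimensional local features to 256 dimensions each, concatenates them into a 512-dimensional fusion feature, reduces it to 128 dimensions, and predicts a per-point probability M_i through a small perceptron and a sigmoid. The backbone producing F_g and F_l is omitted, and the hidden width of the perceptron is an illustrative assumption.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Global/local feature fusion head (sketch; backbone not shown)."""

    def __init__(self, k=1024):
        super().__init__()
        self.fc_g = nn.Linear(k, 256)       # global feature: 1024 -> 256
        self.fc_l = nn.Linear(k, 256)       # local features: 1024 -> 256
        self.fc_fuse = nn.Linear(512, 128)  # fused 512-d vector -> 128-d
        self.mlp = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, f_g, f_l):
        # f_g: (1, k) global feature; f_l: (N, k) per-point local features
        g = self.fc_g(f_g)                  # (1, 256)
        l = self.fc_l(f_l)                  # (N, 256)
        g = g.expand(l.shape[0], -1)        # attach global vector to every point
        fused = torch.cat([l, g], dim=1)    # (N, 512) fusion feature
        feat = self.fc_fuse(fused)          # (N, 128)
        return torch.sigmoid(self.mlp(feat)).squeeze(-1)  # M_i per point
```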
In the above embodiment, the loss function of the semantic segmentation contains the standard cross entropy over all points, l_sem = −Σ_i p_i log(q_i), where p_i denotes the true probability that point i belongs to the target object and q_i denotes the predicted probability that point i belongs to the target object.
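The per-point cross entropy can be evaluated as in the short sketch below; the clipping constant (and the optional binary (1 − p_i) term noted in the comment) are numerical-safety assumptions, not prescribed by the text.

```python
import numpy as np

def segmentation_loss(p, q, eps=1e-7):
    """l_sem = -sum_i p_i * log(q_i) over all points.

    p: true probability that each point belongs to the target object, (N,)
    q: predicted probability for each point, (N,)
    """
    q = np.clip(q, eps, 1.0 - eps)  # guard against log(0)
    # A binary variant would add (1 - p) * np.log(1 - q) inside the sum.
    return -np.sum(p * np.log(q))
```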
In the above embodiment, { x, y, z, r, g, b } is used as a network input, and the object boundary is extracted more accurately by making full use of the position information and the color information.
In the above embodiment, the effect of the target network model performing semantic segmentation to determine the pixel points belonging to the target object is shown schematically in fig. 8: the 3D point cloud of the target object can be extracted according to the prediction results (the third pixel points and the fourth pixel points), thereby determining the outer contour information of the target object.
In an exemplary embodiment, determining the outer contour point cloud based on the third pixel points and the fourth pixel points includes: converting the third pixel points and the fourth pixel points into the same coordinate system to obtain target pixel points; and determining the point cloud formed by the target pixel points as the outer contour point cloud. In this embodiment, the point cloud formed by the third pixel points and the point cloud formed by the fourth pixel points are converted into one coordinate system: the third pixel points may be converted into the coordinate system of the fourth pixel points, the fourth pixel points may be converted into the coordinate system of the third pixel points, or both may be converted into some other coordinate system. For the first two cases, the first device and the second device can be calibrated beforehand to determine the extrinsic parameter matrix, and the coordinate conversion of the pixel points is then carried out according to this extrinsic matrix. Using the camera extrinsic matrix calibrated in advance, the third pixel points and the fourth pixel points are spliced into the complete enveloping outer contour of the target object.
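A sketch of this splicing step follows, assuming the pre-calibrated extrinsics are available as a 4×4 homogeneous matrix T21 that maps the second camera's coordinate frame into the first camera's frame; the matrix name and convention are illustrative.

```python
import numpy as np

def merge_clouds(pts1, pts2, T21):
    """Splice two single-view clouds into one enveloping outer contour.

    pts1, pts2: (N, 6) arrays of (x, y, z, r, g, b);
    T21: 4x4 extrinsic matrix taking camera-2 coordinates to camera-1.
    """
    xyz2_h = np.hstack([pts2[:, :3], np.ones((len(pts2), 1))])  # homogeneous
    pts2_in_1 = pts2.copy()
    pts2_in_1[:, :3] = (T21 @ xyz2_h.T).T[:, :3]  # rotate + translate to frame 1
    return np.vstack([pts1, pts2_in_1])
```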
In one exemplary embodiment, determining the volume of the target object based on the outer contour point cloud includes: determining a subdivision map of the target object based on the outer contour point cloud; and determining the volume of the target object based on the subdivision map. In this embodiment, a subdivision map of the target object may be determined from the outer contour point cloud, and the volume of the target object calculated from that subdivision map.
In one exemplary embodiment, determining the subdivision map of the target object based on the outer contour point cloud includes: filtering the outer contour point cloud to obtain a target outer contour point cloud; determining a normal vector for each point included in the target outer contour point cloud; determining an envelope volume of the target object based on the normal vectors; and subdividing the envelope volume to obtain the subdivision map. In this embodiment, the outer contour point cloud may be filtered to suppress impulse noise in the image and make the point cloud smoother. After filtering, the normal vectors of the target outer contour point cloud can be determined, and the envelope of the target object determined from the normal vector of each point. When computing the normal vector of a point, a plane can be fitted at the current point by the least squares method, and the normal vector of the current point obtained from this fitted plane. Concretely, the covariance matrix of the current point can be formed as Σ = Σ_{i=1}^{K/2} (p_i − p̄)(p_i − p̄)^T, where the p_i are the first K/2 points of the current point's KNN set, i.e. the K/2 points closest to the current point, and p̄ is the mean point of those K/2 points; the normal vector then follows from the eigenvalue equation λv = Σv.
In the above-described embodiment, the normal vector of each point is determined from λv = Σv, the standard eigenvalue equation, where v is an eigenvector and λ an eigenvalue. Solving this equation yields three eigenvalues λ2 > λ1 > λ0; λ0 is the minimum of the three, and the eigenvector v0 corresponding to λ0 is the normal vector. The normal vectors of the points included in the target outer contour point cloud are shown in fig. 9.
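The normal estimation can be sketched as follows: for every point, gather its K nearest neighbours, keep the closest K/2, form the covariance matrix Σ, and take the eigenvector of the smallest eigenvalue as the normal. The neighbourhood size k = 20 is an illustrative assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=20):
    """Per-point normals from the eigenvalue equation lambda * v = Sigma * v."""
    tree = cKDTree(points)
    normals = np.empty_like(points)
    for i, p in enumerate(points):
        _, idx = tree.query(p, k=k)              # K nearest neighbours
        nbrs = points[idx[: k // 2]]             # first K/2 closest points
        mean = nbrs.mean(axis=0)                 # mean point of the K/2 points
        cov = (nbrs - mean).T @ (nbrs - mean)    # covariance matrix Sigma
        w, v = np.linalg.eigh(cov)               # eigenvalues in ascending order
        normals[i] = v[:, 0]                     # v0 for the minimum eigenvalue
    return normals
```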
In an exemplary embodiment, filtering the outer contour point cloud to obtain the target outer contour point cloud includes: for each target point included in the outer contour point cloud, performing the following operations to obtain a first outer contour point cloud: determining a first preset space centered on the target point, determining the density of the first preset space, and keeping target points whose density is greater than or equal to a preset density as points of the first outer contour point cloud; and, for each target point included in the first outer contour point cloud, performing the following operations to obtain the target outer contour point cloud: determining a second preset space centered on the target point; determining the coordinate mean of the points included in the second preset space; determining the coordinate median of the points included in the second preset space; determining a first difference between the coordinate mean and the coordinates of the target point and a second difference between the coordinate median and the coordinates of the target point; determining the coordinate mean as the coordinates of the target point in response to the first difference being smaller than the second difference; and determining the coordinate median as the coordinates of the target point in response to the first difference being larger than the second difference. In this embodiment, outlier noise is first filtered out of the outer contour point cloud to obtain the first outer contour point cloud: the point cloud density of the first preset space centered on the target point is determined, and when the calculated density is smaller than the preset density the target point is deleted, thereby removing points whose density is too low. The point cloud density can be calculated from the number of points N included in the first preset space and the distances ‖p_i − p‖, where the p_i are the coordinates of the points other than the target point in the first preset space and p is the coordinates of the target point. The first preset space may be a sphere centered on the target point, or a polyhedron centered on the target point, such as a tetrahedron, hexahedron or octahedron.
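The first filtering stage might look like the sketch below, taking the first preset space to be a sphere and approximating the density by the neighbour count inside it; the radius and threshold values are illustrative assumptions, since the exact density formula and constants are left open above.

```python
import numpy as np
from scipy.spatial import cKDTree

def density_filter(points, radius=0.05, min_count=10):
    """Keep only points whose spherical neighbourhood is dense enough."""
    tree = cKDTree(points[:, :3])
    keep = []
    for i, p in enumerate(points[:, :3]):
        nbrs = tree.query_ball_point(p, r=radius)  # first preset space
        if len(nbrs) - 1 >= min_count:             # exclude the point itself
            keep.append(i)
    return points[keep]
```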
In the above embodiment, after the point with too low density is filtered to obtain the first outer contour point cloud, mean filtering, median filtering, and the like may be performed to obtain the target outer contour point cloud, so as to implement data smoothing of adaptive filtering.
In the above embodiment, the coordinate mean of all the points in the second preset space can be determined as p̄ = (1/N) Σ_{i=1}^{N} p_i, and the coordinate median p̃ as the per-coordinate median of those points. A first difference between the coordinate mean and the coordinates of the target point, and a second difference between the coordinate median and the coordinates of the target point, are then determined. When the first difference is smaller than the second difference, the coordinate mean is taken as the coordinates of the target point; when the second difference is smaller, the coordinate median is taken. The final value of p is therefore whichever of the two candidates differs least from the original data, which suppresses salt-and-pepper noise and makes the point cloud data smoother. The second preset space may be a sphere centered on the target point, or a polyhedron centered on the target point, such as a tetrahedron, hexahedron or octahedron.
In other words, p is finally taken as p̄ when ‖p̄ − p‖ < ‖p̃ − p‖, and as p̃ otherwise. In this embodiment, the adaptive filtering operation performed after the foreground point cloud is extracted reduces errors caused by device precision, so that the result is more accurate.
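The second stage can be sketched as follows: each point is moved to whichever of the local coordinate mean or coordinate median lies closer to its original position. The spherical second preset space and its radius are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def adaptive_smooth(points, radius=0.03):
    """Mean/median smoothing that limits salt-and-pepper distortion."""
    tree = cKDTree(points)
    out = points.copy()
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)  # second preset space
        nbrs = points[idx]
        mean = nbrs.mean(axis=0)                  # coordinate mean
        median = np.median(nbrs, axis=0)          # coordinate median
        d1 = np.linalg.norm(mean - p)             # first difference
        d2 = np.linalg.norm(median - p)           # second difference
        out[i] = mean if d1 < d2 else median      # take the closer candidate
    return out
```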
In one exemplary embodiment, determining the envelope volume of the target object based on the normal vectors includes: performing surface reconstruction on the target outer contour point cloud based on the normal vectors to obtain a plurality of target surfaces of the target object; determining the isosurface included in the target outer contour point cloud based on the normal vectors; and connecting the plurality of target surfaces based on the isosurface to obtain the envelope volume. In this embodiment, after the normal vector of each point is obtained, surface reconstruction, such as Poisson reconstruction, can be performed on the target outer contour point cloud. Poisson reconstruction of the point cloud comprises solving the Poisson equation for the point cloud with normal vectors and then extracting the isosurface, where the normal vectors of the point set are treated as samples of the gradient of the indicator function of the object model's surface. Solving the Poisson equation yields discrete surface patches, which are connected through isosurface extraction; the result is a gap-free surface model composed of triangular patches, i.e. the object envelope volume. A schematic diagram of the envelope is shown in fig. 10. Using the Poisson equation for surface reconstruction of the object to be measured alleviates the problem of partial holes in the point cloud and can describe the boundary information of complex objects.
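A sketch of this reconstruction step using Open3D's Poisson reconstruction is given below; the library choice and the octree depth parameter are assumptions, as no specific implementation is named above.

```python
import numpy as np
import open3d as o3d

def reconstruct_envelope(xyz, normals, depth=8):
    """Poisson-reconstruct a watertight triangular envelope from an
    oriented point cloud (xyz and normals are (N, 3) arrays)."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    pcd.normals = o3d.utility.Vector3dVector(normals)
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)  # returns (mesh, per-vertex densities)
    return mesh
```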
In an exemplary embodiment, subdividing the envelope volume to obtain the subdivision map includes: determining a tetrahedral subdivision model; and recovering the surface of the tetrahedral subdivision model based on the envelope volume to obtain the subdivision map. In this embodiment, Delaunay tetrahedral subdivision may be performed on the envelope volume, obtaining an unconstrained Delaunay tetrahedral model by a point-by-point insertion algorithm. A schematic diagram of the tetrahedral subdivision model is shown in fig. 11. Surface recovery is then performed on the Delaunay model against the Poisson-reconstructed surface to obtain the true Delaunay tetrahedral subdivision of the object, i.e. the subdivision map, shown schematically in fig. 12.
In the above embodiment, the Delaunay tetrahedral subdivision result is a three-dimensional tetrahedral mesh model composed of tetrahedral units. With the Delaunay tetrahedral subdivision method, the volume of the object to be measured can be calculated accurately: the target volume is obtained simply by summing the volumes of all tetrahedra in the mesh. The volume of a single tetrahedron can be computed by a determinant method: if T_i has the four vertex coordinates A(x_0, y_0, z_0), B(x_1, y_1, z_1), C(x_2, y_2, z_2) and D(x_3, y_3, z_3), then the volume V_i and the total volume V are:

V_i = (1/6) |det [x_1 − x_0, y_1 − y_0, z_1 − z_0; x_2 − x_0, y_2 − y_0, z_2 − z_0; x_3 − x_0, y_3 − y_0, z_3 − z_0]|,   V = Σ_i V_i
where V_i denotes the volume of each tetrahedron, V the volume of the target object, and A(x_0, y_0, z_0), B(x_1, y_1, z_1), C(x_2, y_2, z_2), D(x_3, y_3, z_3) the vertex coordinates of the tetrahedron.
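The subdivision and volume summation can be sketched with SciPy's Delaunay triangulation, which yields tetrahedra for 3D input; note this computes the unconstrained subdivision of the point set, so the surface-recovery step that restricts tetrahedra to the Poisson envelope is omitted in this simplified sketch.

```python
import numpy as np
from scipy.spatial import Delaunay

def mesh_volume(points):
    """Total volume as the sum of Delaunay tetrahedron volumes."""
    tet = Delaunay(points)                 # (N, 3) input -> tetrahedral simplices
    total = 0.0
    for simplex in tet.simplices:
        a, b, c, d = points[simplex]
        # |det([B-A, C-A, D-A])| / 6 is the volume of one tetrahedron
        total += abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0
    return total
```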
The method for determining the volume is described below with reference to specific embodiments:
fig. 13 is a flowchart of a method for determining a volume according to an embodiment of the present invention, as shown in fig. 13, the flowchart includes:
in step S1302, an image and point cloud information (corresponding to the first image and the second image) of a scene are obtained by two RGBD cameras arranged oppositely.
In step S1304, foreground extraction of RGBD information is achieved using a segmentation network, such as a convolutional neural network.
Step S1306, extracting the 3D point cloud corresponding to the object to be detected according to the network segmentation result.
Step S1308, according to the camera extrinsic matrix calibrated in advance, splicing the two device point clouds obtained in step S1306 into the complete enveloping outer contour of the object to be measured (corresponding to the outer contour point cloud).
Step S1310, preprocessing the complete object outer contour point cloud to obtain the target object outer contour point cloud.
In step S1312, a normal vector of each point is calculated.
Step S1314, performing poisson reconstruction on the point cloud by using the normal vector of each point to obtain an envelope.
Step S1316, performing Delaunay tetrahedral subdivision on the envelope volume.
Step S1318, calculating the volume.
In this embodiment, the convolutional network processing the RGBD information makes full use of position information and color information to extract the object boundary more accurately; the adaptive filtering operation after foreground point cloud extraction reduces errors caused by device precision, making the result more accurate; surface reconstruction of the object to be measured via the Poisson equation alleviates the problem of partial holes in the point cloud and can describe the boundary information of complex objects; and the Delaunay tetrahedral subdivision method allows the volume of the object to be measured to be calculated accurately.
Through the above description of the embodiments, those skilled in the art can clearly understand that the method according to the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but the former is a better implementation mode in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, or a network device) to execute the method according to the embodiments of the present invention.
In this embodiment, a volume determination apparatus is further provided, and the apparatus is used to implement the foregoing embodiments and preferred embodiments, and the description of the apparatus is omitted for brevity. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
Fig. 14 is a block diagram of a structure of a volume determination apparatus according to an embodiment of the present invention, as shown in fig. 14, the apparatus including:
an obtaining module 1402, configured to obtain a first image obtained by a first device shooting a target object, and a second image obtained by a second device shooting the target object, where the first device and the second device are arranged opposite to each other;
a first determining module 1404, configured to determine an outline point cloud of the target object based on first parameter information of each first pixel included in the first image and second parameter information of each second pixel included in the second image, where the first parameter information and the second parameter information each include three-dimensional coordinate information and a color parameter of a pixel;
a second determining module 1406 for determining a volume of the target object based on the outer contour point cloud.
In an exemplary embodiment, the first determining module 1404 may determine the outer contour point cloud of the target object based on the first parameter information of each first pixel point included in the first image and the second parameter information of each second pixel point included in the second image by: determining, based on the first parameter information of all the first pixel points, the pixel points belonging to the target object among the first pixel points as third pixel points; determining, based on the second parameter information of all the second pixel points, the pixel points belonging to the target object among the second pixel points as fourth pixel points; and determining the outer contour point cloud based on the third pixel points and the fourth pixel points.
In an exemplary embodiment, the first determining module 1404 may determine, based on the first parameter information of all the first pixels, a pixel belonging to the target object among the first pixels as a third pixel by: determining a first global feature of the first image based on the first parameter information of all the first pixel points; determining a first local feature of each of the first pixel points based on the first parameter information of each of the first pixel points; fusing the first global feature and the first local feature to obtain a first fused feature; determining the third pixel point included in the first pixel point based on the first fusion feature.
In an exemplary embodiment, the first determining module 1404 may determine, according to the second parameter information of all the second pixel points, a pixel point belonging to the target object in the second pixel points as a fourth pixel point by: determining a second global feature of the second image based on the second parameter information of all the second pixel points; determining a second local feature of each second pixel point based on the second parameter information of each second pixel point; fusing the second global feature and the second local feature to obtain a second fused feature; determining the fourth pixel point included in the second pixel point based on the second fusion feature.
In an exemplary embodiment, the first determining module 1404 may determine the outer contour point cloud based on the third pixel point and the fourth pixel point by: converting the third pixel point and the fourth pixel point into the same coordinate system to obtain a target pixel point; and determining the point cloud formed by the target pixel points as the outer contour point cloud.
In an exemplary embodiment, the second determining module 1406 may determine the volume of the target object based on the outer contour point cloud by: determining a subdivision map of the target object based on the outer contour point cloud; and determining the volume of the target object based on the subdivision map.
In an exemplary embodiment, the second determining module 1406 may determine the subdivision map of the target object based on the outer contour point cloud by: filtering the outer contour point cloud to obtain a target outer contour point cloud; determining a normal vector for each point included in the target outer contour point cloud; determining the envelope volume of the target object based on the normal vectors; and subdividing the envelope volume to obtain the subdivision map.
In an exemplary embodiment, the second determining module 1406 may filter the outer contour point cloud to obtain the target outer contour point cloud by: for each target point included in the outer contour point cloud, performing the following operations to obtain a first outer contour point cloud: determining a first preset space centered on the target point, determining the density of the first preset space, and keeping target points whose density is greater than or equal to a preset density as points of the first outer contour point cloud; and, for each target point included in the first outer contour point cloud, performing the following operations to obtain the target outer contour point cloud: determining a second preset space centered on the target point; determining the coordinate mean of the points included in the second preset space; determining the coordinate median of the points included in the second preset space; determining a first difference between the coordinate mean and the coordinates of the target point and a second difference between the coordinate median and the coordinates of the target point; determining the coordinate mean as the coordinates of the target point in response to the first difference being smaller than the second difference; and determining the coordinate median as the coordinates of the target point in response to the first difference being larger than the second difference.
In an exemplary embodiment, the second determining module 1406 may determine the envelope volume of the target object based on the normal vector by: performing surface reconstruction on the target outer contour point cloud based on the normal vector to obtain a plurality of target surfaces of the target object; determining an isosurface included in the target outer contour point cloud based on the normal vector; and connecting a plurality of target surfaces based on the isosurface to obtain the enveloping body.
In an exemplary embodiment, the second determining module 1406 may subdivide the envelope volume to obtain the subdivision map by: determining a tetrahedral subdivision model; and recovering the surface of the tetrahedral subdivision model based on the envelope volume to obtain the subdivision map.
It should be noted that, the above modules may be implemented by software or hardware, and for the latter, the following may be implemented, but not limited to: the modules are all positioned in the same processor; alternatively, the modules are respectively located in different processors in any combination.
Embodiments of the present invention also provide a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method as set forth in any of the above.
In an exemplary embodiment, the computer readable storage medium may include, but is not limited to: various media capable of storing computer programs, such as a usb disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Embodiments of the present invention also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the above method embodiments.
In an exemplary embodiment, the electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary embodiments, and details of this embodiment are not repeated herein.
It will be apparent to those skilled in the art that the various modules or steps of the invention described above may be implemented using a general purpose computing device, they may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented using program code executable by the computing devices, such that they may be stored in a memory device and executed by the computing device, and in some cases, the steps shown or described may be performed in an order different than that described herein, or they may be separately fabricated into various integrated circuit modules, or multiple ones of them may be fabricated into a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the principle of the present invention should be included in the protection scope of the present invention.
Claims (13)
1. A method of volume determination, comprising:
acquiring a first image obtained by shooting a target object by first equipment and a second image obtained by shooting the target object by second equipment, wherein the first equipment and the second equipment are arranged oppositely;
determining the outer contour point cloud of the target object based on first parameter information of each first pixel point included in the first image and second parameter information of each second pixel point included in the second image, wherein the first parameter information and the second parameter information both include three-dimensional coordinate information and color parameters of the pixel points;
determining a volume of the target object based on the outer contour point cloud.
2. The method of claim 1, wherein determining the outline point cloud of the target object based on the first parameter information of each first pixel point included in the first image and the second parameter information of each second pixel point included in the second image comprises:
determining, based on the first parameter information of all the first pixel points, a pixel point belonging to the target object among the first pixel points as a third pixel point; and
determining, based on the second parameter information of all the second pixel points, a pixel point belonging to the target object among the second pixel points as a fourth pixel point;
and determining the outer contour point cloud based on the third pixel point and the fourth pixel point.
3. The method according to claim 2, wherein the determining, based on the first parameter information of all the first pixel points, a pixel point belonging to the target object among the first pixel points as the third pixel point comprises:
determining a first global feature of the first image based on the first parameter information of all the first pixel points;
determining a first local feature of each of the first pixel points based on the first parameter information of each of the first pixel points;
fusing the first global feature and the first local feature to obtain a first fused feature;
and determining the third pixel point among the first pixel points based on the first fused feature.
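One plausible (assumed, not asserted by the patent) realization of claims 3 and 4 is a PointNet-style per-point segmenter: a shared MLP computes a local feature for each pixel point, max-pooling over all points yields the global feature, and their concatenation — the fused feature — is classified point-wise into object versus background. A PyTorch sketch with illustrative layer widths:

```python
import torch
import torch.nn as nn

class PointSegNet(nn.Module):
    """Per-point object/background classifier over [xyz, rgb] inputs."""
    def __init__(self, in_dim=6, local_dim=64, global_dim=256):
        super().__init__()
        self.local_mlp = nn.Sequential(nn.Linear(in_dim, local_dim), nn.ReLU(),
                                       nn.Linear(local_dim, local_dim), nn.ReLU())
        self.global_mlp = nn.Sequential(nn.Linear(local_dim, global_dim), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(local_dim + global_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 2))          # background / object

    def forward(self, pts):                  # pts: (B, N, 6) pixel parameter rows
        local = self.local_mlp(pts)          # local feature per pixel point
        glob = self.global_mlp(local).max(dim=1, keepdim=True).values  # global feature
        fused = torch.cat([local, glob.expand(-1, pts.shape[1], -1)], dim=-1)
        return self.head(fused)              # per-point logits over the fused feature
```

Claim 4 is the mirror image of this for the second image, so a single network of this shape could serve both.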
4. The method according to claim 2, wherein the determining, based on the second parameter information of all the second pixel points, a pixel point belonging to the target object among the second pixel points as the fourth pixel point comprises:
determining a second global feature of the second image based on the second parameter information of all the second pixel points;
determining a second local feature of each second pixel point based on the second parameter information of each second pixel point;
fusing the second global feature and the second local feature to obtain a second fused feature;
and determining the fourth pixel point among the second pixel points based on the second fused feature.
5. The method of any of claims 2 to 4, wherein determining the outer contour point cloud based on the third pixel point and the fourth pixel point comprises:
converting the third pixel points and the fourth pixel points into the same coordinate system to obtain target pixel points;
and determining the point cloud formed by the target pixel points as the outer contour point cloud.
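Claim 5's conversion into a common coordinate system is, in the usual stereo setup, a rigid transform by the calibrated extrinsics between the two opposed devices. A minimal sketch, assuming a known rotation R and translation t mapping the second device's frame into the first device's frame:

```python
import numpy as np

def to_common_frame(pts_cam2, R, t):
    """Rigidly map Nx3 points from the second device's frame into the
    first device's frame: p' = R @ p + t, vectorized over the rows."""
    return pts_cam2 @ R.T + t

# the merged target pixel points then form the outer contour point cloud:
# cloud = np.vstack([pts_cam1, to_common_frame(pts_cam2, R, t)])
```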
6. The method of claim 1, wherein determining the volume of the target object based on the outer contour point cloud comprises:
determining a subdivision map of the target object based on the outer contour point cloud;
and determining the volume of the target object based on the subdivision map.
7. The method of claim 6, wherein determining the subdivision map of the target object based on the outer contour point cloud comprises:
filtering the outer contour point cloud to obtain a target outer contour point cloud;
determining a normal vector of each point included in the target outer contour point cloud;
determining an envelope body of the target object based on the normal vectors;
and subdividing the envelope body to obtain the subdivision map.
8. The method of claim 7, wherein filtering the outer contour point cloud to obtain a target outer contour point cloud comprises:
for each target point included in the outer contour point cloud, performing the following operations to obtain a first outer contour point cloud: determining a first preset space centered on the target point, determining a density of the first preset space, and determining each target point whose density is greater than or equal to a preset density as a point in the first outer contour point cloud;
and for each target point included in the first outer contour point cloud, performing the following operations to obtain the target outer contour point cloud: determining a second preset space centered on the target point; determining a coordinate mean of points included in the second preset space; determining a coordinate median of the points included in the second preset space; determining a first difference between the coordinate mean and the coordinates of the target point; determining a second difference between the coordinate median and the coordinates of the target point; determining the coordinate mean as the coordinates of the target point in response to the first difference being smaller than the second difference; and determining the coordinate median as the coordinates of the target point in response to the first difference being greater than the second difference.
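A sketch of claim 8's two-pass filter, reading the "preset spaces" as balls of a fixed radius (the radius and density threshold are assumed tuning parameters, not values from the patent):

```python
import numpy as np
from scipy.spatial import cKDTree

def filter_cloud(pts, radius=0.02, min_density=8):
    """Pass 1 keeps points whose ball neighbourhood is dense enough (the first
    outer contour point cloud); pass 2 snaps each kept point to whichever of
    its neighbourhood mean / median lies closer to it (the target cloud)."""
    tree = cKDTree(pts)
    counts = np.array([len(tree.query_ball_point(p, radius)) for p in pts])
    first = pts[counts >= min_density]          # pass 1: density threshold

    tree = cKDTree(first)
    target = first.copy()
    for i, p in enumerate(first):
        idx = tree.query_ball_point(p, radius)  # pass 2: second preset space
        mean = first[idx].mean(axis=0)
        median = np.median(first[idx], axis=0)
        # the coordinate mean wins when its difference from p is the smaller one
        if np.linalg.norm(mean - p) < np.linalg.norm(median - p):
            target[i] = mean
        else:
            target[i] = median
    return target
```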
9. The method of claim 7, wherein determining the envelope body of the target object based on the normal vectors comprises:
performing surface reconstruction on the target outer contour point cloud based on the normal vectors to obtain a plurality of target surfaces of the target object;
determining an isosurface included in the target outer contour point cloud based on the normal vectors;
and connecting the plurality of target surfaces based on the isosurface to obtain the envelope body.
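Claim 9's pairing of normal vectors with an isosurface matches how Poisson surface reconstruction works: an indicator function is fitted to the oriented normals and its isosurface is extracted as a closed envelope mesh. A sketch using Open3D as an assumed tool (the patent names no library, and the parameters below are illustrative):

```python
import open3d as o3d

# the target outer contour point cloud; the file name is a placeholder
pcd = o3d.io.read_point_cloud("target_outer_contour.ply")

# normal vector of each point, oriented consistently across the cloud
pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(k=15)

# fit an indicator function to the oriented normals and extract its
# isosurface as a watertight envelope mesh
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
```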
10. The method of claim 7, wherein subdividing the envelope body to obtain the subdivision map comprises:
determining a tetrahedral subdivision model;
and restoring a surface of the tetrahedral subdivision model based on the envelope body to obtain the subdivision map.
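For the volume itself, any closed and consistently oriented envelope mesh — tetrahedrally subdivided or not — can be integrated by summing the signed tetrahedra that each triangle forms with a fixed apex (the divergence theorem). A sketch under that assumption:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed triangle mesh: each face and the origin
    form a tetrahedron; the signed tetra volumes sum to the enclosed volume."""
    v = vertices[triangles]                       # (F, 3, 3) face corner coords
    signed = np.einsum('ij,ij->i', v[:, 0], np.cross(v[:, 1], v[:, 2])) / 6.0
    return abs(signed.sum())
```

With the mesh from the previous sketch, mesh_volume(np.asarray(mesh.vertices), np.asarray(mesh.triangles)) would give the target object's volume.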
11. A volume determination apparatus, comprising:
an acquisition module, configured to acquire a first image obtained by shooting a target object with a first device and a second image obtained by shooting the target object with a second device, wherein the first device and the second device are arranged opposite to each other;
a first determining module, configured to determine an outer contour point cloud of the target object based on first parameter information of each first pixel point included in the first image and second parameter information of each second pixel point included in the second image, wherein the first parameter information and the second parameter information each include three-dimensional coordinate information and a color parameter of the corresponding pixel point;
and a second determining module, configured to determine a volume of the target object based on the outer contour point cloud.
12. A computer-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 10.
13. An electronic device comprising a memory and a processor, wherein the memory has stored therein a computer program, and wherein the processor is arranged to execute the computer program to perform the method of any of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210535894.9A | 2022-05-17 | 2022-05-17 | Volume determination method and device, storage medium and electronic device
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210535894.9A | 2022-05-17 | 2022-05-17 | Volume determination method and device, storage medium and electronic device
Publications (1)
Publication Number | Publication Date |
---|---|
CN114862940A (en) | 2022-08-05 |
Family
ID=82636446
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210535894.9A | Volume determination method and device, storage medium and electronic device | 2022-05-17 | 2022-05-17
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114862940A (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120307010A1 (en) * | 2011-06-06 | 2012-12-06 | Microsoft Corporation | Object digitization |
US20160029648A1 (en) * | 2013-03-15 | 2016-02-04 | Csb-System Ag | Device for volumetrically measuring a slaughter animal body object |
US20170228885A1 (en) * | 2014-08-08 | 2017-08-10 | Cargometer Gmbh | Device and method for determining the volume of an object moved by an industrial truck |
WO2018025842A1 (en) * | 2016-08-04 | 2018-02-08 | 株式会社Hielero | Point group data conversion system, method, and program |
CN108573221A (en) * | 2018-03-28 | 2018-09-25 | 重庆邮电大学 | A Vision-based Saliency Detection Method for Robotic Target Parts |
CN109655019A (en) * | 2018-10-29 | 2019-04-19 | 北方工业大学 | Cargo volume measurement method based on deep learning and three-dimensional reconstruction |
CN113256640A (en) * | 2021-05-31 | 2021-08-13 | 北京理工大学 | Method and device for partitioning network point cloud and generating virtual environment based on PointNet |
CN113538666A (en) * | 2021-07-22 | 2021-10-22 | 河北农业大学 | A fast reconstruction method for a three-dimensional plant model
CN113870435A (en) * | 2021-09-28 | 2021-12-31 | 浙江华睿科技股份有限公司 | Point cloud segmentation method and device, electronic equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
SHAO Wei; CUI Pingyuan; CUI Hutao, "Calculation of physical properties of small celestial bodies using a triangulation algorithm" (利用三角剖分算法进行小天体物理属性计算), Journal of Harbin Institute of Technology (哈尔滨工业大学学报), no. 05, 15 May 2010 (2010-05-15) *
Similar Documents
Publication | Title |
---|---|
JP7125512B2 (en) | Object loading method and device, storage medium, electronic device, and computer program | |
CN110874864B (en) | Method, device, electronic equipment and system for obtaining three-dimensional model of object | |
CN108876926B (en) | Navigation method and system in panoramic scene and AR/VR client equipment | |
CN107862744B (en) | Three-dimensional modeling method for aerial image and related product | |
CN113192179B (en) | Three-dimensional reconstruction method based on binocular stereo vision | |
CN108648270A (en) | Unmanned plane real-time three-dimensional scene reconstruction method based on EG-SLAM | |
EP3249613A1 (en) | Data processing method and apparatus | |
CN111612898B (en) | Image processing method, image processing device, storage medium and electronic equipment | |
CN112270709B (en) | Map construction method and device, computer readable storage medium and electronic equipment | |
CN113744408B (en) | Grid generation method, device and storage medium | |
CN112991458A (en) | Rapid three-dimensional modeling method and system based on voxels | |
CN118172507B (en) | Digital twinning-based three-dimensional reconstruction method and system for fusion of transformer substation scenes | |
CN107133977A (en) | A kind of quick stereo matching process that model is produced based on probability | |
GB2534903A (en) | Method and apparatus for processing signal data | |
CN112233149B (en) | Method and device for determining scene flow, storage medium, and electronic device | |
CN113902802A (en) | Visual positioning method and related device, electronic equipment and storage medium | |
CN117710583A (en) | Space-to-ground image three-dimensional reconstruction method, system and equipment based on nerve radiation field | |
CN114066999A (en) | Target positioning system and method based on three-dimensional modeling | |
CN105466399A (en) | Quick semi-global dense matching method and device | |
CN117456114A (en) | Multi-view-based three-dimensional image reconstruction method and system | |
CN118247429A (en) | A method and system for rapid three-dimensional modeling in air-ground collaboration | |
EP3906530B1 (en) | Method for 3d reconstruction of an object | |
CN114862940A (en) | Volume determination method and device, storage medium and electronic device | |
US10861174B2 (en) | Selective 3D registration | |
CN117437357A (en) | Model construction method, device, non-volatile storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||