Disclosure of Invention
The present application mainly aims to provide a loading rate measurement method, apparatus, system, device, storage medium, and program product, which can effectively improve the accuracy of the measured cargo loading rate.
In order to achieve the above object, the present application provides a load rate measurement method comprising:
collecting a current visible light image of a cargo loading scene and acquiring target depth information of a plurality of pixel points in the current visible light image, wherein the target depth information indicates the spatial positions of the object points corresponding to the plurality of pixel points;
performing image segmentation and background modeling on the current visible light image to determine the objects to which the plurality of pixel points in the current visible light image respectively belong;
determining interference points and failure points among the plurality of pixel points, wherein the interference points are pixel points corresponding to loading personnel, and the failure points are pixel points whose target depth information is deviated;
deleting the target depth information of the interference points, and updating the target depth information of each failure point based on the target depth information of surrounding pixel points that belong to the same object as the failure point;
and determining a cargo volume change amount based on the target depth information of the pixel points in the current visible light image and the target depth information of the pixel points in a historical visible light image, and determining the current cargo loading rate based on the cargo volume change amount, wherein the historical visible light image is acquired in the cargo loading scene before the current visible light image.
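As an illustrative sketch of the last step above (not part of the claimed method): assuming a top-down camera over a rectangular cargo compartment of known height and volume, and a uniform ground-cell area per pixel, the volume change and loading rate could be computed as follows. All function and parameter names here are hypothetical.

```python
import numpy as np

def volume_from_depth(depth_map, compartment_height, cell_area):
    """Approximate cargo volume: each pixel column contributes
    (compartment_height - depth) * cell_area, clipped at zero
    (assumes the camera looks straight down from compartment_height)."""
    fill_height = np.clip(compartment_height - depth_map, 0.0, None)
    return float(fill_height.sum() * cell_area)

def loading_rate(depth_now, depth_hist, compartment_volume,
                 compartment_height, cell_area, prev_rate=0.0):
    """Update the loading rate from the volume change between the
    current depth map and a historical depth map."""
    v_now = volume_from_depth(depth_now, compartment_height, cell_area)
    v_hist = volume_from_depth(depth_hist, compartment_height, cell_area)
    delta_v = v_now - v_hist          # cargo volume change amount
    return prev_rate + delta_v / compartment_volume
```

For example, if the cargo surface rises from the compartment floor by one meter across a 2 m² footprint in a 10 m³ compartment, the loading rate increases by 0.2.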
Optionally, the acquiring the target depth information of the plurality of pixels in the current visible light image includes:
acquiring first depth information of the plurality of pixel points by laser ranging through a depth camera;
acquiring second depth information of the plurality of pixel points by binocular parallax through a multi-view camera;
and fusing the first depth information and the second depth information of the plurality of pixel points to obtain target depth information of the plurality of pixel points.
Optionally, the fusing the first depth information and the second depth information of the plurality of pixel points to obtain target depth information of the plurality of pixel points includes:
for any one of the plurality of pixel points, taking the second depth information as the target depth information in the case that the depth value in the second depth information is smaller than a reference threshold;
taking the first depth information as the target depth information in the case that the depth value in the second depth information is not smaller than the reference threshold;
wherein the depth value indicates the distance between the object point corresponding to the pixel point and the camera.
Optionally, the fusing the first depth information and the second depth information of the plurality of pixel points to obtain target depth information of the plurality of pixel points includes:
for any one of the plurality of pixel points, performing weighted average on the depth value in the first depth information and the depth value in the second depth information to obtain a depth value in the target depth information;
wherein the depth value indicates the distance between the object point corresponding to the pixel point and the camera.
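The two optional fusion rules above can be sketched as follows (an illustrative simplification in which each depth source contributes only a per-pixel depth value; the function names, and the interpretation of `d1` as laser-ranging depth and `d2` as binocular-parallax depth, are assumptions for the sketch):

```python
import numpy as np

def fuse_threshold(d1, d2, ref_threshold):
    """Threshold rule: trust binocular parallax (d2) at close range,
    and laser ranging (d1) at or beyond the reference threshold."""
    return np.where(d2 < ref_threshold, d2, d1)

def fuse_weighted(d1, d2, w1=0.5, w2=0.5):
    """Weighted-average rule over the two depth sources."""
    return (w1 * d1 + w2 * d2) / (w1 + w2)
```

The threshold rule reflects that parallax-based depth tends to be more reliable at short range, while laser ranging degrades less with distance.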
Optionally, the acquiring the current visible light image in the cargo loading scene and acquiring the target depth information of a plurality of pixel points in the current visible light image includes:
collecting a plurality of reference images, wherein the plurality of reference images comprise the current visible light image and adjacent frames of the current visible light image, and the reference images comprise the plurality of pixel points;
acquiring reference depth information of the plurality of pixel points in the plurality of reference images;
and for any one of the plurality of pixel points, fusing the plurality of pieces of reference depth information of that pixel point to obtain the target depth information of the pixel point.
Optionally, the updating the target depth information of the failure point based on the target depth information of the pixels around the failure point and belonging to the same object as the failure point includes:
fusing the target depth information of a plurality of surrounding pixel points that belong to the same object as the failure point, to obtain the updated target depth information of the failure point.
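This update can be sketched as follows, assuming the fusion is a simple mean over same-object neighbors within a small window (the window size, the mean as the fusion rule, and all names are illustrative assumptions):

```python
import numpy as np

def update_failure_point(depth, labels, y, x, radius=1):
    """Replace a failure point's depth value with the mean depth of
    surrounding pixels that belong to the same object (same label)."""
    h, w = depth.shape
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    patch = depth[y0:y1, x0:x1]
    mask = labels[y0:y1, x0:x1] == labels[y, x]  # same-object neighbors
    mask[y - y0, x - x0] = False                 # exclude the failure point itself
    if mask.any():
        depth[y, x] = patch[mask].mean()
    return depth
```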
In addition, in order to achieve the above object, the present application also proposes a load rate measuring device including:
a data acquisition module configured to acquire a current visible light image of a cargo loading scene and target depth information of a plurality of pixel points in the current visible light image, wherein the target depth information indicates the spatial positions of the object points corresponding to the pixel points;
an image processing module configured to perform image segmentation and background modeling on the current visible light image to determine the objects to which the plurality of pixel points in the current visible light image respectively belong;
a pixel point determining module configured to determine interference points and failure points among the plurality of pixel points, wherein the interference points are pixel points corresponding to loading personnel, and the failure points are pixel points whose target depth information is deviated;
an information updating module configured to delete the target depth information of the interference points and to update the target depth information of each failure point based on the target depth information of surrounding pixel points that belong to the same object as the failure point;
and a loading rate determining module configured to determine a cargo volume change amount based on the target depth information of the pixel points in the current visible light image and the target depth information of the pixel points in a historical visible light image, and to determine the current cargo loading rate based on the cargo volume change amount, wherein the historical visible light image is acquired in the cargo loading scene before the current visible light image.
Optionally, the data acquisition module includes:
a first acquisition submodule configured to acquire first depth information of the plurality of pixel points by laser ranging through the depth camera;
a second acquisition submodule configured to acquire second depth information of the plurality of pixel points by binocular parallax through the multi-view camera;
and a fusion submodule configured to fuse the first depth information and the second depth information of the plurality of pixel points to obtain the target depth information of the plurality of pixel points.
Optionally, the fusion submodule is configured to, for any one of the plurality of pixel points, take the second depth information as the target depth information when a depth value in the second depth information is smaller than a reference threshold value, and take the first depth information as the target depth information when the depth value in the second depth information is not smaller than the reference threshold value, where the depth value indicates a distance between an object point corresponding to the pixel point and a camera.
Optionally, the fusion submodule is configured to, for any one of the plurality of pixel points, perform weighted average on a depth value in the first depth information and a depth value in the second depth information to obtain a depth value in the target depth information, where the depth value indicates a distance between an object point corresponding to the pixel point and a camera.
Optionally, the data acquisition module is configured to acquire a plurality of reference images, where the plurality of reference images include the current visible light image and an adjacent frame of the current visible light image, and the reference images include the plurality of pixel points, acquire reference depth information of the plurality of pixel points in the plurality of reference images, and fuse the plurality of reference depth information of the pixel points for any one of the plurality of pixel points to obtain target depth information of the pixel point.
Optionally, the information updating module is configured to fuse the target depth information of a plurality of surrounding pixel points that belong to the same object as the failure point, so as to obtain the updated target depth information of the failure point.
In addition, in order to achieve the above object, the present application also proposes a load rate measurement apparatus including a memory, a processor, and a load rate measurement program stored on the memory and executable on the processor, the load rate measurement program being configured to implement the load rate measurement method as described above.
In addition, in order to achieve the above object, the present application also proposes a storage medium having stored thereon a load rate measurement program which, when executed by a processor, implements the load rate measurement method as described above.
Furthermore, to achieve the above object, the present application provides a computer program product comprising a load-rate measurement program which, when executed by a processor, implements a load-rate measurement method as described above.
One or more technical solutions provided by the present application have at least the following technical effects:
In the solution provided by the present application, a current visible light image of the cargo loading scene and target depth information of a plurality of pixel points in the current visible light image are acquired, and the target depth information indicates the spatial positions of the object points corresponding to the plurality of pixel points, so that the current cargo loading rate can be determined based on the target depth information. Among the acquired pixel points there may be interference points and failure points, where the interference points are pixel points corresponding to loading personnel and the failure points are pixel points whose target depth information is deviated. The target depth information of an interference point or a failure point introduces errors into the calculation of the current cargo loading rate. Therefore, image segmentation and background modeling are performed on the current visible light image to determine the objects to which the plurality of pixel points respectively belong; the interference points corresponding to loading personnel can thereby be identified and their target depth information deleted, and the target depth information of each failure point can be updated based on the target depth information of surrounding pixel points that belong to the same object as the failure point. Because the target depth information of surrounding pixel points belonging to the same object does not jump sharply relative to that of the failure point, updating the failure point's target depth information from those pixel points ensures the accuracy of the updated target depth information.
Therefore, the cargo volume change amount can be accurately determined based on the target depth information of the pixel points in the current visible light image and the target depth information of the pixel points in the historical visible light image, and the current cargo loading rate can then be determined based on the cargo volume change amount, so that the accuracy of the current cargo loading rate is effectively improved.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the technical solution of the present application and are not intended to limit the present application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
Fig. 1 is a schematic diagram of a load rate measurement system according to an embodiment of the present application. Referring to fig. 1, the load rate measurement system includes a vision device 101 and a measurement device 102. The vision device 101 is communicatively connected to the measurement device 102; illustratively, the two are connected by a wired or wireless network. The vision device 101 is disposed at a predetermined distance from the cargo loading area, with its field of view covering the cargo loading area.
Illustratively, the vision device 101 is disposed near the cargo loading table, at a predetermined distance from it, to capture the cargo loading scene. For example, a camera pole is provided directly in front of the cargo loading table, and the vision device 101 is mounted at the top of the pole, facing the cargo loading table.
Wherein the vision device 101 comprises a visible light camera and a depth camera. By way of example, the measurement device 102 includes a computer, a cell phone, a tablet computer, or other type of measurement device.
In one possible implementation, referring to fig. 2, the vision device 101 in the load rate measurement system includes one depth camera and one visible light camera. The visible light camera is used to collect visible light images of the cargo loading scene, and the depth camera is used to collect target depth information of a plurality of pixel points in those visible light images, the target depth information indicating the spatial positions of the object points corresponding to the pixel points. The measurement device 102 is used to perform image segmentation and background modeling on a current visible light image to determine the objects to which the plurality of pixel points in the current visible light image respectively belong; to determine interference points and failure points among the plurality of pixel points, where the interference points are pixel points corresponding to loading personnel and the failure points are pixel points whose target depth information is deviated; to delete the target depth information of the interference points and update the target depth information of each failure point based on the target depth information of surrounding pixel points belonging to the same object; and to determine a cargo volume change amount based on the target depth information of the pixel points in the current visible light image and in a historical visible light image, and determine the current cargo loading rate based on that change amount. The historical visible light image is acquired in the cargo loading scene before the current visible light image. The loading rate is the volume ratio of the cargo within the cargo compartment, i.e. the ratio of the volume of the cargo to the volume of the cargo compartment.
It should be noted that, in the case where the vision apparatus 101 in the loading rate measurement system includes one depth camera and one visible light camera, the two cameras may together constitute a binocular camera. Correspondingly, the depth camera is used to acquire first depth information of a plurality of pixel points in the current visible light image by laser ranging, while the binocular camera formed by the depth camera and the visible light camera is used to acquire second depth information of the plurality of pixel points by binocular parallax, i.e. the parallax between the depth camera and the visible light camera. The measurement device 102 is configured to fuse the first depth information and the second depth information of the plurality of pixel points to obtain their target depth information. The depth camera includes a laser and a detector: it emits laser light through the laser and then detects the echo signal through the detector, thereby obtaining the depth information of pixel points in the visible light image of the cargo loading scene.
In another possible implementation, the vision device 101 in the load rate measurement system includes a depth camera and a multi-view camera, where the multi-view camera includes at least two visible light cameras. Accordingly, any one of the visible light cameras in the multi-view camera is used to capture the current visible light image of the cargo loading scene. The depth camera acquires first depth information of a plurality of pixel points in the current visible light image by laser ranging, the multi-view camera acquires second depth information of the plurality of pixel points by binocular parallax, and the measurement device 102 fuses the first depth information and the second depth information to obtain the target depth information of the plurality of pixel points.
Referring to fig. 3, the vision apparatus 101 in the load rate measurement system includes one depth camera and one binocular camera, which is composed of two visible light cameras. The binocular camera is capable of determining second depth information of pixels in the visible light image under the cargo loading scene using binocular parallax. In this way, the vision device 101 has two sources of depth information, namely a depth camera and a binocular camera, so that more accurate depth information can be obtained using the depth information of these two sources.
Illustratively, referring to fig. 4, the vision apparatus 101 in the load rate measurement system includes one depth camera and one trinocular camera composed of three visible light cameras. Illustratively, the three visible light cameras have different focal lengths, so that each visible light camera is responsible for capturing images within a different distance range. For example, the first visible light camera is fitted with a wide-angle lens for close-range observation, providing a wide viewing angle; the second visible light camera is fitted with a standard lens for medium-range observation, providing a balance of viewing angle and ranging capability; and the third visible light camera is fitted with a telephoto lens for long-range observation, which helps detect distant objects. Compared with a monocular or binocular camera, the trinocular configuration provides a wider viewing angle and more accurate ranging capability. Since there is parallax between the three visible light cameras of the trinocular camera, in the embodiment of the present application the vision apparatus 101 can determine the second depth information of the pixel points in the visible light image of the cargo loading scene through the parallax between any two of the three visible light cameras.
Fig. 5 is a flowchart of a first embodiment of the loading rate measurement method according to the present application. Referring to fig. 5, taking the load rate measurement system as the execution subject by way of example, the load rate measurement method includes:
Step S10, collecting a current visible light image in a cargo loading scene and collecting target depth information of a plurality of pixel points in the current visible light image, wherein the target depth information indicates the space positions of object points corresponding to the plurality of pixel points.
Visible light images are also called RGB (Red Green Blue) images, i.e. full-color images. A visible light image has dense pixel points, each carrying color information, and a clear picture, which facilitates object recognition. The visible light image is acquired by a visible light camera in the load rate measurement system. A visible light camera, also known as an RGB camera, captures red, green, and blue light through three separate sensors or filters, thereby generating a full-color image.
The cargo loading scene includes various objects such as the cargo and the cargo box. The current visible light image of the cargo loading scene is collected, and the objects in the scene are recorded through the pixel points of the image. Each pixel point in the visible light image corresponds to an object point in real space; in the cargo loading scene, the object point has an actual spatial position, and the target depth information of the pixel point reflects that spatial position.
Optionally, the target depth information includes the depth value and angle information of a pixel point. The depth value is the distance between the object point corresponding to the pixel point and the camera, and the angle information describes the direction of that object point. For example, the angle information is the angle between a reference line and the line connecting the camera and the object point corresponding to the pixel point; the position and direction of the reference line are not limited by the present application. Optionally, the target depth information includes the spatial coordinates of the object point corresponding to the pixel point. For example, after the depth values and angle information of a plurality of pixel points are acquired by the depth camera, a point cloud is constructed from them, containing the three-dimensional coordinates of the object points corresponding to the plurality of pixel points.
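The conversion from a depth value plus angle information into a 3-D coordinate can be sketched as follows. The specific angle convention is not fixed by the application; this sketch assumes the angle information consists of an azimuth and an elevation of the camera-to-object-point ray, with x right, y up, and z along the optical axis:

```python
import math

def object_point_from_depth(depth, azimuth, elevation):
    """Convert a depth value and (assumed) azimuth/elevation angles,
    in radians, into a 3-D coordinate in the camera frame."""
    x = depth * math.cos(elevation) * math.sin(azimuth)
    y = depth * math.sin(elevation)
    z = depth * math.cos(elevation) * math.cos(azimuth)
    return (x, y, z)
```

A point cloud is then just this conversion applied to every pixel point.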
Optionally, the load rate measurement system includes a depth camera, such as a TOF (Time of Flight) camera, which measures distance by the propagation time of light. Accordingly, the target depth information of a plurality of pixel points in the visible light image is acquired by the depth camera. In a specific implementation, the load rate measurement system collects the current visible light image through its visible light camera and collects a depth image at the same moment through the depth camera, aligns the current depth image with the current visible light image based on the image registration parameters between the visible light camera and the depth camera, and then takes the target depth information of the pixel points in the current depth image as the target depth information of the corresponding pixel points in the current visible light image.
It should be noted that, before images are acquired, RGBD (Red Green Blue Depth, i.e. color image plus depth information) calibration is performed on the visible light camera and the depth camera in the load rate measurement system, that is, the two cameras are calibrated. Specifically, this means accurately measuring and adjusting the intrinsic and extrinsic parameters of the cameras, so as to ensure that the collected visible light image and depth image accurately reflect real-world spatial relationships. The calibration process includes the following steps:
Intrinsic calibration: determining parameters such as the focal length and principal point coordinates of each camera, which determine how the scene is projected onto the imaging plane.
Extrinsic calibration: determining the spatial relationship between the visible light camera and the depth camera, including the rotation and translation matrices, to ensure that the two images can be correctly aligned.
Depth calibration: since the depth information of the depth image may contain errors, these errors need to be corrected through calibration to obtain more accurate depth information.
Image registration: aligning the depth image with the visible light image, ensuring that the two are in the same coordinate system.
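The role of the intrinsic and extrinsic parameters in registration can be sketched as follows: a 3-D point in the depth-camera frame is mapped into the visible-camera frame by the extrinsics (rotation R, translation t) and then projected to pixel coordinates by the intrinsic matrix K. This is a generic pinhole-model sketch, not the application's specific calibration procedure:

```python
import numpy as np

def register_point(p_depth, R, t, K):
    """Map a 3-D point from the depth-camera frame into the visible
    camera frame via extrinsics (R, t), then project it to pixel
    coordinates using intrinsics K (focal lengths, principal point)."""
    p_vis = R @ p_depth + t                      # extrinsic transform
    u = K[0, 0] * p_vis[0] / p_vis[2] + K[0, 2]  # fx * X/Z + cx
    v = K[1, 1] * p_vis[1] / p_vis[2] + K[1, 2]  # fy * Y/Z + cy
    return u, v
```

Applying this to every depth pixel tells the system which visible-light pixel each depth measurement belongs to.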
Optionally, the load rate measurement system includes a binocular camera consisting of two visible light cameras, and the target depth information of a plurality of pixel points in the visible light image is acquired by the binocular camera. In a specific implementation, the load rate measurement system collects a visible light image through each of the two visible light cameras, and determines the target depth information of a plurality of pixel points in either visible light image based on the positional offset between corresponding points in the two images.
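For rectified stereo images, the standard triangulation relation behind this positional offset (disparity) is Z = f · B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity. A minimal sketch, with illustrative names:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Binocular triangulation: depth Z = f * B / d, where d is the
    positional offset between corresponding points in the two
    visible light images (rectified stereo assumed)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

For example, with a 1000-pixel focal length and a 0.1 m baseline, a 50-pixel disparity corresponds to a depth of about 2 m.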
Optionally, the target depth information of a plurality of pixel points in the current visible light image is determined in combination with the preceding and following frames. Correspondingly, collecting the current visible light image of the cargo loading scene and acquiring the target depth information of a plurality of pixel points in it includes: collecting a plurality of reference images, where the plurality of reference images include the current visible light image and adjacent frames of the current visible light image, and each reference image contains the plurality of pixel points; acquiring the reference depth information of the plurality of pixel points in the plurality of reference images; and, for any one of the plurality of pixel points, fusing the plurality of pieces of reference depth information of that pixel point to obtain its target depth information.
The number of reference images collected is not limited. For example, the current visible light image and the two frames preceding it are collected as reference images, or the current visible light image together with the frame before and the frame after it are collected as reference images. The manner of fusing the plurality of pieces of reference depth information of a pixel point is likewise not limited; for example, the target depth information of the pixel point is obtained by weighted averaging of the plurality of pieces of reference depth information.
In the embodiment of the present application, it is considered that the camera's acquisition frame rate is high, so the depth information of a pixel point does not change much between adjacent frames. Therefore, the target depth information of the plurality of pixel points in the current visible light image is determined in combination with the preceding and following frames: a plurality of reference images including the current visible light image are collected, the reference depth information of the plurality of pixel points in these reference images is acquired, and for any pixel point, its pieces of reference depth information are fused to obtain its target depth information. In this way, the accuracy of the target depth information can be greatly improved.
Step S20, performing image segmentation and background modeling on the current visible light image to determine the objects to which a plurality of pixel points in the current visible light image respectively belong.
The load rate measurement system collects, through the vision device, the current visible light image of the cargo loading scene and the target depth information of a plurality of pixel points in it, then sends the visible light image and the target depth information to the measurement device, which performs step S20 and the subsequent steps.
Image segmentation refers to the process of dividing an image into a plurality of image regions, each corresponding to an object in the image with similar properties. One implementation of image segmentation for the current visible light image is to set a plurality of object labels for the cargo loading scene and segment the current visible light image based on these object labels, obtaining a plurality of image regions and the object label corresponding to each region. Pixel points within the same image region belong to the same object. The present application does not limit the algorithm used for image segmentation.
Image segmentation has already distinguished the pixel points belonging to different objects; background modeling is then used to determine the pixel points corresponding to objects in motion. Background modeling refers to building a mathematical model representing the static background, used to distinguish moving objects from the background; the present application does not limit the algorithm used. In the cargo loading scene, the loading personnel are in motion, and background modeling can identify the moving objects in the image, so the pixel points corresponding to the loading personnel in the current visible light image can be accurately determined.
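Since the application does not fix the background modeling algorithm, one common choice, a running-average background model with a difference threshold, can be sketched as follows (the update rate `alpha` and `threshold` values are illustrative):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly adapt the static
    background estimate toward the current frame."""
    return (1.0 - alpha) * background + alpha * frame

def moving_mask(background, frame, threshold=25.0):
    """Pixels that differ strongly from the background model are
    flagged as moving objects (e.g. loading personnel)."""
    return np.abs(frame.astype(float) - background) > threshold
```

The mask of moving pixels, intersected with the segmentation labels, yields the pixel points attributed to loading personnel.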
Step S30, determining interference points and failure points among the plurality of pixel points, wherein the interference points are pixel points corresponding to loading personnel, and the failure points are pixel points whose target depth information is deviated.
The calculation of the cargo loading rate mainly relies on the depth information of the cargo and the cargo box, while the depth information of the loading personnel is interference for this calculation. Therefore, the interference points, i.e. the pixel points corresponding to loading personnel, are determined from the plurality of pixel points so as to eliminate this interference. The pixel points corresponding to loading personnel have been determined through the background modeling described above, and these pixel points can then be determined as the interference points.
The failure points among the plurality of pixel points can be determined in the following two ways:
First, failure points whose target depth information is deviated are determined by a set depth information reference range. In this implementation, the target depth information of each pixel point is compared with the depth information reference range, and a pixel point is determined to be a failure point when its target depth information falls outside the range. The reference range can be set according to the distance between the cargo and the camera in the cargo loading scene and the direction of the cargo relative to the camera; for example, the depth value range in the reference range is set to 2-17 meters.
In the embodiment of the present application, it is considered that the distance between the cargo and the camera and the cargo's direction relative to the camera can be determined in the cargo loading scene, so the depth information reference range is set based on them. If the target depth information of a pixel point falls outside this range, its measurement is deviated and the pixel point is determined to be a failure point. In this way, failure points can be determined quickly and accurately.
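The range check above is a one-line test per pixel; a vectorized sketch using the example 2-17 m range (the bounds are the example values from the text, not fixed by the application):

```python
import numpy as np

def failure_points_by_range(depth, d_min=2.0, d_max=17.0):
    """Flag pixels whose depth value falls outside the reference
    range set for the loading scene (2-17 m in the example)."""
    return (depth < d_min) | (depth > d_max)
```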
Second, failure points whose target depth information deviates are judged by the target depth information of surrounding pixel points. In this implementation, for each of the plurality of pixel points, the depth values of the surrounding pixel points are weighted and averaged to obtain a reference depth value. The absolute difference between the depth value of the pixel point and the reference depth value is then determined, and the pixel point is determined to be a failure point when the absolute difference is larger than a preset reference threshold. The reference threshold can be set to any value as desired.
In the embodiment of the application, a pixel point and its surrounding pixel points most likely belong to the same object, or to adjacent objects even when they do not, so their depth information should be similar. On this basis, the depth values of the surrounding pixel points are weighted and averaged to obtain a reference depth value, and whether the depth value of the pixel point deviates is judged from the absolute difference between its depth value and the reference depth value, so that failure points whose target depth information deviates can be accurately determined.
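A sketch of this second approach, under the assumptions of equal neighbor weights and an 8-neighborhood (the text fixes neither the weights nor the neighborhood size):

```python
# Illustrative sketch: flag a pixel as a failure point when its depth
# differs from the average of its 8 neighbors by more than a threshold.
# Equal weights and edge padding are assumptions for this example.
import numpy as np

def find_deviating_points(depth_map, threshold=0.5):
    """Return a boolean mask of pixels whose depth deviates from the
    (equally weighted) average of their 8 neighbors by > threshold."""
    d = np.asarray(depth_map, dtype=float)
    padded = np.pad(d, 1, mode="edge")
    # Sum of the full 3x3 neighborhood minus the center = sum of 8 neighbors.
    neighbor_sum = sum(
        padded[i:i + d.shape[0], j:j + d.shape[1]]
        for i in range(3) for j in range(3)
    ) - d
    reference = neighbor_sum / 8.0  # weighted average with equal weights
    return np.abs(d - reference) > threshold
```

A pixel surrounded by similar depths passes the check, while an isolated depth spike is flagged as a failure point.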
And S40, deleting the target depth information of the interference points, and updating the target depth information of each failure point based on the target depth information of the pixel points around the failure point that belong to the same object as the failure point.
Deleting the target depth information of the interference points, namely the target depth information of the pixel points corresponding to the loading personnel, avoids the interference caused by that depth information and improves the accuracy of the determined cargo loading rate.
Through the above image segmentation, the objects to which the plurality of pixel points in the current visible light image respectively belong have already been determined. In this step, therefore, the pixel points around a failure point that belong to the same object can be identified from the image segmentation result, and the target depth information of the failure point can be updated based on their target depth information.
Optionally, updating the target depth information of the failure point based on the target depth information of the pixel points around the failure point that belong to the same object as the failure point comprises: taking the target depth information of any one such surrounding pixel point as the updated target depth information of the failure point. In this way, the target depth information of the failure point can be determined simply and quickly.
Optionally, updating the target depth information of the failure point based on the target depth information of the pixel points around the failure point that belong to the same object as the failure point comprises: fusing the target depth information of a plurality of such surrounding pixel points to obtain the updated target depth information of the failure point. For example, the depth values of the surrounding pixel points that belong to the same object as the failure point are averaged to obtain the depth value of the failure point.
In the embodiment of the application, fusing the target depth information of a plurality of pixel points around the failure point that belong to the same object further reduces the error and improves the accuracy of the target depth information of the failure point.
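The averaging variant of this update can be sketched as follows; the per-pixel label map (the image segmentation result) and the 8-neighborhood are assumptions, since the text does not prescribe how "surrounding" pixels are chosen:

```python
# Illustrative sketch: fill one failure point with the mean depth of its
# valid 8-neighbors that the segmentation assigned to the same object.
import numpy as np

def fill_failure_point(depth_map, labels, y, x):
    """Replace depth_map[y, x] with the average depth of neighboring
    pixels that belong to the same segmented object and are not NaN."""
    d = np.asarray(depth_map, dtype=float)
    h, w = d.shape
    samples = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            if ((dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w
                    and labels[ny][nx] == labels[y][x]
                    and not np.isnan(d[ny, nx])):
                samples.append(d[ny, nx])
    if samples:  # leave the point unchanged if no valid same-object neighbor
        d[y, x] = sum(samples) / len(samples)
    return d
```

Neighbors belonging to a different object (e.g. a person silhouette bordering the cargo) are excluded, which is what prevents depth jumps across object boundaries from corrupting the fill.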
And S50, determining cargo volume change amount based on target depth information of pixel points in the current visible light image and target depth information of pixel points in the historical visible light image, and determining the current cargo loading rate based on the cargo volume change amount, wherein the historical visible light image is acquired in the cargo loading scene before the current visible light image.
The load rate is the volume ratio of the cargo within the cargo compartment, i.e. the ratio of the volume of the cargo to the volume of the cargo compartment. The cargo volume change amount refers to the amount of change in cargo volume when the current visible light image is acquired relative to the cargo volume when the historical visible light image is acquired.
Determining the cargo volume change amount based on the target depth information of the pixel points in the current visible light image and that of the pixel points in the historical visible light image comprises: determining, from the two sets of target depth information, a depth difference value representing the depth of the newly loaded cargo, and multiplying the depth difference value by the width and the height of the cargo to obtain the cargo volume change amount.
It should be noted that, in the cargo loading scene, as the amount of cargo loaded in the cargo compartment increases, the distance between the cargo and the camera decreases, so the depth value detected by the camera decreases; the depth difference of a pixel point in the current visible light image relative to the historical visible light image can therefore represent the depth of the newly loaded cargo. Illustratively, in the embodiment of the application, the width and the height of the cargo compartment are taken as the width and the height of the cargo, which improves the calculation efficiency of the cargo volume change amount and thus of the cargo loading rate. Alternatively, the width and the height of the loaded cargo are determined based on the target depth information of the pixel points in the current visible light image, and the cargo volume change amount is determined based on the currently determined width and height, which improves the accuracy of the cargo loading rate.
Illustratively, determining the current cargo loading rate based on the cargo volume change amount includes dividing the cargo volume change amount by the volume of the cargo box to obtain a loading rate change amount, and determining the sum of the loading rate change amount and the historical cargo loading rate as the current cargo loading rate.
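The arithmetic of these two steps, the volume of newly loaded cargo and the loading-rate update, can be sketched as follows (the function names and example figures are illustrative, not from the disclosure):

```python
def cargo_volume_change(depth_diff, width, height):
    """Volume of newly loaded cargo: the depth decrease between the two
    images times the cargo cross-section (width x height)."""
    return depth_diff * width * height

def update_load_rate(volume_change, box_volume, historical_rate):
    """Current load rate = historical rate + volume change / box volume."""
    return historical_rate + volume_change / box_volume

# e.g. a 0.5 m depth decrease over a 2.0 m x 1.5 m cross-section adds
# 1.5 m^3 of cargo; in a 30 m^3 box this raises a 40% rate to 45%.
rate = update_load_rate(cargo_volume_change(0.5, 2.0, 1.5), 30.0, 0.40)
```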
Wherein the historical cargo loading rate is stored in the measurement device for ready recall. Illustratively, the historical cargo loading rate is also stored after being obtained by the method provided by the present application. Optionally, the historical visible light image is a previous frame image adjacent to the current visible light image. Of course, the history visible light image and the current visible light image are not limited to two adjacent frame images.
Fig. 6 is a schematic diagram of a cargo loading rate measurement scenario. Referring to fig. 6, the vision apparatus is disposed directly in front of the cargo box. The depth camera detects the depth value L1 of object point A, the depth value d1 of object point B, and the angle θ between them, and the width x1 of the cargo box can be determined based on the depth value L1, the depth value d1, and the angle θ. Likewise, the height and depth of the cargo box can be measured, and the volume of the cargo box is obtained based on its height, depth, and width x1.
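The disclosure does not state the formula relating L1, d1, and θ to x1. Assuming θ is the angle between the camera's two lines of sight to A and B, one plausible relation is the law of cosines:

```python
# Hypothetical sketch (the exact relation is not given in the text):
# distance between object points A and B from their measured depths and
# the angle between the two lines of sight, via the law of cosines.
import math

def box_width(L1, d1, theta):
    """x1 = sqrt(L1^2 + d1^2 - 2*L1*d1*cos(theta)), theta in radians."""
    return math.sqrt(L1**2 + d1**2 - 2 * L1 * d1 * math.cos(theta))

# e.g. both corners at 5 m depth, 30 degrees apart
x1 = box_width(5.0, 5.0, math.radians(30))
```

For the symmetric case (L1 = d1) this reduces to 2·L1·sin(θ/2), the chord of the viewing angle.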
Optionally, after determining the current cargo loading rate, the measuring device displays it on its interface so that the user can check it at any time. The measuring device may also display the loading rate change amount on the interface so that the user can understand the loading efficiency more intuitively.
In the embodiment of the application, the cargo volume change amount is determined based on the target depth information of the pixel points in the current visible light image and the target depth information of the pixel points in the historical visible light image. Because the cargo volume change amount is a change amount of the cargo volume when the current visible light image is acquired relative to the cargo volume when the historical visible light image is acquired, the current cargo loading rate can be accurately determined based on the cargo volume change amount and the historical cargo loading rate corresponding to the acquisition time of the historical visible light image.
In the scheme provided by the application, the current visible light image in the cargo loading scene and the target depth information of a plurality of pixel points in that image are acquired; since the target depth information indicates the spatial positions of the object points corresponding to the plurality of pixel points, the current cargo loading rate can be determined based on it. The acquired pixel points may include interference points, namely pixel points corresponding to loading personnel, and failure points, namely pixel points whose target depth information deviates. Because the target depth information of interference points and failure points introduces errors into the calculation of the current cargo loading rate, image segmentation and background modeling are performed on the current visible light image to determine the objects to which the plurality of pixel points respectively belong. The interference points corresponding to loading personnel can thereby be identified and their target depth information deleted, and the target depth information of each failure point can be updated based on the target depth information of the surrounding pixel points that belong to the same object as the failure point. Because no large jump occurs between the target depth information of those surrounding pixel points and that of the failure point, updating the failure point in this way ensures the accuracy of the updated target depth information.
Therefore, the cargo volume change amount can be accurately determined based on the target depth information of the pixel point in the current visible light image and the target depth information of the pixel point in the historical visible light image, so that the current cargo loading rate is determined based on the cargo volume change amount, and the accuracy of the current cargo loading rate can be effectively improved.
Fig. 7 is a flowchart of a second embodiment of the loading rate measuring method according to the present application, based on the first embodiment shown in fig. 5. Taking the loading rate measurement system as the execution subject as an example, in the second embodiment, step S10 includes:
And step S101, collecting a current visible light image in the cargo loading scene.
Step S102, acquiring first depth information of a plurality of pixel points in a current visible light image by using a laser ranging through a depth camera.
Step S103, acquiring second depth information of a plurality of pixel points in the current visible light image by utilizing binocular parallax through a multi-view camera.
Illustratively, the multi-view camera is a binocular camera or a trinocular camera. The binocular camera is composed of a visible light camera and a depth camera, or of two visible light cameras; the trinocular camera is composed of three visible light cameras.
And step S104, fusing the first depth information and the second depth information of the plurality of pixel points to obtain target depth information of the plurality of pixel points.
Optionally, fusing the first depth information and the second depth information of the plurality of pixel points to obtain the target depth information comprises: for any one of the plurality of pixel points, taking the second depth information as the target depth information when the depth value in the second depth information is smaller than a reference threshold, and taking the first depth information as the target depth information when the depth value in the second depth information is not smaller than the reference threshold.
The depth value indicates the distance between the object point corresponding to the pixel point and the camera. The reference threshold may be set according to the ranging accuracy of the binocular camera and the depth camera. For example, if the binocular camera is more accurate than the depth camera within 3 meters, the reference threshold is set to 3 meters: for any pixel point, if the depth value measured by the binocular camera is smaller than 3 meters, the second depth information measured by the binocular camera is used as the target depth information of the pixel point; if it is not smaller than 3 meters, the first depth information measured by the depth camera is used.
In the embodiment of the application, because the two ranging principles differ, the binocular camera is more accurate at short range and the depth camera is more accurate at long range. A reference threshold is therefore set, and for any pixel point, the second depth information measured by the binocular camera is selected as the target depth information when the depth value it measures is smaller than the reference threshold, while the first depth information measured by the depth camera is selected when it is not.
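A minimal sketch of this threshold-based selection for a single pixel (the 3-meter default follows the example above; the function name is illustrative):

```python
def fuse_by_threshold(lidar_depth, stereo_depth, reference_threshold=3.0):
    """Pick the stereo (binocular) depth when it is below the threshold,
    i.e. at close range where stereo is more accurate; otherwise pick the
    laser-ranging (depth camera) value. Depths in meters."""
    return stereo_depth if stereo_depth < reference_threshold else lidar_depth
```

Note that the selection is keyed on the stereo depth value, matching the text: the binocular measurement decides which source is trusted.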
Optionally, fusing the first depth information and the second depth information of the plurality of pixel points to obtain the target depth information comprises: for any one of the plurality of pixel points, performing a weighted average of the depth value in the first depth information and the depth value in the second depth information to obtain the depth value in the target depth information. This method is simple and fast, and improves the efficiency of determining the depth value in the target depth information while ensuring its accuracy.
The application does not limit the weights set for the depth values of the first and second depth information. For example, in a close-range scene the weight of the depth value of the second depth information may be made larger than that of the first depth information, while in a long-range scene the weight of the depth value of the first depth information may be made larger.
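The weighted-average fusion for a single pixel can be sketched as follows (the weights are free parameters, as the text notes; the defaults here are illustrative):

```python
def fuse_weighted(lidar_depth, stereo_depth, w_lidar=0.5, w_stereo=0.5):
    """Weighted average of the two depth measurements. A caller might
    raise w_stereo at close range and w_lidar at long range."""
    return (w_lidar * lidar_depth + w_stereo * stereo_depth) / (w_lidar + w_stereo)
```

With equal weights this is a plain mean; skewing the weights, e.g. `fuse_weighted(4.0, 2.0, 0.8, 0.2)`, pulls the result toward the more trusted source.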
In the embodiment of the application, the depth information of two different sources is acquired, namely, the first depth information of a plurality of pixel points acquired by a depth camera and the second depth information of a plurality of pixel points acquired by a binocular camera are acquired by utilizing a laser ranging principle, and the second depth information is acquired by utilizing a binocular parallax principle, so that the first depth information and the second depth information of the plurality of pixel points are fused to obtain the target depth information of the plurality of pixel points, and errors respectively introduced by the two ranging methods can be compensated to obtain more accurate target depth information.
Fig. 8 is a schematic diagram of a loading rate measurement process according to an exemplary embodiment of the present application. Referring to fig. 8, depth information of the cargo box is acquired by a depth camera, and the cargo box volume is determined based on it. The current visible light image is acquired by a visible light camera, and image segmentation and background modeling are performed on it to realize object identification, namely, to determine the object to which each pixel point in the current visible light image belongs. After the pixel points corresponding to the loading personnel are determined through background modeling, interference is removed, namely, the target depth information of those pixel points is deleted. Point cloud filtering and point cloud filling are then performed: point cloud filtering refers to determining the failure points whose target depth information deviates among the pixel points of the current visible light image, and point cloud filling refers to updating the target depth information of each failure point based on the target depth information of the surrounding pixel points that belong to the same object. The cargo volume change amount can then be determined based on the target depth information of the historical visible light image and that of the current visible light image, and divided by the cargo box volume to obtain the loading rate change amount, from which the cargo loading rate is determined.
Fig. 9 is a schematic diagram of another loading rate measurement process according to an exemplary embodiment of the present application. Referring to fig. 9, unlike the measurement process of fig. 8, which uses only the depth information collected by the depth camera, the measurement process of fig. 9 uses depth information from two different sources: the depth information collected by the depth camera, and the depth information collected by a binocular camera formed by the depth camera and a visible light camera. Specifically, after the visible light camera collects the current visible light image and the depth camera collects a gray-level image, binocular stereoscopic ranging is performed based on the position deviation between corresponding points of the two images. The depth information from the two sources is then fused to obtain the target depth information of the current visible light image, after which point cloud filtering and point cloud filling are performed and the cargo volume change amount is determined, yielding the cargo loading rate.
Fig. 10 is a schematic diagram of still another loading rate measurement process according to an exemplary embodiment of the present application. Referring to fig. 10, unlike the measurement process of fig. 9, in which the binocular camera is composed of a depth camera and a visible light camera, the binocular camera of fig. 10 is composed of two visible light cameras. Correspondingly, after the two visible light cameras each acquire a current visible light image, binocular stereoscopic ranging is performed based on the position deviation between corresponding points of the two visible light images. The subsequent loading rate measurement process is similar to that shown in fig. 9 and is not described again here.
It should be noted that the foregoing examples are only for understanding the present application, and do not limit the method of measuring the loading rate of the present application, and more forms of simple transformation based on the technical concept are all within the scope of the present application.
The present application also provides a loading rate measuring apparatus, referring to fig. 11, the loading rate measuring apparatus includes:
the data acquisition module 10 is used for acquiring a current visible light image in a cargo loading scene and acquiring target depth information of a plurality of pixel points in the current visible light image, wherein the target depth information indicates the spatial positions of object points corresponding to the plurality of pixel points;
the image processing module 20 is configured to perform image segmentation and background modeling on the current visible light image to determine an object to which a plurality of pixel points in the current visible light image belong respectively;
The pixel point determining module 30 is configured to determine an interference point and a failure point in the plurality of pixel points, where the interference point is a pixel point corresponding to a loader, and the failure point is a pixel point with deviation in the target depth information;
An information updating module 40, configured to delete target depth information of an interference point, and update target depth information of a failure point based on target depth information of pixel points around the failure point that belong to the same object as the failure point;
The loading rate determining module 50 is configured to determine a cargo volume change amount based on target depth information of a pixel point in a current visible light image and target depth information of a pixel point in a historical visible light image, and determine a current cargo loading rate based on the cargo volume change amount, wherein the historical visible light image is acquired under a cargo loading scene before the current visible light image.
Optionally, the data acquisition module 10 includes:
The first acquisition submodule is used for acquiring first depth information of a plurality of pixel points by utilizing laser ranging through the depth camera;
the second acquisition sub-module is used for acquiring second depth information of a plurality of pixel points by utilizing binocular parallax through the multi-view camera;
And the fusion sub-module is used for fusing the first depth information and the second depth information of the plurality of pixel points to obtain target depth information of the plurality of pixel points.
Optionally, the fusion sub-module is configured to, for any one of the plurality of pixel points, use the second depth information as target depth information when the depth value in the second depth information is smaller than a reference threshold value, and use the first depth information as target depth information when the depth value in the second depth information is not smaller than the reference threshold value, where the depth value indicates a distance between an object point corresponding to the pixel point and the camera.
Optionally, the fusion sub-module is configured to perform weighted average on a depth value in the first depth information and a depth value in the second depth information for any one of the plurality of pixel points to obtain a depth value in the target depth information, where the depth value indicates a distance between an object point corresponding to the pixel point and the camera.
Optionally, the data acquisition module 10 is configured to: acquire a plurality of reference images, where the plurality of reference images include the current visible light image and an adjacent frame of the current visible light image, and each reference image includes a plurality of pixel points; acquire reference depth information of the plurality of pixel points in the plurality of reference images; and, for any one of the plurality of pixel points, fuse the plurality of pieces of reference depth information of the pixel point to obtain the target depth information of the pixel point.
Optionally, the information updating module 40 is configured to fuse target depth information of a plurality of pixel points around the failure point that belong to the same object as the failure point, so as to obtain the updated target depth information of the failure point.
The loading rate measuring device provided by the application adopts the loading rate measuring method in the embodiment to measure the loading rate of the goods, and can solve the technical problem of inaccurate loading rate of the goods measured in the related technology. Compared with the prior art, the beneficial effects of the loading rate measuring device provided by the application are the same as those of the loading rate measuring method provided by the embodiment, and other technical features of the loading rate measuring device are the same as those disclosed by the method of the embodiment, and are not repeated herein.
The application provides a loading rate measuring device which comprises at least one processor and a memory in communication connection with the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can execute the loading rate measuring method in the first embodiment.
Referring now to FIG. 12, a schematic diagram of a loading rate measurement device suitable for implementing embodiments of the present application is shown. The loading rate measurement device in the embodiment of the present application may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), tablet computers, PMPs (portable media players), and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The loading rate measurement device shown in fig. 12 is only an example and should not impose any limitation on the function and scope of use of the embodiments of the present application.
As shown in fig. 12, the loading rate measurement device may include a processing device 1001 (e.g., a central processing unit or a graphics processor) that can perform various appropriate actions and processes according to a program stored in a ROM (read-only memory) 1002 or a program loaded from a storage device 1003 into a RAM (random-access memory) 1004. The RAM 1004 also stores various programs and data required for the operation of the loading rate measurement device. The processing device 1001, the ROM 1002, and the RAM 1004 are connected to each other by a bus 1005, and an input/output (I/O) interface 1006 is also connected to the bus. In general, the following may be connected to the I/O interface 1006: an input device 1007 such as a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, or gyroscope; an output device 1008 including an LCD (liquid crystal display), a speaker, a vibrator, and the like; the storage device 1003 including, for example, a magnetic tape or a hard disk; and a communication device 1009. The communication device 1009 may allow the loading rate measurement device to communicate wirelessly or by wire with other devices to exchange data. Although a loading rate measurement device having various systems is shown in the figure, it should be understood that not all of the illustrated systems are required to be implemented or provided; more or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device, installed from the storage device 1003, or installed from the ROM 1002. When the computer program is executed by the processing device 1001, the above-described functions defined in the method of the embodiments of the present application are performed.
The loading rate measuring equipment provided by the application adopts the loading rate measuring method in the embodiment to measure the loading rate of the goods, and can solve the technical problem of inaccurate loading rate of the goods measured in the related technology. Compared with the prior art, the beneficial effects of the loading rate measuring device provided by the application are the same as those of the loading rate measuring method provided by the embodiment, and other technical features of the loading rate measuring device are the same as those disclosed by the method of the previous embodiment, and are not described in detail herein.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
The present application provides a computer-readable storage medium having computer-readable program instructions (i.e., a computer program) stored thereon for performing the load rate measurement method in the above-described embodiments.
The computer-readable storage medium provided by the present application may be, for example, a USB flash disk, or any electronic, magnetic, optical, electromagnetic, or infrared system or device, or any combination of the foregoing. More specific examples of a computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a RAM (random-access memory), a ROM (read-only memory), an EPROM (erasable programmable read-only memory), an optical fiber, a CD-ROM (compact disc read-only memory), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to electrical wiring, fiber-optic cable, RF (radio frequency), or any suitable combination of the foregoing.
The computer readable storage medium may be included in the load rate measurement apparatus or may exist alone without being incorporated in the load rate measurement apparatus.
The computer-readable storage medium carries one or more programs. When the one or more programs are executed by the loading rate measuring device, the loading rate measuring device is caused to: collect a current visible light image in a cargo loading scene and acquire target depth information of a plurality of pixel points in the current visible light image; perform image segmentation and background modeling on the current visible light image to determine the objects to which the pixel points in the current visible light image respectively belong; determine interference points and failure points among the pixel points, wherein the interference points are pixel points corresponding to loading personnel and the failure points are pixel points whose target depth information has deviation; delete the target depth information of the interference points, and update the target depth information of the failure points based on the target depth information of surrounding pixel points that belong to the same object as the failure points; and determine the cargo volume change based on the target depth information of the pixel points in the current visible light image and the target depth information of the pixel points in the historical visible light image, and determine the current cargo loading rate based on the cargo volume change, wherein the historical visible light image is acquired under the cargo loading scene before the current visible light image.
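The steps of updating failure points and converting depth changes into a loading rate can be illustrated with a minimal sketch. The function names, the neighborhood window, the use of a mean over same-object neighbors, and the per-pixel footprint model below are illustrative assumptions rather than the claimed implementation; the embodiment only requires that failure-point depths be replaced using surrounding pixels of the same object and that the volume change between frames drive the loading rate.

```python
import numpy as np

def update_failure_points(depth, labels, failure_mask, window=1):
    """Replace deviated ("failure") depth values with the mean depth of
    neighboring pixels that belong to the same segmented object.

    depth:        H x W array of target depth values
    labels:       H x W array of object labels from segmentation
    failure_mask: H x W boolean array marking failure points
    """
    h, w = depth.shape
    fixed = depth.copy()
    ys, xs = np.nonzero(failure_mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - window), min(h, y + window + 1)
        x0, x1 = max(0, x - window), min(w, x + window + 1)
        patch_depth = depth[y0:y1, x0:x1]
        patch_label = labels[y0:y1, x0:x1]
        patch_fail = failure_mask[y0:y1, x0:x1]
        # Usable neighbors: same object as the failure point, not failures themselves
        valid = (patch_label == labels[y, x]) & ~patch_fail
        if valid.any():
            fixed[y, x] = patch_depth[valid].mean()
    return fixed

def loading_rate(depth_now, depth_prev, pixel_area, hold_volume, prev_rate=0.0):
    """Accumulate the cargo volume change between two frames and convert
    it into a loading rate relative to the hold volume (an assumed model:
    depth decreases as cargo fills the space toward the camera)."""
    volume_change = np.sum((depth_prev - depth_now) * pixel_area)
    return prev_rate + volume_change / hold_volume
```

In this sketch, interference points (loading personnel) are assumed to have been deleted from `failure_mask` and `depth` beforehand, so only genuinely deviated depth values are interpolated from their same-object neighbors.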
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a LAN (Local Area Network) or WAN (Wide Area Network), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. The name of a module does not, in some cases, constitute a limitation of the module itself.
The readable storage medium provided by the present application is a computer-readable storage medium storing computer-readable program instructions (i.e., a computer program) for executing the loading rate measurement method, and can therefore solve the technical problem of inaccurate measurement of the cargo loading rate in the related art. Compared with the related art, the beneficial effects of the computer-readable storage medium provided by the present application are the same as those of the loading rate measurement method provided by the above embodiments, and are not repeated herein.
The application also provides a computer program product comprising a computer program which, when executed by a processor, implements a load rate measurement method as described above.
The computer program product provided by the present application can solve the technical problem of inaccurate measurement of the cargo loading rate in the related art. Compared with the related art, the beneficial effects of the computer program product provided by the present application are the same as those of the loading rate measurement method provided by the above embodiments, and are not repeated herein.
The foregoing description is only a partial embodiment of the present application, and is not intended to limit the scope of the present application, and all the equivalent structural changes made by the description and the accompanying drawings under the technical concept of the present application, or the direct/indirect application in other related technical fields are included in the scope of the present application.