
CN113744378B - Exhibition article scanning method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN113744378B
Application number: CN202010481765.7A
Authority: CN (China)
Prior art keywords: point cloud, position coordinates, cloud data, scanning, robot
Legal status: Active, granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Other versions: CN113744378A
Inventors: 刘宁, 唐建波, 覃小春
Current and original assignee: Chengdu Digital Sky Technology Co., Ltd. (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Application filed by Chengdu Digital Sky Technology Co., Ltd., with priority to application CN202010481765.7A (the priority date is an assumption and is not a legal conclusion)
Publication history: publication of CN113744378A; application granted; publication of CN113744378B

Classifications

Within section G (Physics) / G06 (Computing; calculating or counting) / G06T (Image data processing or generation, in general), section B (Performing operations; transporting) / B25J (Manipulators; chambers provided with manipulation devices), and section Y / Y02P (Climate change mitigation technologies in the production or processing of goods):

    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/005: General purpose rendering architectures
    • B25J 18/00: Arms
    • B25J 5/005: Manipulators mounted on wheels or on carriages, mounted on endless tracks or belts
    • B25J 5/007: Manipulators mounted on wheels or on carriages, mounted on wheels
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture
    • B25J 9/1679: Programme controls characterised by the tasks executed
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors (perception control, multi-sensor controlled systems, sensor fusion)
    • B25J 9/1697: Vision controlled systems
    • G06T 15/04: Texture mapping
    • G06T 17/00: Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/20: Finite element generation, e.g. wire-frame surface description, tessellation
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)
  • Manipulator (AREA)

Abstract

The application provides an exhibition article scanning method and device, an electronic device, and a storage medium, addressing the problem that photographing exhibition articles is time-consuming, labor-intensive, and inefficient. The scanning method is applied to an electronic device and comprises the following steps: acquiring point cloud data of the exhibition article, the point cloud data being collected by a robot; performing principal component analysis on the point cloud data to obtain a coplanar point cloud, the coplanar point cloud representing the set of three-dimensional coordinates of a common plane in the point cloud data; determining, from the point cloud data and the coplanar point cloud, a plurality of position coordinates for capturing the exhibition article and the orientation angle corresponding to each position coordinate; and sending a control command to the robot according to the plurality of position coordinates and their corresponding orientation angles, the control command causing the robot to scan the exhibition article at each position coordinate and orientation angle and return a plurality of scanned images of the exhibition article.

Description

Exhibition article scanning method and device, electronic equipment and storage medium
Technical Field
The present application relates to the technical field of three-dimensional modeling and three-dimensional reconstruction, and in particular to an exhibition article scanning method and device, an electronic device, and a storage medium.
Background
Three-dimensional scanning is a non-contact measurement technique for acquiring and analyzing the shape and contour of a physical object. Three-dimensional reconstruction based on the scan data can then produce a three-dimensional model of the actual object.
Currently, when scanning exhibition articles (such as ancient cultural relics) in an enclosed space, or when three-dimensionally scanning large components in a factory, the articles are usually photographed manually, and the photographs are then fed to modeling and analysis software. In practice, photographing exhibition articles manually has proved time-consuming, labor-intensive, and inefficient.
Disclosure of Invention
An object of the embodiments of the present application is to provide an exhibition article scanning method and device, an electronic device, and a storage medium, so as to alleviate the problem that photographing exhibition articles is time-consuming, labor-intensive, and inefficient.
An embodiment of the present application provides an exhibition article scanning method applied to an electronic device, comprising the following steps: acquiring point cloud data of the exhibition article, the point cloud data being collected by a robot; performing principal component analysis on the point cloud data to obtain a coplanar point cloud, the coplanar point cloud representing the set of three-dimensional coordinates of a common plane in the point cloud data; determining, from the point cloud data and the coplanar point cloud, a plurality of position coordinates for capturing the exhibition article and the orientation angle corresponding to each position coordinate; and sending a control command to the robot according to the plurality of position coordinates and their corresponding orientation angles, the control command causing the robot to scan the exhibition article at each position coordinate and orientation angle and return a plurality of scanned images of the exhibition article. In this implementation, the point cloud data of the exhibition article is obtained first, and then processed by principal component analysis, point deletion, sphere fitting, and related calculations to obtain the scanning position coordinates and their corresponding orientation angles, so that the robot can scan the exhibition article accordingly and return the scanned images. This improves the efficiency of photographing the exhibition article and effectively alleviates the time and labor cost of manual photographing.
Optionally, in an embodiment of the present application, performing principal component analysis on the point cloud data to obtain the coplanar point cloud comprises: performing singular value decomposition on the matrix formed by the point cloud data to obtain a point cloud vector; and determining the common plane characterized by the center point of the point cloud data and the point cloud vector as the coplanar point cloud. This implementation effectively increases the speed at which the coplanar point cloud is obtained.
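The patent specifies singular value decomposition of the point cloud matrix; as a dependency-free illustration of the same idea, the sketch below estimates the common plane of a point cloud by power iteration on its 3 × 3 covariance matrix: for coplanar points the two dominant principal directions span the plane, so their cross product gives the plane normal. All function names here are illustrative and do not come from the patent.

```python
import math

def centroid(pts):
    n = len(pts)
    return [sum(p[i] for p in pts) / n for i in range(3)]

def covariance(pts, c):
    # 3x3 covariance matrix of the centered points
    m = [[0.0] * 3 for _ in range(3)]
    for p in pts:
        d = [p[i] - c[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                m[i][j] += d[i] * d[j]
    n = len(pts)
    return [[m[i][j] / n for j in range(3)] for i in range(3)]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def normalize(v):
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

def power_iteration(m, start, iters=100):
    # converges toward the dominant eigenvector of a symmetric PSD matrix
    v = normalize(start)
    for _ in range(iters):
        v = normalize(matvec(m, v))
    return v

def plane_normal(pts):
    """Return (unit normal, centroid) of the common plane of a point cloud."""
    c = centroid(pts)
    cov = covariance(pts, c)
    v1 = power_iteration(cov, [1.0, 0.3, 0.7])
    # deflate: remove the dominant principal direction, then find the second
    lam1 = sum(matvec(cov, v1)[i] * v1[i] for i in range(3))
    defl = [[cov[i][j] - lam1 * v1[i] * v1[j] for j in range(3)] for i in range(3)]
    v2 = power_iteration(defl, [0.2, 1.0, -0.5])
    # the normal is the cross product of the two in-plane principal directions
    n = [v1[1] * v2[2] - v1[2] * v2[1],
         v1[2] * v2[0] - v1[0] * v2[2],
         v1[0] * v2[1] - v1[1] * v2[0]]
    return normalize(n), c
```

For example, for points satisfying z = x + y exactly, the recovered normal is proportional to (1, 1, -1), and the centroid lies on the plane.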
Optionally, in an embodiment of the present application, determining the plurality of position coordinates for capturing the exhibition article and their corresponding orientation angles from the point cloud data and the coplanar point cloud comprises: deleting from the point cloud data all three-dimensional coordinates lying below the coplanar point cloud, obtaining target data; fitting the target data with a spherical model, obtaining the fitted sphere center coordinates and sphere radius; and calculating, from the sphere center coordinates and sphere radius, the plurality of position coordinates for scanning the exhibition article and the orientation angle corresponding to each position coordinate. This implementation effectively improves the precision of the determined position coordinates and orientation angles.
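The sphere-fitting step can be illustrated with a standard algebraic least-squares fit, solving the linearized sphere equation |p|^2 = 2c.p + (r^2 - |c|^2) via its 4 × 4 normal equations, followed by a simple ring of scan poses around the fitted center. This is a hedged sketch: the patent does not give the exact formulas, so the standoff distance and pose layout below are assumptions for illustration.

```python
import math

def solve4(a, b):
    # solve a 4x4 linear system by Gaussian elimination with partial pivoting
    n = 4
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= f * m[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

def fit_sphere(points):
    """Algebraic sphere fit: |p|^2 = 2*c.p + k with k = r^2 - |c|^2,
    which is linear in (cx, cy, cz, k)."""
    ata = [[0.0] * 4 for _ in range(4)]
    atb = [0.0] * 4
    for x, y, z in points:
        row = [2 * x, 2 * y, 2 * z, 1.0]
        rhs = x * x + y * y + z * z
        for i in range(4):
            for j in range(4):
                ata[i][j] += row[i] * row[j]
            atb[i] += row[i] * rhs
    cx, cy, cz, k = solve4(ata, atb)
    r = math.sqrt(k + cx * cx + cy * cy + cz * cz)
    return (cx, cy, cz), r

def scan_poses(center, radius, standoff=0.5, count=8):
    """Place `count` camera positions on a horizontal ring of radius
    radius + standoff around the fitted center; each pose's yaw points
    back at the center (an assumed layout for illustration)."""
    cx, cy, cz = center
    poses = []
    for i in range(count):
        theta = 2 * math.pi * i / count
        px = cx + (radius + standoff) * math.cos(theta)
        py = cy + (radius + standoff) * math.sin(theta)
        yaw = math.atan2(cy - py, cx - px)  # face the sphere center
        poses.append(((px, py, cz), yaw))
    return poses
```

With six points sampled on a sphere of center (1, 2, 3) and radius 2, the fit recovers the center and radius exactly, and each generated pose sits at distance radius + standoff from the center, facing inward.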
Optionally, in an embodiment of the present application, after the control command is sent to the robot, the method further comprises: receiving the plurality of scanned images sent by the robot; and modeling the exhibition article from the plurality of scanned images to obtain a three-dimensional model. This implementation effectively improves the speed of three-dimensional modeling from the scanned images of the exhibition article.
Optionally, in an embodiment of the present application, after the three-dimensional model is obtained, the method further comprises: texture-mapping the three-dimensional model according to the plurality of scanned images to obtain a mapped three-dimensional model, which effectively improves the speed of obtaining the mapped model.
An embodiment of the present application further provides an exhibition article scanning method applied to a robot, comprising the following steps: capturing the exhibition article with a depth camera to obtain point cloud data, the point cloud data representing the set of three-dimensional coordinates of the exhibition article; sending the point cloud data to an electronic device so that the electronic device computes and sends a control command based on it; receiving the control command sent by the electronic device, the control command comprising a plurality of position coordinates for scanning the exhibition article and the orientation angle corresponding to each position coordinate, which the electronic device determines from the received point cloud data and the coplanar point cloud obtained by analyzing it; moving in turn to each of the position coordinates and capturing a scan at the corresponding orientation angle, obtaining a plurality of scanned images; and sending the plurality of scanned images to the electronic device.
In this implementation, the robot captures the exhibition article with the depth camera and sends the resulting point cloud data to the electronic device; the electronic device computes and returns a control command containing the scanning position coordinates and their corresponding orientation angles; the robot then moves to each position in turn, scans at the corresponding orientation angle, and sends the scanned images back to the electronic device. This improves the efficiency of photographing the exhibition article and effectively alleviates the time and labor cost of manual photographing.
Optionally, in an embodiment of the present application, the robot comprises a servo motor, a speed reducer, and an image acquisition device. Moving in turn to each of the position coordinates and capturing a scan at the corresponding orientation angle comprises: moving in turn to each position coordinate by means of the servo motor and the speed reducer; and adjusting the orientation angle of the image acquisition device to the orientation angle corresponding to that position coordinate and capturing the scan with the image acquisition device. This improves the efficiency of photographing the exhibition article and effectively alleviates the time and labor cost of manual photographing.
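The scanning loop described above can be sketched as follows. This is a minimal stand-in: the `move_to`, `set_camera_yaw`, and `capture` methods are hypothetical placeholders for the real servo-motor, speed-reducer, and image-acquisition interfaces, which the patent does not specify at the API level.

```python
from dataclasses import dataclass

@dataclass
class ScanPose:
    x: float
    y: float
    yaw: float  # orientation angle for this position coordinate

class RobotBase:
    """Hypothetical robot interface; a real implementation would drive the
    servo motor / speed reducer and trigger the image acquisition device."""
    def __init__(self):
        self.log = []
    def move_to(self, x, y):
        self.log.append(("move", x, y))
    def set_camera_yaw(self, yaw):
        self.log.append(("aim", yaw))
    def capture(self):
        self.log.append(("capture",))
        return f"image@{len(self.log)}"

def scan_exhibit(robot, poses):
    """Visit each position coordinate in turn, aim the camera at the
    corresponding orientation angle, and collect one scan image per pose."""
    images = []
    for p in poses:
        robot.move_to(p.x, p.y)
        robot.set_camera_yaw(p.yaw)
        images.append(robot.capture())
    return images
```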
An embodiment of the present application further provides an exhibition article scanning device applied to an electronic device, comprising: a point cloud data acquisition module for acquiring point cloud data of the exhibition article, the point cloud data being collected by a robot; a coplanar point cloud obtaining module for performing principal component analysis on the point cloud data to obtain a coplanar point cloud, the coplanar point cloud representing the set of three-dimensional coordinates of a common plane in the point cloud data; a coordinate angle determining module for determining, from the point cloud data and the coplanar point cloud, a plurality of position coordinates for capturing the exhibition article and the orientation angle corresponding to each position coordinate; and a control command sending module for sending a control command to the robot according to the plurality of position coordinates and their corresponding orientation angles, the control command causing the robot to scan the exhibition article accordingly and return a plurality of scanned images of the exhibition article.
Optionally, in an embodiment of the present application, the coplanar point cloud obtaining module comprises: a point cloud vector obtaining module for performing singular value decomposition on the matrix formed by the point cloud data to obtain a point cloud vector; and a coplanar point cloud determining module for determining the common plane characterized by the center point of the point cloud data and the point cloud vector as the coplanar point cloud.
Optionally, in an embodiment of the present application, the coordinate angle determining module comprises: a target data obtaining module for deleting from the point cloud data all three-dimensional coordinates lying below the coplanar point cloud to obtain target data; a target data fitting module for fitting the target data with a spherical model to obtain the fitted sphere center coordinates and sphere radius; and a coordinate angle calculation module for calculating, from the sphere center coordinates and sphere radius, the plurality of position coordinates for scanning the exhibition article and their corresponding orientation angles.
Optionally, in an embodiment of the present application, the exhibition article scanning device further comprises: a scanned image receiving module for receiving the plurality of scanned images sent by the robot; and a three-dimensional model obtaining module for modeling the exhibition article from the plurality of scanned images to obtain a three-dimensional model.
Optionally, in an embodiment of the present application, the exhibition article scanning device further comprises: a three-dimensional model mapping module for texture-mapping the three-dimensional model according to the plurality of scanned images to obtain a mapped three-dimensional model.
An embodiment of the present application further provides an exhibition article scanning device applied to a robot, comprising: a point cloud data collection module for capturing the exhibition article with a depth camera to obtain point cloud data representing the set of three-dimensional coordinates of the exhibition article; a point cloud data sending module for sending the point cloud data to an electronic device so that the electronic device computes and sends a control command based on it; a control command receiving module for receiving the control command sent by the electronic device, the control command comprising a plurality of position coordinates for scanning the exhibition article and their corresponding orientation angles, determined by the electronic device from the received point cloud data and the coplanar point cloud obtained by analyzing it; a scanned image collection module for moving in turn to each of the position coordinates and capturing a scan at the corresponding orientation angle to obtain a plurality of scanned images; and a scanned image sending module for sending the plurality of scanned images to the electronic device.
Optionally, in an embodiment of the present application, the robot comprises a servo motor, a speed reducer, and an image acquisition device, and the scanned image collection module comprises: a robot moving module for moving in turn to each position coordinate by means of the servo motor and the speed reducer; and a robot scanning module for adjusting the orientation angle of the image acquisition device to the orientation angle corresponding to each position coordinate and capturing the scan with the image acquisition device.
An embodiment of the present application further provides an electronic device comprising a processor and a memory storing machine-readable instructions which, when executed by the processor, perform the method described above.
An embodiment of the present application further provides a storage medium having stored thereon a computer program which, when executed by a processor, performs the method described above.
Drawings
For a clearer illustration of the technical solutions of the embodiments of the present application, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the present application and are not to be considered limiting of its scope; other related drawings can be derived from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic flow chart of an exhibition article scanning method applied to an electronic device according to an embodiment of the present application;
FIG. 2 is a schematic view of an exhibition article placed on a display stand according to an embodiment of the present application;
FIG. 3 is a schematic diagram of acquiring point cloud data of an exhibited item using a robot according to an embodiment of the present application;
FIG. 4 is a schematic diagram of fitting target data using a spherical model provided by an embodiment of the present application;
fig. 5 is a schematic flow chart of an exhibition article scanning method applied to a robot according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an exhibition article scanning device according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Before introducing the exhibition article scanning method provided by the embodiments of the present application, some related concepts are introduced:
A point cloud refers to a set of data points on the external surface of an object obtained by a measuring instrument; a point cloud can represent a target space expressed in a common spatial reference system. Attributes of a point cloud include spatial resolution, point position accuracy, and so on. A point cloud obtained with a three-dimensional coordinate measuring machine usually contains relatively few, widely spaced points and is called a sparse point cloud; a point cloud obtained with a three-dimensional laser scanner or photographic scanner contains many, densely spaced points and is called a dense point cloud.
A depth camera, also known as a depth sensor or TOF (Time of Flight) camera, obtains the distance to a target by continuously emitting light pulses toward it, receiving the light returned from the object with a sensor, and measuring the round-trip flight time of the pulses. The principle is broadly similar to that of a 3D laser sensor, except that a 3D laser sensor scans point by point while a TOF camera obtains depth information for the entire image at once. Like an ordinary machine-vision camera, a TOF camera consists of a light source, optical components, a sensor, a control circuit, a processing circuit, and so on.
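The ranging principle above reduces to a single formula: the pulse covers the camera-to-target distance twice, so distance = c * t / 2. A minimal illustration:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance from the round-trip flight time of a light pulse:
    the pulse travels out and back, hence the division by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For example, a round-trip time of about 6.67 nanoseconds corresponds to a target roughly one metre away.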
Principal component analysis (PCA) is a statistical method for analyzing and simplifying datasets in multivariate statistics. PCA applies an orthogonal transformation to linearly transform the observations of a set of possibly correlated variables into the values of a set of linearly uncorrelated variables called principal components. Each principal component can be regarded as a linear equation, i.e., a set of linear coefficients indicating a projection direction. PCA is sensitive to the normalization or preprocessing of the raw data.
Singular values are a matrix concept, generally obtained via the singular value decomposition theorem: let A be an m × n matrix and q = min(m, n); the arithmetic square roots of the q non-negative eigenvalues of AᵀA are called the singular values of A.
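As a concrete check of this definition, the snippet below computes the singular values of a 2 × 2 real matrix directly from the eigenvalues of AᵀA using the quadratic formula; the example matrix is chosen for illustration only.

```python
import math

def singular_values_2x2(a):
    """Singular values of a 2x2 real matrix A: the square roots of the
    eigenvalues of M = A^T A, found with the quadratic formula."""
    (a11, a12), (a21, a22) = a
    # entries of the symmetric matrix M = A^T A
    m11 = a11 * a11 + a21 * a21
    m12 = a11 * a12 + a21 * a22
    m22 = a12 * a12 + a22 * a22
    tr, det = m11 + m22, m11 * m22 - m12 * m12
    disc = math.sqrt(tr * tr - 4 * det)
    eig_hi, eig_lo = (tr + disc) / 2, max(0.0, (tr - disc) / 2)
    return [math.sqrt(eig_hi), math.sqrt(eig_lo)]
```

For A = [[3, 0], [4, 5]], AᵀA = [[25, 20], [20, 25]] has eigenvalues 45 and 5, so the singular values are 3√5 and √5.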
A software development kit (SDK) is a collection of development tools with which software engineers build application software for a particular software package, software framework, hardware platform, operating system, etc.; it broadly includes the related documents, examples, and tools that assist in developing a certain class of software. For example, the data interfaces in an SDK can be called to connect to a server and obtain corresponding results. SDKs exist for many languages, such as Java and Python.
A server is a device that provides computing services over a network, for example an x86 server or a non-x86 server; non-x86 servers include mainframes, minicomputers, and UNIX servers. In a specific implementation, the server may be a mainframe or a minicomputer: a minicomputer is a closed, specialized device, typically built on a special-purpose processor such as a RISC (Reduced Instruction Set Computing) or MIPS processor, that mainly provides computing services under a UNIX operating system; a mainframe is a device that provides computing services using a dedicated processor instruction set, operating system, and application software.
It should be noted that the exhibition article scanning method provided by the embodiments of the present application may be executed by an electronic device, meaning either a device terminal capable of executing a computer program or the server described above; the device terminal is, for example, a smartphone, a personal computer (PC), a tablet computer, a personal digital assistant (PDA), a mobile Internet device (MID), a network switch, a network router, or the like.
Before introducing the exhibition article scanning method provided by the embodiments of the present application, its applicable scenarios are introduced. These include, but are not limited to: using the method to scan and photograph an exhibition article to obtain scanned images, then performing three-dimensional reconstruction and texture mapping from those images to obtain a three-dimensional model with mapped surface detail. Exhibition articles here include, but are not limited to, cultural relics of historical value, aerospace models, automobile models, artworks, and architectural models. The scanned images may also be applied in other fields, for example as training data for deep learning in image recognition and image processing, and the resulting three-dimensional model may be used in the animation industry, teaching demonstrations, and so on.
Please refer to fig. 1, a schematic flow chart of the exhibition article scanning method applied to an electronic device according to an embodiment of the present application. The method first obtains point cloud data of the exhibition article, then processes the point cloud data (principal component analysis and related operations) to obtain the position coordinates and orientation angles for capturing and scanning the article, so that a robot can scan the article according to those coordinates and angles and return a plurality of scanned images. This improves the efficiency of obtaining images of the exhibition article. The method may comprise the following steps:
step S110: and acquiring point cloud data of the display object, wherein the point cloud data is acquired by a robot.
Please refer to fig. 2, which is a schematic diagram of an exhibition article placed on an exhibition stand according to an embodiment of the present application; the display article is placed on a display stand, where the display stand may include: a display stand plane and a display stand support body below the display stand plane. The display article, sometimes also referred to as an exhibit, refers to one or more articles displayed for viewing, watching or visiting by people, where the display article includes: cultural relics with historical value, aerospace models, automobile models, artworks or building models, and the like.
The point cloud data characterizes a three-dimensional coordinate set obtained by the point cloud acquisition device carried by the robot collecting the display object; the point cloud acquisition device here is, for example: a depth camera, a three-dimensional coordinate measuring machine, a laser sensor, a three-dimensional laser scanner, or a photographic scanner. A specific way of calculating the point cloud data is as follows: the robot controls the depth camera to collect the display object to obtain a sparse point cloud depth map with a resolution of 1280 × 800, where the point cloud depth map characterizes a set of distances between the depth camera and the display object; the point cloud depth map has 1,024,000 pixel points in total, and the depth camera calls an SDK toolkit to calculate the point cloud depth map; specifically, the concrete values of the 1,024,000 pixel points are calculated according to preset parameters of the depth camera to obtain the point cloud data, where the specific value of each pixel point is the distance between the point cloud of the current display object at that pixel and the point cloud acquisition device.
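The depth-map-to-point-cloud calculation described above can be sketched as follows. This is a minimal illustration, assuming a standard pinhole back-projection of the kind an SDK performs internally; the toy intrinsics `fx`, `fy`, `cx`, `cy` stand in for the camera's real preset parameters and are not taken from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into an N x 3 point cloud using the
    pinhole camera model; zero-depth (invalid) pixels are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # pixel column -> metric x
    y = (v - cy) * z / fy  # pixel row -> metric y
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# A flat surface 2 m away, seen by a toy 4x4-pixel camera:
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

A real 1280 × 800 depth map would produce up to 1,024,000 such three-dimensional points, matching the pixel count given above.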
Please refer to fig. 3, which illustrates a schematic diagram of acquiring point cloud data of an exhibited article using a robot according to an embodiment of the present application; there are various ways to obtain the point cloud data of the display object in the above step S110, for example: in a first manner, other terminal devices collect point cloud data of the display object, then the other terminal devices send the collected point cloud data to the electronic device, and finally the electronic device receives the point cloud data sent by the other terminal devices, where the other terminal devices include: a depth camera, a laser sensor, a robot with a depth camera or a laser sensor, a robot that uses a mechanical arm to control the depth camera or the laser sensor to collect point cloud data, and the like; in a second manner, after receiving point cloud data sent by other terminal devices, the electronic device stores the point cloud data in a file system or a database, and, when the data is needed, the electronic device retrieves the pre-stored point cloud data from the file system or the database.
After step S110, step S120 is performed: and carrying out principal component analysis on the point cloud data to obtain the coplanar point cloud.
Coplanar point clouds refer to a three-dimensional set of coordinates that characterizes the largest coplanar point cloud in the point cloud data, where the coplanar point clouds are, for example: if the point cloud data is obtained by collecting the display objects placed on the display platform plane by using the depth camera, the point cloud data can be understood as the display objects, the display platform plane and the display platform support below the display platform plane, and the coplanar point cloud can be understood as the display platform plane, namely the plane for placing the display objects.
The implementation of performing the principal component analysis on the point cloud data in the step S120 may include:
step S121: and performing singular value decomposition on a matrix formed by the point cloud data to obtain a point cloud vector.
Singular value decomposition (Singular Value Decomposition, SVD) is an important matrix decomposition in linear algebra. It is similar in some respects to the eigenvector-based diagonalization of a symmetric or Hermitian matrix; however, despite their relation, the two decompositions are significantly different: the basis of symmetric-matrix eigenvector decomposition is spectral analysis, while singular value decomposition is the generalization of spectral analysis theory to arbitrary matrices.
The embodiment of step S121 described above is, for example: converting the point cloud data into a matrix format to obtain a matrix formed by the point cloud data, wherein each row of the matrix corresponds to the point cloud coordinates of the point cloud data, and each column of the matrix corresponds to one point cloud in the point cloud data; performing singular value decomposition on the matrix formed by the point cloud data according to X = UΣW^T to obtain the decomposed point cloud vectors. The point cloud vectors include a first vector and a second vector, where the first vector and the second vector represent two different directions in which the common plane lies; X represents the point cloud data, Σ is a preset coefficient (specifically, the preset coefficient Σ may be set to (0, 1)), U represents the first vector, W represents the second vector, and T represents the transpose operation of a matrix.
Step S122: and determining a common plane represented by the central point of the point cloud data and the point cloud vector as a coplanar point cloud.
The embodiment of step S122 described above is, for example: if the above-mentioned point cloud vectors are the first vector U and the second vector W, the center point (x, y, z) of the point cloud data is calculated and obtained, and then the common plane represented by the center point (x, y, z) of the point cloud data, the first vector U and the second vector W is determined as a coplanar point cloud, where the coplanar point cloud is understood as a plane for placing the display object. In the implementation process, singular value decomposition is carried out on a matrix formed by the point cloud data to obtain a point cloud vector; determining a common plane represented by a central point of the point cloud data and a point cloud vector as a coplanar point cloud; the speed of obtaining the coplanar point cloud is effectively increased.
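Steps S121 and S122 can be sketched with NumPy's SVD. This is a minimal illustration rather than the patent's exact procedure: after centring the cloud, the two leading right-singular vectors give the in-plane directions (the first and second vectors above) and the third right-singular vector gives the plane normal:

```python
import numpy as np

def fit_common_plane(points):
    """Fit the dominant plane of a point cloud via SVD: centre the points,
    decompose, and take the two leading right-singular vectors as the
    in-plane directions; the trailing one is the plane normal."""
    center = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - center, full_matrices=False)
    u_dir, w_dir, normal = vt  # rows: first vector, second vector, normal
    return center, u_dir, w_dir, normal

# Noise-free points lying on the plane z = 1:
pts = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=float)
center, u_dir, w_dir, normal = fit_common_plane(pts)
```

The centre point together with `u_dir` and `w_dir` then describes the common plane (the coplanar point cloud) on which the display object rests.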
After step S120, step S130 is performed: and determining a plurality of position coordinates of the acquired display object and orientation angles corresponding to the position coordinates according to the point cloud data and the coplanar point cloud.
The plurality of position coordinates and the orientation angles corresponding to the position coordinates in the step S130 refer to position coordinates where the image acquisition device of the robot acquires the display object and a horizontal orientation angle on the position coordinates, and the vertical orientation angle of the image acquisition device is adjusted in real time according to the height of the display platform plane; the embodiment of determining the position coordinates and the orientation angles corresponding to the position coordinates of the collection display object in the step S130 may include the following steps:
step S131: and deleting all three-dimensional coordinates lower than the coplanar point cloud from the point cloud data to obtain target data.
The embodiment of step S131 described above is, for example: judging whether the three-dimensional coordinates in the point cloud data are lower than the coplanar point cloud, if so, deleting the three-dimensional coordinates from the point cloud data; the point cloud data can be understood as display objects, display platform planes and display platform supports below the display platform planes, the display platform planes and the display platform supports below the display platform planes need to be deleted from the point cloud data, and the accuracy of positioning the center point coordinates of the display objects is effectively improved, so that the accuracy of obtaining the position coordinates of the display objects and the orientation angles corresponding to the position coordinates is improved, and the robot can collect the display objects better according to the orientation angles corresponding to the position coordinates and the position coordinates. In a specific implementation process, noise data such as isolated point clouds and invalid point clouds in the point cloud data can be deleted in order to further improve the accuracy and effectiveness of the data.
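Step S131 amounts to a signed-distance test against the fitted plane. The following is a hedged sketch, assuming the plane is given by a centre point and an upward normal (as a plane fit would yield); points strictly below the plane are discarded:

```python
import numpy as np

def points_above_plane(points, center, normal):
    """Keep only the points on or above the stand plane, i.e. those whose
    signed distance along the (upward) plane normal is non-negative."""
    signed = (points - center) @ normal
    return points[signed >= 0]

# Two points above the plane z = 0 and one below it:
cloud = np.array([[0, 0, 2.0], [0, 0, 1.0], [0, 0, -1.0]])
target = points_above_plane(cloud, center=np.zeros(3), normal=np.array([0, 0, 1.0]))
```

The surviving points are the target data; the stand plane and stand support fall below the plane and are removed.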
Step S132: and fitting the target data by using a spherical model to obtain the fitted spherical center coordinates and spherical radius.
Please refer to fig. 4, which is a schematic diagram of fitting target data using a spherical model according to an embodiment of the present application; the embodiment of step S132 described above is, for example: first, one point cloud coordinate in the point cloud data is randomly set as the sphere center coordinate of the spherical model, and a random value is taken as the radius of the spherical model; the proportion of the point cloud data falling inside the spherical model is then calculated. If the proportion is larger than a preset threshold (the preset threshold may be, for example, 80% or 90%), the smallest such random radius value is used as the radius of the spherical model, i.e. the distance between the display object and the image acquisition device of the robot is the sphere radius, and the point cloud coordinate corresponding to the smallest radius value is determined as the sphere center coordinate of the spherical model. Of course, in a specific implementation process, the fitted sphere center coordinates and sphere radius can also be obtained in combination with a binary search algorithm; the binary search algorithm, also called half-interval search or logarithmic search, is a search algorithm for locating a specific element in an ordered array.
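The randomised fitting loop of step S132 might look roughly like the following sketch. It is an assumption-laden illustration, not the patent's exact algorithm: candidate centres are drawn from the cloud, radii are tried in ascending order, and the inlier-ratio threshold plays the role of the 80%–90% preset threshold:

```python
import numpy as np

def fit_bounding_sphere(points, radii, threshold=0.9, trials=50, seed=0):
    """Randomised sphere fit in the spirit of step S132: draw candidate
    centres from the cloud, try radii in ascending order, and return the
    first (smallest-radius) sphere containing at least `threshold` of the
    points; fall back to the centroid and the largest radius."""
    rng = np.random.default_rng(seed)
    for r in sorted(radii):
        for _ in range(trials):
            c = points[rng.integers(len(points))]
            if (np.linalg.norm(points - c, axis=1) <= r).mean() >= threshold:
                return c, r
    return points.mean(axis=0), max(radii)

# 95 points clustered at the origin plus 5 far-away outliers:
pts = np.concatenate([np.zeros((95, 3)), np.full((5, 3), 10.0)])
center, radius = fit_bounding_sphere(pts, radii=[0.5, 20.0])
```

Because 95% of the points lie inside the small sphere, the smaller radius satisfies the threshold and is returned, which mirrors "the random value with the smallest radius" described above.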
Step S133: and calculating a plurality of position coordinates for scanning the display object and orientation angles corresponding to the position coordinates according to the spherical center coordinates and the spherical radius.
The embodiment of step S133 described above is, for example: assume the sphere center coordinate is denoted by o and o is the origin of the three-dimensional coordinate system; the direction from the sphere center o toward the image acquisition device of the robot is the x-axis, the direction passing through the sphere center o and perpendicular to the display stand plane (i.e. the plane on which the display object is placed) in the point cloud data is the z-axis, and the direction perpendicular to both the x-axis and the z-axis is the y-axis; a coordinate system is established from the x-axis, y-axis and z-axis. Under this coordinate system, assume the sphere radius is 30 cm, that is, the distance between the display object and the image acquisition device of the robot is 30 cm, and the orientation angle of the image acquisition device starts at 0 degrees to the x-axis. The display object can then be photographed in turn at the position coordinates where the angle between the orientation of the image acquisition device and the x-axis is each multiple of a preset interval, so the plurality of position coordinates and the orientation angles corresponding to the position coordinates can be calculated according to the preset interval degrees; the preset interval degrees can be set according to the specific situation, and specifically include: 10 degrees, 15 degrees, 20 degrees, 25 degrees, 30 degrees, 40 degrees, etc.
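The viewpoint computation of step S133 can be sketched as follows: evenly spaced positions on a horizontal circle of the fitted sphere, each paired with the horizontal orientation angle that points the camera back at the sphere centre. The function and the yaw convention are illustrative assumptions, not taken from the patent:

```python
import math

def scan_viewpoints(cx, cy, cz, radius, step_deg):
    """One scan position every `step_deg` degrees on a circle of the
    given radius around the sphere centre, with the yaw angle (degrees
    from the x-axis) that faces the camera back toward the centre."""
    views = []
    for deg in range(0, 360, step_deg):
        rad = math.radians(deg)
        x = cx + radius * math.cos(rad)
        y = cy + radius * math.sin(rad)
        yaw = (deg + 180) % 360  # look back at the sphere centre
        views.append(((x, y, cz), yaw))
    return views

# Sphere centre at the origin, 0.3 m radius, one viewpoint every 30 degrees:
views = scan_viewpoints(0.0, 0.0, 0.0, radius=0.3, step_deg=30)
```

With a 30-degree preset interval this yields twelve position/orientation pairs around the display object.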
In the implementation process, first, deleting all three-dimensional coordinates with positions lower than the coplanar point cloud from the point cloud data, namely deleting the three-dimensional coordinates from the point cloud data if the positions (Z coordinates) of the three-dimensional coordinates are lower than the positions of the coplanar point cloud, so as to obtain target data; then, fitting the target data by using a spherical model to obtain fitted spherical center coordinates and spherical radius; finally, calculating a plurality of position coordinates and orientation angles corresponding to the position coordinates for scanning the display object according to the spherical center coordinates and the spherical radius; thereby effectively improving the precision of determining the orientation angle of the plurality of position coordinates of the collected display article.
After step S130, step S140 is performed: and sending a control command to the robot according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates, wherein the control command is used for enabling the robot to scan the display object according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates and return a plurality of scanning images for scanning the display object.
The embodiment of step S140 described above is, for example: generating a control command in a preset format according to the plurality of position coordinates and the corresponding orientation angles, where the preset format is, for example, the JavaScript Object Notation (JSON) or eXtensible Markup Language (XML) format; the electronic device then sends the control command to the robot. JSON is a lightweight data-interchange format based on a subset of ECMAScript (the JavaScript specification formulated by the European Computer Manufacturers Association); it stores and represents data in a text format that is completely independent of the programming language. XML is a subset of the Standard Generalized Markup Language; it is a markup language used to mark up electronic documents so that they are structured.
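A JSON control command of the kind described could be serialised as in the sketch below; the field names (`command`, `viewpoints`, `yaw_deg`, and so on) are illustrative assumptions, since the patent does not specify the schema:

```python
import json

def build_control_command(viewpoints):
    """Serialise the scan plan into a JSON control command, one entry
    per position/orientation pair."""
    payload = {
        "command": "scan",
        "viewpoints": [
            {"position": {"x": x, "y": y, "z": z}, "yaw_deg": yaw}
            for (x, y, z), yaw in viewpoints
        ],
    }
    return json.dumps(payload)

cmd = build_control_command([((0.3, 0.0, 0.0), 180)])
```

The robot side would parse the same JSON text with `json.loads` to recover the position coordinates and orientation angles.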
Optionally, in the embodiment of the present application, after sending a control command to the robot according to a plurality of position coordinates and orientation angles corresponding to the position coordinates, a scan image sent by the robot may be received according to the control command, and a three-dimensional model is constructed according to the scan image, and then after step S140, the method further includes the following steps:
step S150: the electronic device receives a plurality of scanned images transmitted by the robot.
The embodiment of step S150 described above is, for example: the electronic device receives the plurality of scanned images sent by the robot through the transmission control protocol (Transmission Control Protocol, TCP) or the user datagram protocol (User Datagram Protocol, UDP). The TCP protocol, also called the network communication protocol, is a connection-oriented, reliable, byte-stream-based transport layer communication protocol; in the Internet protocol family, the TCP layer is an intermediate layer above the IP layer and below the application layer. Reliable, pipe-like connections are often required between the application layers of different hosts, but the IP layer does not provide such a streaming mechanism, offering only unreliable packet switching. UDP is the abbreviation of User Datagram Protocol, a connectionless transport layer protocol in the Open Systems Interconnection (Open System Interconnection, OSI) reference model.
Step S160: and modeling the display object according to the plurality of scanned images to obtain a three-dimensional model.
The embodiment of step S160 described above is, for example: modeling the display object according to the plurality of scanned images by using RealityCapture software or OpenCV to obtain a three-dimensional model; OpenCV, whose full name is Open Source Computer Vision Library, is a cross-platform computer vision library that may be used to develop real-time image processing, computer vision and pattern recognition programs. In the implementation process, a plurality of scanned images sent by the robot are received, and the display object is modeled according to the plurality of scanned images to obtain a three-dimensional model; the speed of three-dimensional modeling from the scanned images of the display object is thereby effectively improved.
Optionally, in the embodiment of the present application, after the three-dimensional model is obtained, the three-dimensional model may be mapped, and then after step S160, the following steps may be further included:
step S170: and mapping the three-dimensional model according to the plurality of scanned images to obtain a mapped three-dimensional model.
The embodiment of step S170 described above is, for example: mapping the three-dimensional model according to the plurality of scanned images by using RealityCapture software or the open graphics library (Open Graphics Library, OpenGL) to obtain the mapped three-dimensional model; OpenGL is a cross-language, cross-platform application programming interface (Application Programming Interface, API) for rendering 2D and 3D vector graphics. The interface consists of nearly 350 different function calls used to draw scenes ranging from simple graphic primitives to complex three-dimensional scenes. In the implementation process, the three-dimensional model is mapped according to the plurality of scanned images to obtain the mapped three-dimensional model, thereby effectively improving the speed of obtaining the mapped three-dimensional model.
In the implementation process, point cloud data of the display object are obtained firstly, then principal component analysis, deletion, fitting and other calculations are carried out on the point cloud data, a plurality of position coordinates for collecting and scanning the display object and orientation angles corresponding to the position coordinates are obtained, so that a robot can scan the display object according to the plurality of position coordinates and orientation angles corresponding to the position coordinates, and a plurality of scanning images for scanning the display object are returned; therefore, the efficiency of photographing the display articles to obtain images is improved, and meanwhile, the problem that time and labor are wasted when the display articles are photographed in a manual photographing mode is effectively solved.
Please refer to fig. 5, which is a schematic flowchart of an exhibition article scanning method applied to a robot according to an embodiment of the present application; the display article scanning method may be applied to a robot, and a specific structure of the robot will be described in detail below, and the display article scanning method applied to the robot may include:
step S210: the robot collects the exhibition objects through the depth camera to obtain point cloud data.
The robot is a machine device that automatically executes work; it can accept human commands, run pre-programmed programs, and also act according to principles formulated with artificial intelligence technology. The robot may be a wheeled mobile robot, which specifically includes: single-wheel mobile robots, double-wheel mobile robots, four-wheel mobile robots and the like; the wheeled mobile robot here may also include a track outside the wheels, the robot being moved by friction between the track and the ground.
The robot may include: a robot body, a servo motor, a speed reducer, a mechanical arm and an image acquisition device; the image acquisition device here is, for example: a depth camera, a single-lens reflex camera, and the like. The robot body is movably connected with the servo motor, the speed reducer and the mechanical arm, respectively, and the mechanical arm is movably connected with the image acquisition device. The servo motor and the speed reducer are used for providing moving and walking power for the robot and for stopping the robot's walking; the mechanical arm is used for controlling the angle of the image acquisition device and performing the shooting and acquisition actions. Specifically, the depth camera is used for acquiring the point cloud data or point cloud depth map of the display object, and the single-lens reflex camera is used for acquiring color scanned images of the display object.
In the embodiment in which the robot collects the display object by the depth camera in the step S210, for example: the robot determines the specific position coordinates through controlling the servo motor and the speed reducer, the robot controls the shooting angle and shooting action of the depth camera on the position coordinates, and the depth camera collects the display objects according to the shooting angle and the shooting action to obtain the point cloud data of the display objects.
Step S220: the robot transmits the point cloud data to the electronic device, so that the electronic device calculates and transmits a control command according to the point cloud data.
In the embodiment in which the robot transmits the point cloud data to the electronic device in step S220, for example: the robot sends the point cloud data to the electronic device via the hypertext transfer protocol (Hyper Text Transfer Protocol, HTTP) or the hypertext transfer protocol secure (Hyper Text Transfer Protocol Secure, HTTPS); HTTP is a simple request-response protocol that generally runs on top of the transmission control protocol (Transmission Control Protocol, TCP); HTTPS, also known as HTTP Secure, is a transport protocol for secure communication over computer networks.
Step S230: the robot receives a control command sent by the electronic equipment, wherein the control command comprises a plurality of position coordinates for scanning the display object and orientation angles corresponding to the position coordinates.
The orientation angles corresponding to the position coordinates in step S230 are determined by the electronic device according to the coplanar point cloud and the point cloud data obtained by analysis after receiving and analyzing the point cloud data, and the specific determining method is shown in steps S110 to S130 executed by the electronic device.
The embodiment of step S230 described above is, for example: the robot receives the control command sent by the electronic device through the HTTP protocol, the HTTPS protocol or the HTTP/2 protocol; HTTP/2 is version 2 of the hypertext transfer protocol, originally named HTTP 2.0 and abbreviated as h2 (i.e. encrypted connections based on TLS 1.2 or above) or h2c (non-encrypted connections), and is the second major version of the HTTP protocol. The standardized HTTP/2 is supported by browsers such as Chrome, Opera, Firefox, Internet Explorer, Safari, Amazon Silk and Edge.
Step S240: the robot sequentially moves to each position coordinate in the plurality of position coordinates, and performs acquisition scanning according to the orientation angle corresponding to each position coordinate to obtain a plurality of scanning images.
The embodiment for obtaining a plurality of scanned images in step S240 described above specifically includes, for example: sequentially moving to each position coordinate in the plurality of position coordinates through the servo motor and the speed reducer; and adjusting the orientation angle of the image acquisition device to the orientation angle corresponding to each position coordinate, and carrying out acquisition scanning by using the image acquisition device. In the implementation process, the robot sequentially moves to each position coordinate in the plurality of position coordinates through the servo motor and the speed reducer, adjusts the orientation angle of the image acquisition device to the orientation angle corresponding to each position coordinate, and uses the image acquisition device for acquisition scanning; this improves the efficiency of photographing the display articles to obtain images, while effectively avoiding the time-consuming and labor-intensive problem of photographing the display articles manually.
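The move-adjust-capture loop of step S240 can be sketched with a stand-in robot class. The method names here are hypothetical; a real implementation would drive the servo motor and speed reducer for movement and trigger the image acquisition device for each shot:

```python
from dataclasses import dataclass, field

@dataclass
class ScanningRobot:
    """Minimal stand-in for the scanning robot: it records one capture
    per commanded viewpoint instead of driving real hardware."""
    captures: list = field(default_factory=list)

    def move_to(self, position):
        # Real robot: servo motor + speed reducer drive to `position`.
        self.position = position

    def set_yaw(self, yaw_deg):
        # Real robot: mechanical arm adjusts the camera orientation.
        self.yaw = yaw_deg

    def capture(self):
        # Real robot: the image acquisition device takes one scan image.
        self.captures.append((self.position, self.yaw))

    def execute(self, viewpoints):
        for position, yaw in viewpoints:
            self.move_to(position)
            self.set_yaw(yaw)
            self.capture()
        return self.captures

robot = ScanningRobot()
images = robot.execute([((0.3, 0.0, 0.0), 180), ((0.0, 0.3, 0.0), 270)])
```

Each entry in `images` corresponds to one scanned image taken at one position coordinate with its orientation angle, matching the per-viewpoint acquisition described above.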
Step S250: the robot transmits a plurality of scanned images to the electronic device.
The robot in step S250 may be configured to transmit a plurality of scanned images to the electronic device, for example: the robot transmits the plurality of scanned images to the electronic device through an HTTP protocol, an HTTPs protocol, or an HTTP/2 protocol.
In the implementation process, the depth camera is used for collecting the display object to obtain point cloud data, and the point cloud data are sent to the electronic equipment, so that the electronic equipment calculates and sends a control command according to the point cloud data, and the control command comprises a plurality of position coordinates and orientation angles corresponding to the position coordinates for scanning the display object; sequentially moving to each position coordinate in the plurality of position coordinates, and acquiring and scanning according to the corresponding orientation angle of each position coordinate to obtain a plurality of scanning images; finally, a plurality of scanned images are sent to the electronic equipment; therefore, the efficiency of photographing the display articles to obtain images is improved, and meanwhile, the problem that time and labor are wasted when the display articles are photographed in a manual photographing mode is effectively solved.
Please refer to fig. 6, which illustrates a schematic structural diagram of an exhibition article scanning device provided in an embodiment of the present application; the embodiment of the application provides an exhibition article scanning device 300, which is applied to electronic equipment and comprises:
The point cloud data obtaining module 310 is configured to obtain point cloud data of the display object, where the point cloud data is obtained by collecting the display object by the robot.
The coplanar point cloud obtaining module 320 is configured to perform principal component analysis on the point cloud data to obtain a coplanar point cloud, where the coplanar point cloud characterizes a three-dimensional coordinate set of a common plane in the point cloud data.
The coordinate angle determining module 330 is configured to determine, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates of the acquired display object and orientation angles corresponding to the position coordinates.
The control command sending module 340 is configured to send a control command to the robot according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates, where the control command is configured to enable the robot to scan the display object according to the plurality of position coordinates and the orientation angles corresponding to the position coordinates, and return a plurality of scanned images for scanning the display object.
Optionally, in an embodiment of the present application, the coplanar point cloud obtaining module includes:
the point cloud vector obtaining module is used for carrying out singular value decomposition on a matrix formed by the point cloud data to obtain a point cloud vector.
And the coplanar point cloud determining module is used for determining the common plane represented by the central point of the point cloud data and the point cloud vector as a coplanar point cloud.
Optionally, in an embodiment of the present application, the coordinate angle determining module includes:
and the target data obtaining module is used for deleting all three-dimensional coordinates with the position lower than the coplanar point cloud from the point cloud data to obtain target data.
And the target data fitting module is used for fitting the target data by using the spherical model to obtain the fitted spherical center coordinates and spherical radius.
And the coordinate angle calculation module is used for calculating a plurality of position coordinates for scanning the display object and orientation angles corresponding to the position coordinates according to the spherical center coordinates and the spherical radius.
Optionally, in an embodiment of the present application, the display article scanning device further includes:
and the scanning image receiving module is used for receiving a plurality of scanning images sent by the robot.
And the three-dimensional model obtaining module is used for modeling the display object according to the plurality of scanned images to obtain a three-dimensional model.
Optionally, in an embodiment of the present application, the display article scanning apparatus may further include:
and the three-dimensional model mapping module is used for mapping the three-dimensional model according to the plurality of scanned images to obtain a mapped three-dimensional model.
The embodiment of the application also provides an exhibition article scanning device, which is applied to a robot and comprises:
The point cloud data acquisition module is used for acquiring the exhibition object through the depth camera to obtain point cloud data, and the point cloud data represents a three-dimensional coordinate set of the exhibition object.
And the point cloud data sending module is used for sending the point cloud data to the electronic equipment so that the electronic equipment calculates and sends a control command according to the point cloud data.
The control command receiving module is used for receiving a control command sent by the electronic equipment, wherein the control command comprises a plurality of position coordinates for scanning the exhibited article and orientation angles corresponding to the position coordinates, and the orientation angles corresponding to the position coordinates are determined according to coplanar point cloud and point cloud data obtained through analysis after the electronic equipment receives and analyzes the point cloud data.
And the scanning image obtaining module is used for sequentially moving to each position coordinate in the plurality of position coordinates, and carrying out acquisition scanning according to the orientation angle corresponding to each position coordinate to obtain a plurality of scanning images.
And the scanning image transmitting module is used for transmitting the plurality of scanning images to the electronic equipment.
Optionally, in an embodiment of the present application, the robot includes: the device comprises a servo motor, a speed reducer and image acquisition equipment; a scanned image acquisition module comprising:
And the robot moving module is used for sequentially moving to each position coordinate in the plurality of position coordinates through the servo motor and the speed reducer.
And the robot scanning module is used for adjusting the orientation angle of the image acquisition equipment to the orientation angle corresponding to each position coordinate and carrying out acquisition scanning by using the image acquisition equipment.
It should be understood that, corresponding to the above method embodiments for scanning an exhibited article, the apparatus is capable of executing the steps involved in those method embodiments; the specific functions of the apparatus may be referred to in the description above, and detailed descriptions are omitted here as appropriate to avoid repetition. The apparatus includes at least one software functional module that can be stored in memory in the form of software or firmware or solidified in the operating system (Operating System, OS) of the device.
Please refer to fig. 7, which illustrates a schematic structural diagram of an electronic device provided in an embodiment of the present application. The electronic device 400 provided in this embodiment includes a processor 410 and a memory 420, where the memory 420 stores machine-readable instructions executable by the processor 410, and the instructions, when executed by the processor 410, perform the method described above.
This embodiment also provides a storage medium 430 on which a computer program is stored; the computer program, when executed by the processor 410, performs the method described above.
The storage medium 430 may be implemented by any type of volatile or non-volatile memory device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code comprising one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures; for example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It should further be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a special-purpose hardware-based system that performs the specified functions or acts, or by a combination of special-purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The foregoing description is merely an optional implementation of the embodiments of the present application, but the scope of the embodiments is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the embodiments of the present application, and such changes or substitutions shall be covered by the scope of the embodiments of the present application.

Claims (8)

1. A method for scanning an exhibited article, applied to an electronic device, comprising:
acquiring point cloud data of an exhibited article, wherein the point cloud data is acquired by a robot;
performing principal component analysis on the point cloud data to obtain a coplanar point cloud, wherein the coplanar point cloud represents a three-dimensional coordinate set of a common plane in the point cloud data;
determining, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for acquiring and scanning the exhibited article and the orientation angle corresponding to each position coordinate;
sending a control command to the robot according to the plurality of position coordinates and the corresponding orientation angles, wherein the control command is used to cause the robot to scan the exhibited article according to the plurality of position coordinates and the corresponding orientation angles and to return a plurality of scan images of the exhibited article;
wherein the performing principal component analysis on the point cloud data to obtain a coplanar point cloud comprises: performing singular value decomposition on a matrix formed by the point cloud data to obtain a point cloud vector; and determining the common plane represented by the center point of the point cloud data and the point cloud vector as the coplanar point cloud;
and wherein the determining, according to the point cloud data and the coplanar point cloud, the plurality of position coordinates and the corresponding orientation angles comprises: deleting, from the point cloud data, all three-dimensional coordinates whose positions are lower than the coplanar point cloud to obtain target data; fitting the target data with a spherical model to obtain fitted sphere center coordinates and a sphere radius; and calculating, according to the sphere center coordinates and the sphere radius, the plurality of position coordinates for acquiring and scanning the exhibited article and the orientation angle corresponding to each position coordinate.
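The "fitting the target data with a spherical model" step in claim 1 can be sketched with the standard algebraic least-squares sphere fit: each point contributes one row of the linear system x²+y²+z² = 2ax + 2by + 2cz + d, whose solution gives the center (a, b, c) and radius √(d + a² + b² + c²). This particular formulation and the small Gaussian-elimination solver are illustrative choices, not taken from the disclosure.

```python
import math

def solve4(m, v):
    """Gaussian elimination with partial pivoting for a 4x4 system."""
    n = 4
    aug = [m[i][:] + [v[i]] for i in range(n)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(col + 1, n):
            factor = aug[r][col] / aug[col][col]
            for c in range(col, n + 1):
                aug[r][c] -= factor * aug[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (aug[r][n] - sum(aug[r][c] * x[c] for c in range(r + 1, n))) / aug[r][r]
    return x

def fit_sphere(points):
    """Algebraic least-squares sphere fit over the target data.

    Accumulates the 4x4 normal equations A^T A w = A^T f, where each
    row of A is (2x, 2y, 2z, 1) and f = x^2 + y^2 + z^2.
    """
    ata = [[0.0] * 4 for _ in range(4)]
    atf = [0.0] * 4
    for x, y, z in points:
        row = (2 * x, 2 * y, 2 * z, 1.0)
        f = x * x + y * y + z * z
        for i in range(4):
            atf[i] += row[i] * f
            for j in range(4):
                ata[i][j] += row[i] * row[j]
    a, b, c, d = solve4(ata, atf)
    return (a, b, c), math.sqrt(d + a * a + b * b + c * c)
```

With at least four non-degenerate points the normal equations are solvable; for points lying exactly on a sphere the fitted center and radius are recovered up to floating-point error.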
2. The method of claim 1, further comprising, after the sending of the control command to the robot according to the plurality of position coordinates and the corresponding orientation angles:
receiving the plurality of scan images sent by the robot;
and modeling the exhibited article according to the plurality of scan images to obtain a three-dimensional model.
3. The method of claim 2, further comprising, after the obtaining of the three-dimensional model:
and mapping the three-dimensional model according to the plurality of scanned images to obtain a mapped three-dimensional model.
4. A method for scanning an exhibited article, applied to a robot, comprising:
acquiring the exhibited article through a depth camera to obtain point cloud data, wherein the point cloud data represents a three-dimensional coordinate set of the exhibited article;
sending the point cloud data to an electronic device, so that the electronic device calculates and sends a control command according to the point cloud data;
receiving the control command sent by the electronic device, wherein the control command includes a plurality of position coordinates for scanning the exhibited article and the orientation angle corresponding to each position coordinate, and the orientation angles are determined by the electronic device, after it receives and analyzes the point cloud data, according to the coplanar point cloud obtained from the analysis and the point cloud data;
moving sequentially to each of the plurality of position coordinates, and performing acquisition scanning at the orientation angle corresponding to each position coordinate to obtain a plurality of scan images;
and transmitting the plurality of scan images to the electronic device;
wherein the coplanar point cloud is obtained by performing singular value decomposition on a matrix formed by the point cloud data to obtain a point cloud vector, and determining the common plane represented by the center point of the point cloud data and the point cloud vector as the coplanar point cloud;
and wherein the plurality of position coordinates and the corresponding orientation angles are obtained by deleting, from the point cloud data, all three-dimensional coordinates whose positions are lower than the coplanar point cloud to obtain target data, fitting the target data with a spherical model to obtain fitted sphere center coordinates and a sphere radius, and calculating, according to the sphere center coordinates and the sphere radius, the plurality of position coordinates for acquiring and scanning the exhibited article and the orientation angle corresponding to each position coordinate.
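One way to realize the final step above, calculating position coordinates and orientation angles from the fitted sphere center and radius, is to place camera stations on a ring around the sphere and aim each one back at the center. The ring layout, the standoff factor, and the yaw/pitch convention in this sketch are illustrative assumptions; the claim does not prescribe a particular arrangement.

```python
import math

def viewpoints(center, radius, n_azimuth=8, elevation_deg=30.0, standoff=2.0):
    """Place scan positions on a ring around the fitted sphere.

    Each position sits at distance radius * standoff from the center;
    the orientation (yaw, pitch) points the camera back at the center.
    """
    cx, cy, cz = center
    d = radius * standoff
    elev = math.radians(elevation_deg)
    out = []
    for k in range(n_azimuth):
        az = 2 * math.pi * k / n_azimuth
        x = cx + d * math.cos(elev) * math.cos(az)
        y = cy + d * math.cos(elev) * math.sin(az)
        z = cz + d * math.sin(elev)
        # Yaw and pitch of the ray from the station back to the center.
        yaw = math.atan2(cy - y, cx - x)
        pitch = math.atan2(cz - z, math.hypot(cx - x, cy - y))
        out.append(((x, y, z), (yaw, pitch)))
    return out
```

The resulting list of (position, orientation) pairs is the kind of payload a control command could carry to the robot.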
5. The method of claim 4, wherein the robot comprises a servo motor, a speed reducer, and an image acquisition device, and wherein the moving sequentially to each of the plurality of position coordinates and performing acquisition scanning at the corresponding orientation angle comprises:
moving sequentially to each of the plurality of position coordinates by means of the servo motor and the speed reducer;
and adjusting the orientation angle of the image acquisition device to the orientation angle corresponding to each position coordinate, and performing acquisition scanning with the image acquisition device.
6. An exhibited article scanning apparatus, applied to an electronic device, comprising:
a point cloud data acquisition module, configured to acquire point cloud data of the exhibited article, wherein the point cloud data is acquired by a robot;
a coplanar point cloud obtaining module, configured to perform principal component analysis on the point cloud data to obtain a coplanar point cloud, wherein the coplanar point cloud represents a three-dimensional coordinate set of a common plane in the point cloud data;
a coordinate angle determining module, configured to determine, according to the point cloud data and the coplanar point cloud, a plurality of position coordinates for acquiring and scanning the exhibited article and the orientation angle corresponding to each position coordinate;
a control command sending module, configured to send a control command to the robot according to the plurality of position coordinates and the corresponding orientation angles, wherein the control command is configured to cause the robot to scan the exhibited article according to the plurality of position coordinates and the corresponding orientation angles and to return a plurality of scan images of the exhibited article;
wherein the performing principal component analysis on the point cloud data to obtain a coplanar point cloud comprises: performing singular value decomposition on a matrix formed by the point cloud data to obtain a point cloud vector; and determining the common plane represented by the center point of the point cloud data and the point cloud vector as the coplanar point cloud;
and wherein the determining, according to the point cloud data and the coplanar point cloud, the plurality of position coordinates and the corresponding orientation angles comprises: deleting, from the point cloud data, all three-dimensional coordinates whose positions are lower than the coplanar point cloud to obtain target data; fitting the target data with a spherical model to obtain fitted sphere center coordinates and a sphere radius; and calculating, according to the sphere center coordinates and the sphere radius, the plurality of position coordinates for acquiring and scanning the exhibited article and the orientation angle corresponding to each position coordinate.
7. An electronic device, comprising: a processor and a memory storing machine-readable instructions executable by the processor, wherein the machine-readable instructions, when executed by the processor, perform the method of any one of claims 1 to 3.
8. A storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of claims 1 to 5.
CN202010481765.7A 2020-05-27 2020-05-27 Exhibition article scanning method and device, electronic equipment and storage medium Active CN113744378B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010481765.7A CN113744378B (en) 2020-05-27 2020-05-27 Exhibition article scanning method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113744378A CN113744378A (en) 2021-12-03
CN113744378B true CN113744378B (en) 2024-02-20

Family

ID=78727849

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010481765.7A Active CN113744378B (en) 2020-05-27 2020-05-27 Exhibition article scanning method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113744378B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114664126A (en) * 2022-03-23 2022-06-24 中国地质大学(武汉) Art design multimedia teaching instrument based on computer network and operation method thereof

Citations (7)

Publication number Priority date Publication date Assignee Title
CN105574812A (en) * 2015-12-14 2016-05-11 深圳先进技术研究院 Multi-angle three-dimensional data registration method and device
CN107782240A (en) * 2017-09-27 2018-03-09 首都师范大学 A kind of two dimensional laser scanning instrument scaling method, system and device
WO2019161558A1 (en) * 2018-02-26 2019-08-29 Intel Corporation Method and system of point cloud registration for image processing
CN110443840A (en) * 2019-08-07 2019-11-12 山东理工大学 The optimization method of sampling point set initial registration in surface in kind
CN111028340A (en) * 2019-12-10 2020-04-17 苏州大学 Three-dimensional reconstruction method, device, equipment and system in precision assembly
CN111080805A (en) * 2019-11-26 2020-04-28 北京云聚智慧科技有限公司 Method and device for generating three-dimensional block diagram of marked object, electronic equipment and storage medium
CN111179433A (en) * 2019-12-31 2020-05-19 杭州阜博科技有限公司 Three-dimensional modeling method and device for target object, electronic device and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9996974B2 (en) * 2013-08-30 2018-06-12 Qualcomm Incorporated Method and apparatus for representing a physical scene

Non-Patent Citations (3)

Title
Compression of plenoptic point clouds; Gustavo Sandri et al.; IEEE Transactions on Image Processing; Vol. 28, No. 3; pp. 1419-1427 *
3D point cloud stitching technology for large-size topography measurement; Pi Jiajing; China Master's Theses Full-text Database (electronic journal); I140-689 *
Point cloud homography iterative closest point registration algorithm for three-dimensional reconstruction; Wei Shengbin et al.; Acta Optica Sinica; Vol. 35, No. 5; pp. 252-258 *


Similar Documents

Publication Publication Date Title
US10740694B2 (en) System and method for capture and adaptive data generation for training for machine vision
US9235928B2 (en) 3D body modeling, from a single or multiple 3D cameras, in the presence of motion
Sweeney et al. Solving for relative pose with a partially known rotation is a quadratic eigenvalue problem
KR101791590B1 (en) Object pose recognition apparatus and method using the same
CN111738261A (en) Pose estimation and correction-based disordered target grabbing method for single-image robot
KR100855657B1 (en) Magnetic Position Estimation System and Method of Mobile Robot Using Monocular Zoom Camera
CN108229416B (en) Robot SLAM method based on semantic segmentation technology
US20140206443A1 (en) Camera pose estimation for 3d reconstruction
EP3067658B1 (en) 3d-shape measurement device, 3d-shape measurement method, and 3d-shape measurement program
Taryudi et al. Eye to hand calibration using ANFIS for stereo vision-based object manipulation system
Ye et al. 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
CN111515950A (en) Method, device and equipment for determining transformation relation of robot coordinate system and storage medium
Wu et al. This is the way: Sensors auto-calibration approach based on deep learning for self-driving cars
WO2018233514A1 (en) Pose measurement method and device, and storage medium
JP7114686B2 (en) Augmented reality device and positioning method
EP3599588B1 (en) Rendering an object
CN113744378B (en) Exhibition article scanning method and device, electronic equipment and storage medium
EP4089637B1 (en) Hybrid feature matching between intensity image and color image
EP4261789A1 (en) Method for displaying posture of robot in three-dimensional map, apparatus, device, and storage medium
Luo et al. A structural 3D displacement measurement method using monocular camera based on multiple feature points tracking
CN117788686A (en) Three-dimensional scene reconstruction method and device based on 2D image and electronic equipment
US12002227B1 (en) Deep partial point cloud registration of objects
CN116616812A (en) NeRF positioning-based ultrasonic autonomous navigation method
Do On the neural computation of the scale factor in perspective transformation camera model
Yaqoob et al. Performance evaluation of mobile stereonet for real time navigation in autonomous mobile robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant