Workshop digital twin oriented augmented reality system and method
Technical Field
The invention relates to a workshop digital twin oriented augmented reality system and a method, and belongs to the field of intelligent manufacturing.
Background
Current digital twin models for intelligent factories and intelligent workshops require a complex three-dimensional model of the workshop to be built first; the digital twin information generated by the workshop digital twin system is then displayed on the three-dimensional model corresponding to the workshop equipment. The modeling effort is large, and whenever the workshop layout changes the three-dimensional model must be changed as well, so the later maintenance workload is heavy and rendering the complex model consumes substantial computing resources.
Disclosure of Invention
In order to solve the above technical problem, the invention provides a workshop digital twin oriented augmented reality system that has a short development period and a flexible display effect: equipment information is superimposed directly on the workshop video image, without establishing a complex three-dimensional model of the physical workshop system.
The technical scheme of the invention is as follows:
The workshop digital twin oriented augmented reality system comprises a physical workshop and a workshop digital twin system, the workshop digital twin system outputting digital twin information, and further comprises the following modules:
a camera group: fixed in the physical workshop, it collects video images of the current state of the physical workshop;
an image acquisition module: it acquires the workshop video image shot by the camera selected by the user;
a workshop three-dimensional model labeling module: it constructs a virtual three-dimensional model of the physical workshop, in which the virtual three-dimensional models of the equipment are built from proxy shapes, each proxy shape representing the spatial shape and position of a piece of equipment in the physical workshop; each proxy shape is then labeled, a one-to-one correspondence between labels and equipment identifiers is established, and the workshop three-dimensional labeling model is generated;
an equipment identification module: according to the user settings, it identifies the equipment whose information needs to be displayed; specifically, it acquires the current frame image from the workshop video image and, from the positional correspondence between each proxy shape in the three-dimensional labeling model and the equipment in the current frame image together with the labels of the proxy shapes, identifies the equipment identifiers of the equipment to be identified and their imaging areas on the current frame image;
a data query and AR registration module: it queries the workshop digital twin system with the equipment identifier to obtain the corresponding digital twin information and, at the same time, determines an information display area for each piece of equipment from its imaging area on the current frame image;
an AR display module: it superimposes the digital twin information of the equipment on the information display area of that equipment in the current frame image, thereby realizing AR display of the workshop equipment information.
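For orientation only, the cooperation of these modules over a single video frame can be sketched as follows; all function and type names (`identify_devices`, `query_twin`, `place_display_area`, `overlay_info`, `DeviceHit`) are hypothetical placeholders, not part of the claimed system:

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class DeviceHit:
    device_id: str                           # equipment identifier resolved from the proxy-shape label
    imaging_area: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) in the frame

def process_frame(frame,
                  identify_devices: Callable,    # equipment identification module
                  query_twin: Callable,          # data query half of the query/AR-registration module
                  place_display_area: Callable,  # AR registration: choose where to draw
                  overlay_info: Callable):       # AR display module
    """One pass of the identify -> query -> register -> display loop (frame: H x W x 3 image array)."""
    annotated = frame.copy()
    for hit in identify_devices(frame):            # yields DeviceHit objects
        twin_info = query_twin(hit.device_id)      # digital twin information for this equipment
        area = place_display_area(hit.imaging_area, annotated.shape)
        annotated = overlay_info(annotated, area, twin_info)
    return annotated
```

Because the current frame changes continuously, this function would be called once per frame in a loop.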
Preferably, the camera group comprises a plurality of cameras installed in different areas, each camera comprising a lens, an image sensor, a pan-tilt head and an image sensor posture detection module; the pan-tilt head controls the orientation of the image sensor, and the image sensor posture detection module detects the posture of the image sensor. The image acquisition module also acquires the position and the posture of the image sensor of the currently selected camera; the equipment identification module obtains this position and posture and sends them to the workshop three-dimensional model labeling module. The workshop three-dimensional model labeling module sets the position and posture of the virtual image sensor of the virtual camera corresponding to the currently selected camera according to the conversion relationship between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, so that the virtual imaging model in the workshop three-dimensional labeling model is consistent with the imaging model of the physical workshop, and synthesizes a virtual composite image of the workshop three-dimensional labeling model according to the camera imaging model. The equipment identification module then determines the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image from the consistency of the imaging models, imaging positions and imaging postures of the two images, reads the labels of the equipment to be identified (as set by the user) from the virtual composite image, and from those labels determines the equipment identifiers and the imaging areas of the equipment in the current frame image.
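As one possible way to keep the virtual imaging model consistent with the physical one, the reported sensor position P(x, y, z) and pan-tilt posture Q(α, θ) can be converted into a world-to-camera extrinsic matrix and applied to the virtual camera. The angle convention below (α = pan about the vertical axis, θ = tilt about the camera's horizontal axis) is an illustrative assumption; the actual convention depends on the pan-tilt hardware and the modeling software.

```python
import numpy as np

def pan_tilt_extrinsic(position, pan_alpha, tilt_theta):
    """Build a 4x4 world-to-camera matrix from sensor position P and pan/tilt angles Q.

    Assumed convention (illustrative only): pan rotates about the world z-axis,
    tilt rotates about the camera's local x-axis; angles are in radians.
    """
    ca, sa = np.cos(pan_alpha), np.sin(pan_alpha)
    ct, st = np.cos(tilt_theta), np.sin(tilt_theta)
    R_pan = np.array([[ ca, -sa, 0.0],
                      [ sa,  ca, 0.0],
                      [0.0, 0.0, 1.0]])
    R_tilt = np.array([[1.0, 0.0, 0.0],
                       [0.0,  ct, -st],
                       [0.0,  st,  ct]])
    R_wc = (R_pan @ R_tilt).T            # world-to-camera rotation
    t = -R_wc @ np.asarray(position)     # translation placing the camera at P
    extrinsic = np.eye(4)
    extrinsic[:3, :3] = R_wc
    extrinsic[:3, 3] = t
    return extrinsic

# The same extrinsic (plus the physical camera's intrinsics) is applied to the
# virtual camera so that the virtual composite image and the physical frame
# share one imaging model.
```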
Preferably, the conversion relationship between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system is an identity relationship; specifically, the workshop three-dimensional model labeling module further unifies the workshop three-dimensional labeling model coordinate system with the physical workshop coordinate system, so that the coordinates of each piece of equipment in the physical workshop coincide with the coordinates of its proxy shape in the workshop three-dimensional labeling model, and the coordinates of each camera in the physical workshop coincide with the coordinates of the corresponding virtual camera in the workshop three-dimensional labeling model.
Preferably, the labels are color labels: the proxy shapes corresponding to different pieces of equipment are rendered in different colors, and a one-to-one mapping between colors and equipment identifiers is established. Using the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image, the color value of the relevant pixel is read, the equipment identifier of the corresponding equipment in the current frame image is determined from that color value, and the imaging area of the equipment in the current frame image is determined from the extent of the pixels having that color value.
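One simple scheme that guarantees the one-to-one mapping between colors and equipment identifiers is to encode an integer device index directly into a 24-bit RGB value when the proxy shapes are rendered. The sketch below is illustrative only and assumes flat, unlit rendering with anti-aliasing disabled so that the rendered color can be read back exactly:

```python
def device_index_to_color(index: int) -> tuple:
    """Encode a device index (1 .. 2**24 - 1) into a unique flat RGB color."""
    if not 0 < index < 2**24:
        raise ValueError("index out of 24-bit range")
    return ((index >> 16) & 0xFF, (index >> 8) & 0xFF, index & 0xFF)

def color_to_device_index(rgb: tuple) -> int:
    """Inverse mapping used when reading a pixel back from the virtual composite image."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

# Example (hypothetical identifier): equipment "MC-07" registered as index 7 has its
# proxy shape rendered in color (0, 0, 7); reading that pixel back yields 7 -> "MC-07".
```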
Preferably, the system further comprises a workshop information display setting module and an AR information display interface library. The AR information display interface library defines various types of display interfaces, and the workshop information display setting module is used to set, for each piece of equipment, the display interface type of its parameters and/or digital twin information and the information or parameters that the interface displays. After the data query and AR registration module has obtained the digital twin information of a piece of equipment and determined its information display area, it also sends the display interface type, the information display area, and the information or parameters to be displayed to the AR display module, according to the settings for that equipment in the workshop information display setting module. The AR display module obtains a display interface of the corresponding type from the AR information display interface library, superimposes it on the information display area of the equipment in the current frame image, and displays the information or parameters of the equipment through that interface.
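For illustration, such a library can be organized as a registry mapping an interface type name to a drawing routine. The sketch below uses OpenCV and a hypothetical `text_panel` control as the simplest possible stand-in; the function names and registry structure are assumptions, not a prescribed implementation.

```python
import cv2

def draw_text_panel(frame, area, info):
    """Minimal stand-in for a display interface control: a dark box with key/value lines."""
    x0, y0, x1, y1 = area
    cv2.rectangle(frame, (x0, y0), (x1, y1), (40, 40, 40), thickness=-1)
    for i, (key, value) in enumerate(info.items()):
        cv2.putText(frame, f"{key}: {value}", (x0 + 5, y0 + 20 + 18 * i),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.45, (255, 255, 255), 1)
    return frame

# Registry: interface type -> drawing routine. A dashboard or virtual oscilloscope
# control would register its own routine here in the same way.
AR_INTERFACE_LIBRARY = {
    "text_panel": draw_text_panel,
}

def render_interface(frame, interface_type, area, info):
    return AR_INTERFACE_LIBRARY[interface_type](frame, area, info)
```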
Preferably, the equipment whose information needs to be displayed is identified according to the user settings, specifically: if the setting is to display the information of all equipment, all equipment in the current frame image is identified; or, if the setting is to display the information of the equipment pointed to by the mouse, the equipment identification module reads the position of the mouse on the current frame image, determines the corresponding position in the three-dimensional labeling model according to the conversion relationship between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, thereby determines the label of the proxy shape at that position, and from that label identifies the equipment identifier of the equipment pointed to by the mouse in the current frame image and the imaging area of that equipment on the current frame image.
The invention also provides a workshop digital twin oriented augmented reality method.
A workshop digital twin oriented augmented reality method comprises the following steps:
step 1, constructing a virtual three-dimensional model of the physical workshop, in which the virtual three-dimensional models of the equipment are built from proxy shapes, each proxy shape representing the spatial shape and position of a piece of equipment in the physical workshop; then labeling each proxy shape, establishing a one-to-one correspondence between labels and equipment identifiers, and generating the workshop three-dimensional labeling model;
step 2, fixing a plurality of cameras in the physical workshop and collecting video images of the current state of the physical workshop through the cameras;
step 3, acquiring the workshop video image shot by the camera selected by the user;
step 4, identifying the equipment whose information needs to be displayed according to the user settings: acquiring the current frame image from the workshop video image and, from the positional correspondence between each proxy shape in the three-dimensional labeling model and the equipment in the current frame image together with the labels of the proxy shapes, identifying the equipment identifiers of the equipment to be identified and their imaging areas on the current frame image;
step 5, querying the workshop digital twin system with the equipment identifier to obtain the corresponding digital twin information and, at the same time, determining the information display area of the equipment from its imaging area on the current frame image;
step 6, superimposing the digital twin information of the equipment on the information display area of that equipment in the current frame image, thereby realizing AR display of the workshop equipment information.
Preferably, in step 2 the camera group comprises a plurality of cameras installed in different areas, each camera comprising a lens, an image sensor, a pan-tilt head and an image sensor posture detection module; the pan-tilt head controls the orientation of the image sensor, and the image sensor posture detection module detects the posture of the image sensor. In step 3, the position and the posture of the image sensor of the currently selected camera are also acquired. In step 4, the position and posture of the virtual image sensor of the virtual camera corresponding to the currently selected camera are first set from the position and posture of the image sensor and the conversion relationship between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, so that the virtual imaging model in the workshop three-dimensional labeling model is consistent with the imaging model of the physical workshop, and a virtual composite image of the workshop three-dimensional labeling model is synthesized according to the camera imaging model; then, from the consistency of the imaging models, imaging positions and imaging postures of the virtual composite image and the current frame image, the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image is determined, the labels of the equipment to be identified (as set by the user) are read from the virtual composite image, and the equipment identifiers and imaging areas of the equipment in the current frame image are determined from those labels.
Preferably, the labels are color labels: the proxy shapes corresponding to different pieces of equipment are rendered in different colors, and a one-to-one mapping between colors and equipment identifiers is established. Using the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image, the color value of the relevant pixel is read, the equipment identifier of the corresponding equipment in the current frame image is determined from that color value, and the imaging area of the equipment in the current frame image is determined from the extent of the pixels having that color value.
Preferably, the method further employs a workshop information display setting module and an AR information display interface library. The AR information display interface library defines various types of display interfaces, and the workshop information display setting module is used to set, for each piece of equipment, the display interface type of its parameters and/or digital twin information and the information or parameters that the interface displays. In step 5, after the digital twin information and the information display area of the equipment have been obtained, the settings for that equipment in the workshop information display setting module are read to obtain the display interface type and the information or parameters to be displayed. In step 6, a display interface of the corresponding type is obtained from the AR information display interface library according to the obtained display interface type, the display interface is superimposed on the information display area of the equipment in the current frame image, and the equipment information or parameters to be displayed are shown through that interface.
Preferably, the equipment whose information needs to be displayed is identified according to the user settings, specifically: if the setting is to display the information of all equipment, all equipment in the current frame image is identified in step 4; or, if the setting is to display the information of the equipment pointed to by the mouse, then in step 4 the position of the mouse on the current frame image is read, the corresponding position in the three-dimensional labeling model is determined according to the conversion relationship between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, thereby the label of the proxy shape at that position is determined, and from that label the equipment identifier of the equipment pointed to by the mouse in the current frame image and its imaging area on the current frame image are identified.
The invention has the following beneficial effects:
1. The workshop digital twin oriented augmented reality system and method of the invention have a short development period and a flexible display effect, since equipment information is superimposed directly on the workshop video image.
2. The workshop digital twin oriented augmented reality system and method of the invention do not require a complex three-dimensional model of the physical workshop system to be built; the video image replaces the workshop three-dimensional model for display, so the picture is displayed more smoothly.
3. The workshop digital twin oriented augmented reality system and method of the invention allow the user to set whether the information of each piece of equipment is displayed, as well as the display interface type, interface parameters and the like, so the display mode is more flexible and the operation is simpler and more convenient.
4. The workshop digital twin oriented augmented reality system and method of the invention can determine the equipment identifier of the equipment pointed to by the mouse and display the related information of that equipment.
Drawings
FIG. 1 is a system block diagram of a workshop digital twin oriented augmented reality system of the present invention;
FIG. 2 is a flow chart of the augmented reality method of the present invention when displaying the information of all equipment;
FIG. 3 is a flow chart of the augmented reality method of the present invention when displaying the information of the equipment pointed to by the mouse.
Detailed Description
The invention is described in detail below with reference to the figures and the specific embodiments.
Example one
Referring to FIG. 1 and FIG. 2, a workshop digital twin oriented augmented reality system comprises a physical workshop, a workshop digital twin system, a camera group, an image acquisition module, a workshop three-dimensional model labeling module, an equipment identification module, a data query and AR registration module, and an AR display module. The workshop digital twin system outputs digital twin information. The camera group generally consists of at least two cameras, is fixed in the physical workshop, and collects video images of the current state of the physical workshop. The image acquisition module acquires the workshop video image shot by the camera selected by the user. The workshop three-dimensional model labeling module builds a virtual three-dimensional model of the physical workshop with three-dimensional modeling software (such as MultiGen Creator); the virtual three-dimensional models of the equipment in the physical workshop are built from proxy shapes (basic primitives such as cuboids, spheres and ellipsoids), each proxy shape representing the spatial shape and position of a piece of equipment in the physical workshop; each proxy shape is then labeled, a one-to-one correspondence between labels and equipment identifiers is established, and the workshop three-dimensional labeling model is generated. The equipment identification module identifies the equipment whose information needs to be displayed according to the user settings; specifically, it acquires the current frame image from the workshop video image and, from the positional correspondence between each proxy shape in the three-dimensional labeling model and the equipment in the current frame image together with the labels of the proxy shapes, identifies the equipment identifiers of the equipment to be identified and their imaging areas on the current frame image. The data query and AR registration module queries the workshop digital twin system with the equipment identifier to obtain the corresponding digital twin information and, at the same time, determines an information display area for each piece of equipment from its imaging area on the current frame image. The AR display module superimposes the digital twin information of the equipment on the information display area of that equipment in the current frame image, thereby realizing AR display of the workshop equipment information. Since the current frame image is extracted from the video image, it changes dynamically, so equipment identification, data query and AR display form a dynamic loop.
The information display area may be within the imaging area or in the vicinity of the imaging area.
The workshop digital twin system comprises sensors, edge computing equipment, a bus communication module, a digital twin model system and the digital twin information. The sensors and the edge computing equipment detect workshop state information, such as machining information of the machine tools, machine task information, logistics information and robot body information; this state information is transmitted to the digital twin model system through the bus communication module. The workshop digital twin model system is the mapping of the physical workshop in the computer and contains simulation and prediction models of the physical workshop; it takes the workshop state information as input and generates the digital twin information, which comprises the simulation and prediction information together with the state information.
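For context, the data query and AR registration module only needs to look up digital twin information by equipment identifier. A minimal, purely hypothetical query interface (the class and field names are placeholders; the real workshop digital twin system is not specified here) might look like:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class DigitalTwinInfo:
    device_id: str
    state: Dict[str, Any] = field(default_factory=dict)       # measured workshop state information
    prediction: Dict[str, Any] = field(default_factory=dict)  # simulation / prediction output

class WorkshopDigitalTwinClient:
    """Hypothetical client wrapping queries to the workshop digital twin system."""

    def __init__(self, store: Dict[str, DigitalTwinInfo]):
        self._store = store  # stand-in for the actual digital twin model system

    def query(self, device_id: str) -> DigitalTwinInfo:
        return self._store[device_id]
```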
The camera group comprises a plurality of cameras installed in different areas. Each camera comprises a lens, an image sensor, a pan-tilt head and an image sensor posture detection module; the pan-tilt head controls the orientation of the image sensor, and the image sensor posture detection module detects the posture of the image sensor.
The equipment identification process is now explained in detail for the case in which the conversion relationship between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system is an identity relationship and the labels are color labels.
The workshop three-dimensional model labeling module further renders the proxy shapes of different pieces of equipment in different colors, establishes a one-to-one mapping between colors and equipment identifiers, and unifies the workshop three-dimensional labeling model coordinate system with the physical workshop coordinate system, so that the equipment coordinates of the physical workshop coincide with the coordinates of the corresponding proxy shapes in the workshop three-dimensional labeling model and the coordinates of each camera of the physical workshop coincide with the coordinates of the corresponding virtual camera in the workshop three-dimensional labeling model. The image acquisition module acquires the workshop video image shot by the camera selected by the user, together with the position P(x, y, z) and the posture Q(α, θ) of the image sensor of the currently selected camera. The equipment identification module obtains P(x, y, z) and Q(α, θ) and sends them to the workshop three-dimensional model labeling module, which sets the position of the virtual image sensor of the corresponding virtual camera to P(x, y, z) and its posture to Q(α, θ), so that the virtual imaging model in the workshop three-dimensional labeling model is consistent with the imaging model of the physical workshop, and synthesizes a virtual composite image of the workshop three-dimensional labeling model according to the camera imaging model. The equipment identification module then reads the current frame image of the workshop video image and, using the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image, reads the color value of the relevant pixel in the virtual composite image, determines the equipment identifier of the corresponding equipment in the current frame image from the color-to-identifier mapping, and determines the imaging area of the equipment in the current frame image from the extent of the pixels having that color value.
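Once the virtual composite image is rendered from the same pose as the physical frame, identifying a piece of equipment reduces to reading back pixel colors. A minimal NumPy sketch follows, assuming the composite image is an H×W×3 array of flat, unshaded label colors so that the rendered color can be read back exactly:

```python
import numpy as np

def identify_device_at(composite, pixel, color_to_id):
    """Return the equipment identifier whose proxy shape covers the given (row, col) pixel.

    composite:   H x W x 3 array of flat label colors (the virtual composite image)
    pixel:       (row, col) position, identical in the composite and the current frame
    color_to_id: dict mapping an (R, G, B) tuple to an equipment identifier
    """
    color = tuple(int(c) for c in composite[pixel[0], pixel[1]])
    return color_to_id.get(color)  # None if the pixel shows background, i.e. no equipment
```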
In this embodiment, the workshop digital twin oriented augmented reality system further comprises a workshop information display setting module and an AR information display interface library. The AR information display interface library defines various display interfaces, such as a dashboard display interface control, a nixie tube display interface control and a virtual oscilloscope display interface control. The workshop information display setting module is used to set, for each piece of equipment, the display interface type of its parameters and/or digital twin information and the information or parameters that the interface displays. After the data query and AR registration module has obtained the digital twin information of a piece of equipment and determined its information display area, it also sends the display interface type, the information display area, and the information or parameters to be displayed to the AR display module, according to the settings for that equipment in the workshop information display setting module. The AR display module obtains a display interface of the corresponding type from the AR information display interface library, superimposes it on the information display area of the equipment in the current frame image, and displays the information or parameters of the equipment through that interface.
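As an illustration only, the per-equipment settings held by the workshop information display setting module could be as simple as a mapping from equipment identifier to interface type and parameter list; the identifiers, parameter names and the flat-dictionary twin-information format below are hypothetical.

```python
# Hypothetical per-equipment display settings consumed by the data query and
# AR registration module (equipment identifiers and parameter names are examples).
WORKSHOP_DISPLAY_SETTINGS = {
    "MC-07": {                      # a machine tool
        "interface_type": "dashboard",
        "parameters": ["spindle_speed", "feed_rate", "predicted_tool_wear"],
    },
    "AGV-02": {                     # a logistics vehicle
        "interface_type": "text_panel",
        "parameters": ["battery_level", "current_task"],
    },
}

def display_plan(device_id, twin_info):
    """Return (interface_type, values) for the AR display module.

    twin_info is assumed here to be a flat dict of parameter name -> value.
    """
    setting = WORKSHOP_DISPLAY_SETTINGS[device_id]
    values = {name: twin_info.get(name) for name in setting["parameters"]}
    return setting["interface_type"], values
```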
In this embodiment, identifying the equipment whose information needs to be displayed according to the user settings works as follows. If the user setting is to display the information of all equipment in the current frame image, all equipment in the current frame image is identified, and the information of all identified equipment is obtained and superimposed on the current frame image for display. Alternatively, if the setting is to display the information of the equipment pointed to by the user's mouse (see FIG. 3), the equipment identification module reads the position of the mouse on the current frame image, determines the corresponding position in the three-dimensional labeling model according to the conversion relationship between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, thereby determines the label of the proxy shape at that position, identifies from that label the equipment identifier of the equipment pointed to by the mouse in the current frame image and its imaging area on the current frame image, and then superimposes the digital twin information of that equipment on its information display area in the current frame image.
Example two
Referring to FIG. 1 and FIG. 2, a workshop digital twin oriented augmented reality method comprises the following steps:
step 1, constructing a virtual three-dimensional model of the physical workshop, in which the virtual three-dimensional models of the equipment are built from proxy shapes, each proxy shape representing the spatial shape and position of a piece of equipment in the physical workshop; then labeling each proxy shape, establishing a one-to-one correspondence between labels and equipment identifiers, and generating the workshop three-dimensional labeling model;
step 2, fixing a plurality of cameras in the physical workshop and collecting video images of the current state of the physical workshop through the cameras;
step 3, acquiring the workshop video image shot by the camera selected by the user;
step 4, identifying the equipment whose information needs to be displayed according to the user settings: acquiring the current frame image from the workshop video image and, from the positional correspondence between each proxy shape in the three-dimensional labeling model and the equipment in the current frame image together with the labels of the proxy shapes, identifying the equipment identifiers of the equipment to be identified and their imaging areas on the current frame image;
step 5, querying the workshop digital twin system with the equipment identifier to obtain the corresponding digital twin information and, at the same time, determining the information display area of the equipment from its imaging area on the current frame image;
step 6, superimposing the digital twin information of the equipment on the information display area of that equipment in the current frame image, thereby realizing AR display of the workshop equipment information.
In step 2, the camera group comprises a plurality of cameras installed in different areas, each camera comprising a lens, an image sensor, a pan-tilt head and an image sensor posture detection module; the pan-tilt head controls the orientation of the image sensor, and the image sensor posture detection module detects the posture of the image sensor. In step 3, the position and the posture of the image sensor of the currently selected camera are also acquired. In step 4, the position and posture of the virtual image sensor of the virtual camera corresponding to the currently selected camera are first set from the position and posture of the image sensor and the conversion relationship between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, so that the virtual imaging model in the workshop three-dimensional labeling model is consistent with the imaging model of the physical workshop, and a virtual composite image of the workshop three-dimensional labeling model is synthesized according to the camera imaging model; then, from the consistency of the imaging models, imaging positions and imaging postures of the virtual composite image and the current frame image, the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image is determined, the labels of the equipment to be identified (as set by the user) are read from the virtual composite image, and the equipment identifiers and imaging areas of the equipment in the current frame image are determined from those labels.
In this embodiment, the labels are color labels: the proxy shapes corresponding to different pieces of equipment are rendered in different colors, and a one-to-one mapping between colors and equipment identifiers is established. Using the positional correspondence between the proxy shapes in the virtual composite image and the equipment in the current frame image, the color value of the relevant pixel is read, the equipment identifier of the corresponding equipment in the current frame image is determined from that color value, and the imaging area of the equipment in the current frame image is determined from the extent of the pixels having that color value. For example, by taking the maximum and minimum abscissa and ordinate over all pixels whose color is the color of equipment A, a rectangular region can be defined and taken as an estimate of the imaging area of equipment A.
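A NumPy sketch of this bounding-rectangle estimate, under the same assumption of flat, unshaded label colors in the virtual composite image:

```python
import numpy as np

def bounding_rectangle(composite, label_color):
    """Min/max abscissa and ordinate over all pixels carrying the equipment's label color.

    composite:   H x W x 3 array of flat label colors (the virtual composite image)
    label_color: (R, G, B) tuple assigned to the equipment's proxy shape
    Returns (x_min, y_min, x_max, y_max), or None if the equipment is not visible.
    """
    mask = np.all(composite == np.asarray(label_color), axis=-1)
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return int(cols.min()), int(rows.min()), int(cols.max()), int(rows.max())
```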
In this embodiment, a workshop information display setting module and an AR information display interface library are further employed. The AR information display interface library defines various types of display interfaces, and the workshop information display setting module is used to set, for each piece of equipment, the display interface type of its parameters and/or digital twin information and the information or parameters that the interface displays. In step 5, after the digital twin information and the information display area of the equipment have been obtained, the settings for that equipment in the workshop information display setting module are read to obtain the display interface type and the information or parameters to be displayed. In step 6, a display interface of the corresponding type is obtained from the AR information display interface library according to the obtained display interface type, the display interface is superimposed on the information display area of the equipment in the current frame image, and the equipment information or parameters to be displayed are shown through that interface.
The equipment whose information needs to be displayed is identified according to the user settings, specifically: if the setting is to display the information of all equipment, all equipment in the current frame image is identified in step 4;
or, if the setting is to display the information of the equipment pointed to by the mouse, then in step 4 the position of the mouse on the current frame image is read, the corresponding position in the three-dimensional labeling model is determined according to the conversion relationship between the physical workshop coordinate system and the workshop three-dimensional labeling model coordinate system, thereby the label of the proxy shape at that position is determined, and from that label the equipment identifier of the equipment pointed to by the mouse in the current frame image and its imaging area on the current frame image are identified.
Referring to FIG. 3, a procedure is given for determining the currently selected equipment by moving the mouse and outputting the digital twin information of that equipment.
Preparation stage:
step 10, constructing a virtual three-dimensional model of the physical workshop, in which the virtual three-dimensional models of the equipment are built from proxy shapes, each proxy shape representing the spatial shape and position of a piece of equipment in the physical workshop; each proxy shape is then color-labeled, the proxy shapes of different pieces of equipment are rendered in different colors, a one-to-one mapping between colors and equipment identifiers is established, and the workshop three-dimensional labeling model is generated; the virtual three-dimensional model of the workshop also contains virtual cameras corresponding to the spatial positions of the cameras in the physical workshop; the workshop three-dimensional labeling model coordinate system is unified with the physical workshop coordinate system, so that the equipment coordinates of the physical workshop coincide with the coordinates of the corresponding proxy shapes in the workshop three-dimensional labeling model and the coordinates of each camera of the physical workshop coincide with the coordinates of the corresponding virtual camera in the workshop three-dimensional labeling model;
step 20, defining the AR information display interface library, which comprises a dashboard display interface control, a nixie tube display interface control and a virtual oscilloscope display interface control;
step 30, defining the workshop information display setting module, which sets, for each piece of equipment, the display interface type of its parameters and/or digital twin information and the information or parameters displayed by the interface, for example the parameters of an interface control such as its display range and resolution;
Operation cycle stage:
step 40, collecting the current frame image of the workshop video image shot by the camera selected by the user, together with the position P(x, y, z) and the posture Q(α, θ) of the image sensor of the currently selected camera;
step 50, reading the position P(x, y, z) and the posture Q(α, θ) of the image sensor of the currently selected camera, setting the position of the virtual image sensor of the virtual camera to P(x, y, z) and its posture to Q(α, θ), so that the virtual imaging model in the workshop three-dimensional labeling model is consistent with the imaging model of the physical workshop, and synthesizing a virtual composite image of the workshop three-dimensional labeling model according to the camera imaging model, in which different colors correspond to different pieces of equipment; reading the coordinates (m, n) of the mouse on the current frame image, reading the color value of the pixel at coordinates (m, n) in the virtual composite image, and identifying, from the one-to-one mapping between colors and equipment, the equipment identifier corresponding to that pixel in the virtual composite image, which is the equipment pointed to by the mouse on the current frame image; at the same time, counting the area covered by pixels of that color value to obtain the imaging area of the equipment;
step 60, querying the workshop digital twin system with the equipment identifier to obtain the corresponding digital twin information; reading the settings for that equipment in the workshop information display setting module to obtain the display interface type and the information or parameters to be displayed;
step 70, obtaining a display interface of the corresponding type from the AR information display interface library according to the obtained display interface type, determining the information display area of the equipment from its imaging area, and superimposing the digital twin information of the equipment on the information display area of that equipment in the current frame image, thereby realizing AR display of the workshop equipment information;
step 80, judging whether the program should end; if not, returning to step 40; if so, exiting the program. A condensed code sketch of this operation cycle is given below.
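The following Python sketch condenses steps 40 to 80 into one loop, under the same illustrative color-label assumptions as the earlier sketches. Every argument (`grab_frame_and_pose`, `render_composite`, `get_mouse_pos`, `color_to_id`, `twin_client`, `display_plan`, `render_interface`) is a hypothetical placeholder standing in for a module described above, and exiting on a key press stands in for the end-of-program judgment of step 80.

```python
import cv2
import numpy as np

def run_operation_cycle(grab_frame_and_pose, render_composite, get_mouse_pos,
                        color_to_id, twin_client, display_plan, render_interface):
    """Condensed sketch of steps 40-80; all arguments are placeholder callables/objects."""
    while True:
        frame, position, posture = grab_frame_and_pose()          # step 40: frame + P(x,y,z), Q(α,θ)
        composite = render_composite(position, posture)           # step 50: virtual composite image
        m, n = get_mouse_pos()                                     # mouse coords (column m, row n)
        color = tuple(int(c) for c in composite[n, m])
        device_id = color_to_id.get(color)                         # equipment pointed to by the mouse
        if device_id is not None:
            ys, xs = np.nonzero(np.all(composite == np.asarray(color), axis=-1))
            area = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))  # imaging area
            twin_info = twin_client.query(device_id)               # step 60: digital twin query
            interface_type, values = display_plan(device_id, twin_info)  # step 60: display settings
            frame = render_interface(frame, interface_type, area, values)  # step 70: AR overlay
        cv2.imshow("workshop AR", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):                      # step 80: exit condition
            break
    cv2.destroyAllWindows()
```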
The above description is only one embodiment of the present invention and is not intended to limit the scope of the present invention; all equivalent structural or process modifications made using the contents of the present specification and drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of protection of the present invention.