Disclosure of Invention
The invention aims to provide an MR-based aviation cable auxiliary assembly method; a further aim is to provide an MR-based aviation cable auxiliary assembly system. The method and system help workers quickly and accurately understand assembly requirements in complex environments, improving assembly efficiency and reducing assembly errors.
To achieve the above aims, the invention provides an MR-based aviation cable auxiliary assembly method comprising the following steps:
S1: according to the current assembly task, importing the corresponding assembly process file from the computer end to the MR glasses via the network communication module;
S2: scanning the marker map in the assembly scene with the virtual-real positioning module at the MR glasses end, and realizing spatial positioning in the assembly scene by means of the marker map;
S3: according to the assembly guidance information at the MR glasses end, controlling the wearable device with gestures or voice through man-machine interaction and sending an operation instruction;
S4: after the MR glasses end receives the operation instruction, generating the assembly operation information of the corresponding step with the virtual-real display and virtual-real positioning modules, and presenting it to the operator through binocular stereo, a UI interface and virtual-real animation;
S5: the operator carries out the corresponding assembly operation according to the assembly prompt information at the MR glasses end;
S6: after the current assembly step is completed, repeating S3-S5 until all assembly tasks are completed.
Further, in S1, the paper assembly process is recorded into the MR glasses end through an XML file, and the guidance information in the process file comprises product information, process information and process step information.
Further, the product information mainly comprises a product name and a product number; the process information comprises the tool information, material information, process number and process name required to complete the process; the process step information comprises the guidance information of each step, including the step number, step name, text description, pictures and three-dimensional model of the step.
Further, in S2, auxiliary process information in the form of text, pictures and virtual-real animation is provided to the assembler, and the three-dimensional model of the component is projected onto the position to be assembled; the specific implementation steps include:
1) selecting a product to be assembled;
2) reading the assembly auxiliary file, and loading assembly auxiliary information: text auxiliary information, picture auxiliary information, model auxiliary information;
3) selecting a current process step;
4) displaying the text and picture information;
in the assembly scene, the world coordinate system, the camera coordinate system, the display screen coordinate system and the virtual camera coordinate system are unified through the marker map, so that spatial positioning is realized.
Further, the virtual-real display in S4 specifically includes:
1) establishing three-dimensional models of the assembled parts with three-dimensional modeling software and simplifying them so as to meet the model loading and rendering requirements of the AR glasses, and processing the occlusion relation between the augmented reality information and the real physical scene;
2) establishing a data file interface for the assembly element models, reading the three-dimensional coordinate information of each assembly element frame by frame from the data file, and driving the position of the corresponding three-dimensional model in the Unity3D graphics engine; scaling up the three-dimensional model in the virtual scene by a set proportion to meet viewing requirements;
3) determining the position and posture of the virtual viewpoint in the virtual scene according to the viewpoint pose information output by the AR glasses positioning module, and associating the position of the assembly element in the virtual scene with the position information of the actual scene so that the assembly element matches when superimposed;
4) generating the coordinates of the binocular viewpoints from the viewpoint position in the current virtual scene and its positional relation to the observed assembly part, and rendering and outputting an image of the virtual model for each of the two viewpoints;
5) displaying the model information and text description of the assembled parts through the animation of the assembly model and UI dialog box prompts.
An MR-based aviation cable auxiliary assembly system for implementing the method comprises:
a computer-end assembly auxiliary subsystem: its network communication module carries out network communication of configuration files and interactive instructions with the MR glasses;
an MR glasses end assembly auxiliary subsystem: used for virtual-real positioning, man-machine interaction and virtual-real display;
the virtual-real positioning locates the assembly scene through SLAM positioning and marker positioning; the man-machine interaction lets workers interact with the auxiliary assembly system through gesture, voice and gaze functions; the virtual-real display provides assembly operation guidance through binocular stereo, UI display and animation display functions, so that the three-dimensional model of the part to be assembled is superimposed in the real assembly scene and its assembly form and assembly position are visualized.
Further, the virtual-real positioning is used for scanning the marker map of the designated assembly operation with the MR glasses before the assembly operation, acquiring the coordinate positioning of the assembly operation scene; after the marker positioning, SLAM positioning continues through the depth camera of the MR glasses and the real-time scene, without further need for marker positioning.
Further, in the virtual-real display, binocular stereo realizes three-dimensional display of the parts to be assembled through the MR glasses; UI display provides a UI interface in the assembly scene that assists and prompts the worker; animation display uses three-dimensional animation in the real assembly scene to realize virtual-real linkage and intuitively show the worker the required assembly action.
The system further comprises a worker end and an assembly site end. The worker end executes the assembly task with the MR glasses according to the assembly operation sequence, and needs to import the assembly process file, scan the marker map, send assembly interaction instructions to the MR glasses end and carry out the assembly operation under the assembly guidance information of the MR glasses end. The assembly site end is the real scene of the assembly task; through the virtual-real display function of the MR glasses, the parts to be assembled are shown in the real assembly scene and the worker is guided intuitively through the assembly operation.
Further, the assembly process file at the worker end is the file processed by the computer-end assembly auxiliary subsystem; the interaction instructions comprise the voice, gesture and gaze actions involved in the worker's assembly operation; and the assembly guidance information is the information presented through the binocular stereo, UI display and animation display functions provided by the virtual-real display function at the MR glasses end.
Compared with the prior art, the MR-based aviation cable auxiliary assembly method and system disclosed by the invention have the following advantages.
(1) By combining wearable equipment with augmented reality technology, the method provides a new, convenient and intuitive approach to aircraft cable assembly, and effectively addresses the traditional operation mode's heavy dependence on worker experience and proficiency in the aircraft cable assembly process.
(2) The MR-based aviation cable auxiliary assembly method superimposes the virtual parts to be assembled in the real assembly scene and displays them through binocular stereo, UI interfaces and virtual-real animation, thereby visually providing assembly operation guidance to operators.
(3) The MR-based aviation cable auxiliary assembly method replaces the original operation mode based on traditional paper assembly process files and worker experience, and greatly improves the assembly efficiency and assembly quality of aircraft cables.
Detailed Description
The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
For ease of description, spatially relative terms, such as "upper," "lower," "left," "right," and the like, may be used herein to describe one element or feature's relationship to another element or feature as illustrated in the figures. It will be understood that the spatial terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as "below" other elements or features would then be oriented "above" the other elements or features. Thus, the exemplary term "lower" can encompass both an upper and a lower orientation. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
As shown in figs. 1 to 6, the invention provides an MR-based aviation cable auxiliary assembly method and system. The essence of the method is that the assembly process flow is implemented through MR glasses: augmented reality technology superimposes the three-dimensional model of the part to be assembled in the real assembly scene, displays it to the operator in binocular stereo and virtual-real animation, and shows the corresponding assembly information through a UI interface to help the operator complete the assembly task efficiently. Specifically, through the MR glasses and the virtual-real positioning function, the virtual model of the part to be assembled is superimposed onto the real assembly scene in its real assembly form, so that the worker perceives the assembly form, assembly position and the like of the part more intuitively.
The MR-based aviation cable auxiliary assembly system provided by the invention comprises: a computer-end assembly auxiliary subsystem, an MR glasses end assembly auxiliary subsystem, a worker end and an assembly site end.
The computer-end assembly auxiliary subsystem is used for configuration files and network communication. The configuration files provide the process file information, product process information and tool/material information required for assembly; the network communication handles the instructions of assembly site workers, the coordinate information of the operation scene, the interaction information of external systems and the like.
The MR glasses end assembly auxiliary subsystem is used for virtual-real positioning, man-machine interaction and virtual-real display. The virtual-real positioning locates the assembly scene in SLAM mode and marker mode: before the assembly operation, the MR glasses scan the marker map of the designated assembly operation to acquire the coordinate positioning of the assembly scene; after this marker positioning, markers are no longer needed, and SLAM positioning continues in real time through the depth camera of the MR glasses and the live scene. The man-machine interaction realizes interaction between the worker and the auxiliary assembly system through gestures, voice, gaze and the like. The virtual-real display provides assembly operation guidance through binocular stereo, UI display, animation display and the like: binocular stereo realizes three-dimensional display of the parts to be assembled through the MR glasses; UI display provides a UI interface in the assembly scene that assists and prompts the worker; animation display uses three-dimensional animation in the real assembly scene to realize virtual-real linkage and intuitively show the worker the required assembly action.
The worker end executes the assembly task with the MR glasses according to the assembly operation sequence: importing the assembly process file, scanning the marker map, sending assembly interaction instructions to the MR glasses end and carrying out the assembly operation under the assembly guidance information of the MR glasses end. The assembly process file is the file processed by the computer-end assembly auxiliary subsystem; the interaction instructions comprise the voice, gesture, gaze and other actions involved in the worker's assembly operation; the assembly guidance information is the information presented through the binocular stereo, UI display, animation display and other functions of the MR glasses' virtual-real display.
The assembly site end is the real scene of the assembly task. Through the virtual-real display function of the MR glasses, the parts to be assembled are shown in the real assembly scene, and the binocular stereo, UI interface and virtual-real animation functions guide the worker intuitively through the assembly operation.
The MR-based aviation cable auxiliary assembly method of the invention adopts the following technical scheme, comprising the steps of:
S1: an assembly process file is established in the computer-end assembly auxiliary subsystem according to the assembly task; an assembly process file in XML format is generated and sent to the MR glasses end assembly auxiliary subsystem through the network communication module;
S2: to match the process file to the assembly task, the worker imports the corresponding assembly process file with the MR glasses according to the assembly task, and assembly operation guidance begins;
S3: to determine the spatial position of the component to be assembled in the subsequent steps, the virtual-real positioning module at the MR glasses end scans the marker map in the assembly scene, and spatial positioning is realized in the assembly scene by means of the marker map;
S4: the worker performs the corresponding man-machine interaction, controlling the wearable device with gestures or voice according to the assembly guidance information at the MR glasses end and sending an operation instruction; the MR glasses end receives the worker's operation instruction, returns the currently required assembly information and assembly parts after instruction processing, generates the assembly operation information of the corresponding step with the virtual-real display and virtual-real positioning modules, and presents it to the operator through binocular stereo, a UI interface and virtual-real animation;
S5: the operator performs the assembly operation according to the assembly prompt information at the MR glasses end, the prompted assembly process information and the corresponding assembly part information;
S6: after the current assembly step is completed, S4-S5 are repeated until all assembly tasks are completed.
The specific implementation method is described in detail below with reference to fig. 1 to 6.
S1: since the method performs MR-based aviation cable auxiliary assembly, an aviation cable assembly process must first be established. The traditional paper assembly process is not fully suitable for a wearable-device auxiliary assembly system, so the paper assembly process is recorded into the MR aviation cable auxiliary assembly system through an XML (Extensible Markup Language) file in the computer-end assembly auxiliary subsystem (see fig. 1).
S1.1: the information contained in the conventional assembly process file is refined, classified and organized, and its representation model is built. Inspection shows that the information of greatest guiding significance to the assembly process mainly comprises the assembly auxiliary tools, the materials and the detailed descriptions of the assembly steps. These two parts of the assembly AO can be abstracted into three types of information, namely product information, process information and process step information, with a strict hierarchical relationship among them. FIG. 3 is an E-R diagram of the product information, process information and process step information.
As shown in fig. 3, the product information mainly includes a product name and a product number; the process information mainly comprises tool information, material information, process numbers and process names required for completing the process; the step information mainly includes guide information for the step, including step number, step name, text description information of the step, picture information of the step, and three-dimensional model information of the step.
Further, the relationship between the product information and the process information is a "one-to-many" relationship, that is, a product of one model includes a plurality of assembly processes. Similarly, the relationship between the process information and the process step information is also a "one-to-many" relationship, that is, one assembly process includes a plurality of assembly process steps.
For these characteristics of the assembly process information, the product information is abstractly represented with the attributes "productID" and "productName"; the process information with the attributes "processID", "processName", "Tools" and "Material"; and the process step information with the attributes "stepName", "textInf", "picInf" and "ARmodInf".
In view of the "one-to-many" inclusion relationship among product information, process information and process step information, the aircraft assembly process information is managed with an XML file.
S1.2: as shown in fig. 2, establishing the aircraft cable assembly process information can be divided into two main stages: in the first stage, the process planner fills in the assembly process information on a computer according to the aircraft cable assembly process; in the second stage, an XML-based assembly process information file is generated from the filled-in information. The resulting XML-format process file supports the MR glasses end assembly auxiliary subsystem; a generation sketch follows.
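As an illustration of this two-stage process, the following minimal sketch (Python, standard-library ElementTree) generates such an XML process file using the attribute names defined in S1.1; the element names and all field values are hypothetical examples, not taken from the specification.

```python
import xml.etree.ElementTree as ET

def build_process_file(path):
    # "One-to-many": one product holds many processes, one process many steps.
    product = ET.Element("Product", productID="P-001",
                         productName="AviationCableHarness")
    process = ET.SubElement(product, "Process",
                            processID="PR-01", processName="ConnectorAssembly",
                            Tools="crimping pliers", Material="cable W101")
    ET.SubElement(process, "Step",
                  stepName="Route cable to bracket",
                  textInf="Route cable W101 along the left bracket.",
                  picInf="step1.png",
                  ARmodInf="step1_model.fbx")
    ET.ElementTree(product).write(path, encoding="utf-8", xml_declaration=True)

build_process_file("assembly_process.xml")
```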
S2: to match the process file to the assembly task, the worker imports the corresponding assembly process file with the MR glasses according to the assembly task, and then assembly operation guidance is performed.
Visual assembly guidance is realized with the augmented reality technology of the MR glasses end assembly auxiliary subsystem: auxiliary process information in the form of text, pictures and virtual-real animation is provided to the assembly workers, and this information is obtained from the assembly auxiliary XML file generated in S1.
By projecting the three-dimensional model of the part onto the position to be assembled through augmented reality technology, intuitive and accurate assembly guidance can be provided to the worker in real time. The specific implementation steps include:
1) selecting the product to be assembled.
2) reading the assembly auxiliary file and loading the assembly auxiliary information (text auxiliary information, picture auxiliary information, model auxiliary information).
3) selecting the current process step.
4) displaying the text and picture information.
To determine the spatial position of the component to be assembled in the subsequent steps, the virtual-real positioning module at the MR glasses end scans the marker map in the assembly scene. As shown in FIG. 4, the world coordinate system, the camera coordinate system, the display screen coordinate system and the virtual camera coordinate system are unified through the marker map in the assembly scene, so that spatial positioning is realized.
Assume the world coordinate system is denoted W, the camera coordinate system C and the display screen coordinate system of the optical see-through head-mounted display S; the human eye and the screen together constitute a virtual camera whose coordinate system is denoted V. If the coordinates of any point P in space are Pw in the world coordinate system, Pc in the camera coordinate system and Ps on the screen, then equations (1) and (2) hold:
$$P_c = \begin{bmatrix} R_{wc} & T_{wc} \\ 0\ 0\ 0 & 1 \end{bmatrix} P_w \tag{1}$$

$$P_s = K\,[R_{cv} \mid T_{cv}]\,P_c = G\,P_c \tag{2}$$
where Pw and Pc are the homogeneous coordinates of the point's three-dimensional position in the world and camera coordinate systems respectively, and are therefore four-dimensional vectors; Ps is the homogeneous coordinate of the point's two-dimensional position in the screen coordinate system, and is therefore a three-dimensional vector; R is a 3×3 rotation matrix, T a 3×1 translation column vector, and K the 3×3 intrinsic matrix of the virtual camera formed by the human eye and the screen.
Let $P_s = [u\ v\ 1]^T$, $P_c = [x_c\ y_c\ z_c\ 1]^T$ and $G = \begin{bmatrix} g_{11} & g_{12} & g_{13} & g_{14} \\ g_{21} & g_{22} & g_{23} & g_{24} \\ g_{31} & g_{32} & g_{33} & g_{34} \end{bmatrix}$; then
$$u = \frac{g_{11}x_c + g_{12}y_c + g_{13}z_c + g_{14}}{g_{31}x_c + g_{32}y_c + g_{33}z_c + g_{34}}, \qquad
v = \frac{g_{21}x_c + g_{22}y_c + g_{23}z_c + g_{24}}{g_{31}x_c + g_{32}y_c + g_{33}z_c + g_{34}}.$$
The matrix G is thus the mapping from the camera coordinate system to the screen coordinate system.
The matrices F, K and M shown in FIG. 4 can be obtained in the same way, as illustrated by the numerical sketch below.
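For concreteness, the following minimal sketch evaluates equations (1) and (2) numerically. The extrinsics [R_wc | T_wc] and [R_cv | T_cv] and the intrinsic matrix K are filled with assumed example values; none of these numbers come from the specification.

```python
import numpy as np

R_wc, T_wc = np.eye(3), np.array([0.0, 0.0, 0.5])   # world -> camera extrinsics
R_cv, T_cv = np.eye(3), np.zeros(3)                 # camera -> virtual camera
K = np.array([[800.0,   0.0, 640.0],                # intrinsics of the virtual
              [  0.0, 800.0, 360.0],                # camera (eye + screen)
              [  0.0,   0.0,   1.0]])

Pw = np.array([0.1, 0.2, 1.0, 1.0])                 # homogeneous world point

M = np.vstack([np.hstack([R_wc, T_wc[:, None]]), [0.0, 0.0, 0.0, 1.0]])
Pc = M @ Pw                                         # equation (1)

G = K @ np.hstack([R_cv, T_cv[:, None]])            # G = K [R_cv | T_cv]
Ps = G @ Pc                                         # equation (2)
u, v = Ps[0] / Ps[2], Ps[1] / Ps[2]                 # screen pixel coordinates
print(u, v)
```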
S3: the worker performs the corresponding man-machine interaction, controlling the wearable device with gestures or voice according to the assembly guidance information at the MR glasses end and sending an operation instruction;
S4: the MR glasses end receives the worker's operation instruction, returns the currently required assembly information and assembly parts after instruction processing, generates the assembly operation information of the corresponding step with the virtual-real display and virtual-real positioning modules, and presents it to the operator through binocular stereo, a UI interface and virtual-real animation;
S4.1: the worker performs the corresponding man-machine interaction and sends operation instructions. Three interaction modes are developed on the basis of the MR glasses' interaction capabilities: gaze, gesture and voice.
(1) Gaze
The MR glasses use head movement as the main control mode of gaze interaction; this form of user instruction input is termed "gaze input", and the main marker of the gaze function of the MR glasses is a pointer. The pointer is a reference object that helps the user accurately locate the gaze point, functioning like the cursor of a conventional computer. The pointer can take various forms; this system uses the default Cursor provided in the HoloToolkit, which appears as a light purple solid light spot in the normal state and changes to a blue-purple hollow aperture once an object is selected. A geometric sketch of pointer placement follows.
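The sketch below illustrates one way a gaze pointer can be placed: intersecting the head's forward ray with a plane (e.g. a UI panel). This is only a geometric illustration; the actual system would rely on the MR glasses' own raycasting, and all pose and plane values here are invented for the example.

```python
import numpy as np

def gaze_point(head_pos, gaze_dir, plane_point, plane_normal):
    """Return the intersection of the gaze ray with a plane, or None."""
    d = np.asarray(gaze_dir, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    denom = d @ n
    if abs(denom) < 1e-9:                 # ray parallel to the plane
        return None
    t = ((np.asarray(plane_point) - head_pos) @ n) / denom
    return None if t < 0 else head_pos + t * d

# Head at 1.6 m looking forward; UI panel 2 m ahead facing the user.
cursor = gaze_point(np.array([0.0, 1.6, 0.0]), [0.0, 0.0, 1.0],
                    np.array([0.0, 1.5, 2.0]), [0.0, 0.0, -1.0])
print(cursor)   # -> [0.0, 1.6, 2.0], where the pointer is drawn
```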
(2) Gesture
The interactive gestures recognized by the MR glasses fall into three main categories: the pinch gesture, the tap gesture and the bloom gesture. The pinch and tap gestures are mainly used in interactive cooperation with gaze, for placing and starting applications and triggering the response events of the various controls in the UI, thereby realizing the user's operation of the MR glasses; the bloom gesture is mainly used to exit the current application and return to the start menu of the MR glasses.
TABLE 1 gesture interaction of MR glasses
(3) Voice
The user speaks the corresponding voice information according to the prompt on the MR glasses; the MR glasses capture the audio signal in real time through the microphone, process and analyze it to complete semantic recognition, then call the code segment corresponding to the command and display the processing result to the user through the holographic image.
To realize the voice interaction function of the aircraft cable assembly auxiliary system, the corresponding keywords must be specified in advance as the trigger signals for command execution. Because the MR glasses had not yet released a Chinese speech recognition function at the time of the invention, English words are selected as the voice recognition keywords. Thirteen operation keywords are defined according to the actual requirements of the assembly site, as shown in Table 2; a minimal dispatch sketch follows.
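The following sketch shows the keyword-to-command dispatch this paragraph describes: a recognized keyword triggers its bound code segment. The keywords and handlers here are hypothetical stand-ins, not the 13 keywords actually defined in Table 2.

```python
def next_step():     print("advance to the next assembly step")
def previous_step(): print("return to the previous assembly step")
def show_model():    print("show the 3D model of the current part")

# Keyword -> handler map; recognizing a keyword triggers its code segment.
VOICE_COMMANDS = {"next": next_step, "back": previous_step, "model": show_model}

def on_speech_recognized(keyword: str) -> None:
    """Called by the speech recognizer with the recognized keyword."""
    handler = VOICE_COMMANDS.get(keyword.lower())
    if handler is not None:
        handler()

on_speech_recognized("next")
```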
S4.2: after the MR glasses end receives the operation instruction, the virtual-real display and virtual-real positioning module is utilized to generate assembly operation information in the corresponding step in the assembly operation, and the assembly operation information is displayed to an operator through binocular stereo, a UI interface and virtual-real animation;
the method mainly realizes the interaction and display of the assembly operation guidance information according to the network communication function and the virtual and real display function.
TABLE 2 Voice interaction keywords and their execution operations
(1) Network communication function
The MR glasses and the computer end communicate via Socket. Fig. 5 shows the main mechanism of Socket-based communication between the glasses and the computer: the network communication function transfers instruction information, position information, text/picture/video and other data between the glasses and the computer, improving the auxiliary assembly experience of the MR glasses. A minimal sketch follows.
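The sketch below shows one possible shape of this Socket exchange. The host, port, newline framing and JSON payload are all assumptions for illustration; the patent does not specify a wire protocol.

```python
import json
import socket

def send_instruction(host: str, port: int, payload: dict) -> bytes:
    """Send one instruction to the computer end and read its reply."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(json.dumps(payload).encode("utf-8") + b"\n")
        return sock.recv(4096)   # e.g. text/picture metadata for the step

# Example (requires a listening server): request guidance data for step 3.
# reply = send_instruction("192.168.1.10", 9000, {"cmd": "get_step", "step": 3})
```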
(2) Virtual-real display function
1) Establish three-dimensional models of the assembly parts with three-dimensional modeling software such as 3ds Max, appropriately simplify the models to meet the model loading and rendering requirements of the AR glasses, and process the occlusion relation between the augmented reality information and the real physical scene;
the AR virtual-real fusion display technology is also called occlusion processing technology, and in early augmented reality, a real scene appears as a background of a virtual object, so that no matter how an occlusion object in the real scene moves, the occlusion object appears behind the virtual object. With the continuous development of related technologies (such as graphic image technology, three-dimensional rendering technology, etc.), researchers have begun to solve the problem of virtual and real occlusion from various angles. Generally these methods are divided into depth-based computation and model-based reconstruction.
In the application scenario of this project, the virtual-real fusion display of AR must be realized during the real-time SLAM process. Because the scene is known, occlusion is processed with the model-reconstruction-based method, which yields the best virtual-real fusion effect. The specific flow is shown in FIG. 6.
The Unity3D engine is adopted as the AR virtual scene rendering tool, so the virtual-real occlusion processing is performed in the Unity3D scene editing tool; based on Unity3D's occlusion processing, most of the occlusion calculation and illumination consistency work can be avoided. The depth comparison underlying all occlusion processing is sketched below.
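The following sketch illustrates only the generic per-pixel depth comparison behind occlusion processing: a virtual pixel is drawn where the virtual surface is closer than the real scene, otherwise the real pixel shows through. The actual system performs this inside Unity3D; all image and depth data here are invented for the example.

```python
import numpy as np

h, w = 4, 4
real_rgb = np.zeros((h, w, 3))        # camera image of the real scene
real_depth = np.full((h, w), 2.0)     # depth of the real scene (metres)
virt_rgb = np.ones((h, w, 3))         # rendered image of the virtual part
virt_depth = np.full((h, w), 3.0)     # depth of the virtual part
virt_depth[:2, :] = 1.0               # upper half of the part is in front

mask = virt_depth < real_depth        # virtual pixel wins where it is closer
composite = np.where(mask[..., None], virt_rgb, real_rgb)
```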
2) Establish a data file interface for the assembly element models, read the three-dimensional coordinate information of each assembly element frame by frame from the data file, and drive the position of the corresponding three-dimensional model in the Unity3D graphics engine; scale up the three-dimensional model in the virtual scene appropriately to meet viewing requirements;
3) determine the position and posture of the virtual viewpoint in the virtual scene according to the viewpoint pose information output by the AR glasses positioning module, and associate the position of the assembly element in the virtual scene with the position information of the actual scene so that the assembly element matches when superimposed;
4) generate the coordinates of the binocular viewpoints from the viewpoint position in the current virtual scene and its positional relation to the observed assembly part, and render and output an image of the virtual model for each of the two viewpoints;
5) display the animation of the assembly model and the UI dialog box prompts, presenting the model information, text description and other information of the assembly parts. A sketch of the frame-driven animation and binocular viewpoints follows this list.
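As a sketch of steps 2) and 4): read assembly-element coordinates frame by frame from a data file, scale them for viewing, and derive left and right eye viewpoints from the head viewpoint. The file format, scale factor and interpupillary distance are illustrative assumptions; the real system drives Unity3D transforms and renders per eye instead of printing values.

```python
import numpy as np

SCALE = 1.5        # assumed enlargement proportion for comfortable viewing
HALF_IPD = 0.032   # assumed half interpupillary distance in metres

def binocular_viewpoints(head_pos, right_dir):
    """Offset the head viewpoint sideways to obtain the two eye positions."""
    right = np.asarray(right_dir, dtype=float)
    right /= np.linalg.norm(right)
    return head_pos - HALF_IPD * right, head_pos + HALF_IPD * right

def play_animation(coord_file):
    """Read one 'x y z' line per frame and drive the (scaled) model position."""
    head = np.array([0.0, 1.6, 0.0])          # example head viewpoint
    with open(coord_file) as f:
        for frame, line in enumerate(f):
            pos = np.array(line.split(), dtype=float) * SCALE
            left, right = binocular_viewpoints(head, [1.0, 0.0, 0.0])
            # In the real system this would update a Unity3D transform and
            # render one image per eye; here we just report the values.
            print(frame, pos, left, right)
```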
S5: the operator performs the assembly operation according to the assembly prompt information at the MR glasses end, the prompted assembly process information and the corresponding assembly part information;
S6: after the current assembly step is completed, S4-S5 are repeated until all assembly tasks are completed.
Through the above steps, the MR-based aviation cable auxiliary assembly method and system are realized; they can assist workers in assembling aircraft cables and improve the efficiency and quality of aircraft cable assembly.