Disclosure of Invention
In view of the above problems, the present invention provides an interactive welding environment modeling system and method, which can effectively promote improvements in industrial production quality and efficiency.
To this end, a first aspect of the present invention provides an interactive welding environment modeling system. The system includes welding equipment comprising a welding platform for placing a welding object, a welding head for welding the welding object on the welding platform, and a numerically controlled mechanical arm for holding the welding head and controlling its movement. The welding environment modeling system further includes structured light scanning devices for scanning and modeling a modeling space, fixedly arranged at one side of the welding platform with at least two of their scanning directions being non-parallel, an image acquisition device for acquiring a physical image of the modeling space, and a display device for displaying a dynamic model of the modeling space, each structured light scanning device including a structured light emitting unit and a structured light receiving unit. The welding environment modeling system further includes a control device for controlling the welding equipment, the structured light scanning devices, the image acquisition device, and the display device, the control device being configured to:
construct a modeling space within a structured light scanning range, wherein the structured light scanning range is the union of the scanning ranges covered by two or more structured light scanning devices, the modeling space falls within the intersection of the scanning ranges of the two or more structured light scanning devices, and the modeling space covers the welding platform of the welding equipment;
control the structured light scanning devices to scan the modeling space under a static condition to generate a static model of the modeling space, the static condition being that the objects in the modeling space are stationary;
in a working state, control the structured light scanning devices to scan the modeling space periodically at a preset high-frequency scanning period to obtain structured light periodic scan images of the modeling space, the structured light periodic scan images comprising, for each scanning period, the structured light scan images obtained by the synchronous scanning of the structured light scanning devices;
determine, from the structured light periodic scan images, whether an object in the modeling space has moved or changed;
when an object in the modeling space has moved or changed, generate a dynamic model of the modeling space based on the structured light periodic scan images;
perform object recognition on the static model and the dynamic model to obtain the types and number of objects in the static model and the dynamic model;
when the types and number of objects in the dynamic model are inconsistent with those in the static model, determine the pose of each newly added object model in the dynamic model; and
execute a corresponding interaction operation according to the pose of each newly added object model in the dynamic model.
A second aspect of the present invention provides an interactive welding environment modeling method, comprising:
constructing a modeling space within a structured light scanning range, wherein the structured light scanning range is the union of the scanning ranges covered by two or more structured light scanning devices, the modeling space falls within the intersection of the scanning ranges of the two or more structured light scanning devices, and the modeling space covers the welding platform of the welding equipment (an illustrative sketch of this construction follows this list);
controlling the structured light scanning devices to scan the modeling space under a static condition to generate a static model of the modeling space, the static condition being that the objects in the modeling space are stationary;
in a working state, controlling the structured light scanning devices to scan the modeling space periodically at a preset high-frequency scanning period to obtain structured light periodic scan images of the modeling space, the structured light periodic scan images comprising, for each scanning period, the structured light scan images obtained by the synchronous scanning of the structured light scanning devices;
determining, from the structured light periodic scan images, whether an object in the modeling space has moved or changed;
when an object in the modeling space has moved or changed, generating a dynamic model of the modeling space based on the structured light periodic scan images;
performing object recognition on the static model and the dynamic model to obtain the types and number of objects in the static model and the dynamic model;
when the types and number of objects in the dynamic model are inconsistent with those in the static model, determining the pose of each newly added object model in the dynamic model; and
executing a corresponding interaction operation according to the pose of each newly added object model in the dynamic model.
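The following is a minimal sketch of the modeling-space construction described in the first step above, under the assumption that each structured light scanning range can be approximated by an axis-aligned box; the intersection of the boxes is taken as the modeling space and checked against the welding platform. All names and dimensions are illustrative, not part of the claimed method.

```python
# Minimal sketch: modeling space as the intersection of scanner ranges (assumed AABBs).
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned box given by its minimum and maximum corners (x, y, z)."""
    mins: tuple
    maxs: tuple

    def intersect(self, other):
        mins = tuple(max(a, b) for a, b in zip(self.mins, other.mins))
        maxs = tuple(min(a, b) for a, b in zip(self.maxs, other.maxs))
        if any(lo >= hi for lo, hi in zip(mins, maxs)):
            return None                      # the scanning ranges do not overlap
        return Box(mins, maxs)

    def contains(self, other):
        return (all(a <= b for a, b in zip(self.mins, other.mins))
                and all(a >= b for a, b in zip(self.maxs, other.maxs)))

def build_modeling_space(scanner_ranges, platform):
    """Intersect all scanner ranges and verify the result covers the welding platform."""
    space = scanner_ranges[0]
    for rng in scanner_ranges[1:]:
        space = space.intersect(rng)
        if space is None:
            raise ValueError("scanner ranges have no common region")
    if not space.contains(platform):
        raise ValueError("modeling space does not cover the welding platform")
    return space

# Example with two scanners placed on either side of a small welding platform.
scanners = [Box((-1.0, -0.5, 0.0), (0.8, 0.9, 1.2)),
            Box((-0.8, -0.6, 0.0), (1.0, 0.8, 1.1))]
platform = Box((-0.3, -0.2, 0.0), (0.3, 0.2, 0.05))
print(build_modeling_space(scanners, platform))   # Box(mins=(-0.8, -0.5, 0.0), maxs=(0.8, 0.8, 1.1))
```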
Preferably, in the above welding environment modeling method, the step of determining the pose of each newly added object model in the dynamic model specifically includes the following steps, illustrated by the sketch after this list:
extracting independent object models from the static model and the dynamic model respectively according to the object recognition results;
matching the positions and shapes of the object models in the static model with those of the object models in the dynamic model;
establishing a correspondence between the object models of the same object in the static model and the dynamic model, an object model of the same object being an object model generated in both the static model and the dynamic model from the same object in the real environment;
determining any object model in the dynamic model that has no correspondence with an object model in the static model as a newly added object model; and
acquiring the position and posture data of each newly added object model from the dynamic model.
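A minimal sketch of the newly-added-object determination listed above, assuming each recognised object is reduced to a type label, a centroid, a coarse shape descriptor, and a pose; the matching thresholds and data structures are assumptions for illustration only.

```python
# Minimal sketch: find dynamic-model objects with no counterpart in the static model.
from dataclasses import dataclass
import math

@dataclass
class ObjectModel:
    label: str          # object type from the recognition step
    centroid: tuple     # (x, y, z) in modeling-space coordinates
    size: float         # coarse shape descriptor, e.g. bounding-sphere radius
    pose: tuple         # position and posture data, e.g. (x, y, z, roll, pitch, yaw)

def same_object(a, b, pos_tol=0.02, size_tol=0.01):
    """Treat two object models as the same real object if type, position and shape agree."""
    return (a.label == b.label
            and math.dist(a.centroid, b.centroid) < pos_tol
            and abs(a.size - b.size) < size_tol)

def newly_added_objects(static_objects, dynamic_objects):
    """Return (object model, pose) for every dynamic-model object with no static match."""
    unmatched_static = list(static_objects)
    added = []
    for dyn in dynamic_objects:
        match = next((s for s in unmatched_static if same_object(s, dyn)), None)
        if match is not None:
            unmatched_static.remove(match)    # correspondence established
        else:
            added.append((dyn, dyn.pose))     # no counterpart in the static model
    return added
```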
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model specifically includes the following steps (a sketch follows this list):
reading pre-configured welding object information, the welding object information comprising a standard model of the welding object, a welding position, and welding parameters;
matching each newly added object model against the standard model of the welding object;
determining, from the matching result, whether the newly added object models contain a model of the welding object; and
when the newly added object models contain a model of the welding object, controlling the welding head to weld the welding object according to the pose of the model of the welding object, the welding position, and the welding parameters.
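A minimal sketch of the welding branch above, under assumed interfaces: the newly added model is compared with the pre-configured standard model of the welding object, and on a match the configured welding position is transformed by the object pose before the welding head is commanded. The matching criterion, the pose convention, and the `head.weld` call are placeholders, not an existing API.

```python
# Minimal sketch: match a new object against the welding-object standard model and weld it.
from dataclasses import dataclass
import math

@dataclass
class WeldingObjectInfo:
    standard_size: float    # coarse descriptor of the pre-configured standard model
    weld_point: tuple       # welding position in the standard model's own frame
    parameters: dict        # e.g. {"current_A": 120, "speed_mm_s": 6}

def is_welding_object(new_model_size, info, tol=0.01):
    """Matching reduced to a single shape-descriptor comparison for this sketch."""
    return abs(new_model_size - info.standard_size) < tol

def weld_point_in_space(pose, local_point):
    """Transform the configured weld point by the object pose (translation + yaw only)."""
    x, y, z, _roll, _pitch, yaw = pose
    px, py, pz = local_point
    return (x + px * math.cos(yaw) - py * math.sin(yaw),
            y + px * math.sin(yaw) + py * math.cos(yaw),
            z + pz)

def handle_new_object(new_model_size, pose, info, head):
    """If the new model is the welding object, weld at the transformed position."""
    if is_welding_object(new_model_size, info):
        target = weld_point_in_space(pose, info.weld_point)
        head.weld(target, **info.parameters)    # `head` is supplied by the caller
```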
Preferably, in the above welding environment modeling method, after the step of generating the dynamic model of the modeling space based on the structured light periodic scan images, the method further includes the following steps (a sketch follows this list):
acquiring a physical image of the modeling space through the image acquisition device;
reading material information of the objects in the dynamic model according to the object recognition results;
analyzing the physical image to obtain viewing angle information of the physical image, color information of the objects corresponding to the dynamic model, and light source information of the welding environment;
rendering the object models in the dynamic model based on the material information, the viewing angle information, the color information, and the light source information to generate a virtual image of the dynamic model;
displaying the virtual image on the display device of the welding equipment; and
updating the virtual image displayed on the display device at the high-frequency scanning period.
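A minimal sketch of the rendering step, assuming a simple Lambertian shading model: each face colour of an object model is computed from the recognised material and colour information and the light-source direction extracted from the physical image, and the result is refreshed once per scanning period. A production system would use a full renderer; everything here is illustrative.

```python
# Minimal sketch: Lambertian shading of object-model faces for the virtual image.
import numpy as np

def shade_faces(face_normals,        # (N, 3) unit normals of the object-model faces
                base_color,          # (3,) RGB in [0, 1] taken from the physical image
                material_albedo,     # scalar reflectance from the material information
                light_dir,           # (3,) unit vector toward the light source
                ambient=0.2):
    """Return an (N, 3) array of face colours for the virtual image."""
    diffuse = np.clip(face_normals @ light_dir, 0.0, None)    # Lambert term per face
    intensity = ambient + material_albedo * diffuse           # (N,)
    return np.clip(intensity[:, None] * base_color, 0.0, 1.0)

# The displayed virtual image would then be refreshed once per high-frequency scan
# period, e.g.:
#   while working:
#       colours = shade_faces(normals, colour, albedo, light)
#       display.show(project(vertices, view_angle), colours)   # hypothetical calls
```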
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model specifically includes the following steps (a sketch follows this list):
reading pre-configured interactive object information, the interactive object information comprising a standard model of the interactive object, motion gestures, and interaction instructions associated with the motion gestures;
matching each newly added object model against the standard model of the interactive object;
determining, from the matching result, whether the newly added object models contain a model of the interactive object; and
when the newly added object models contain a model of the interactive object, controlling the welding equipment to enter an interaction state.
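A minimal sketch of the interactive-object branch, under assumed data structures: the pre-configured information binds motion gestures to interaction instructions, and the welding equipment enters the interaction state once a newly added model matches the interactive object's standard model. The gesture table, the `matches` predicate, and the `equipment` calls are hypothetical.

```python
# Minimal sketch: enter the interaction state and map motion gestures to instructions.
GESTURE_INSTRUCTIONS = {          # motion gesture -> associated interaction instruction
    "pinch":    "select_welding_position",
    "spread":   "enlarge_welding_area",
    "swipe_up": "predict_welding_result",
    "drag":     "modify_welding_parameters",
}

def on_new_object(new_model, interactive_standard_model, matches, equipment):
    """Enter the interaction state when the new model contains the interactive object."""
    if matches(new_model, interactive_standard_model):   # e.g. an operator's hand
        equipment.enter_interaction_state()              # caller-supplied control call

def on_gesture(gesture, equipment):
    """Execute the interaction instruction associated with a recognised motion gesture."""
    instruction = GESTURE_INSTRUCTIONS.get(gesture)
    if instruction is not None:
        equipment.execute(instruction)                   # caller-supplied control call
```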
Preferably, in the above welding environment modeling method, the modeling space includes an interaction area and a welding area, the interaction area being an area of the modeling space that is away from the welding platform and the welding head and is used for human-machine interaction, and the welding area being an area of the modeling space that covers the welding platform and the welding head and is used for welding the welding object. The step of controlling the welding equipment to enter the interaction state specifically includes the following steps (a sketch follows this list):
displaying the physical image on the display device of the welding equipment;
acquiring the size of the interaction area in the physical image displayed on the display device;
scaling the welding area in the virtual image to match the size of the interaction area in the physical image; and
displaying the welding area of the virtual image on the display device at the position corresponding to the interaction area of the physical image.
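A minimal sketch of the overlay step above: the welding area of the virtual image is scaled so that it fits the interaction area as it appears in the displayed physical image, and is then centred at the corresponding position. Rectangles are assumed to be (x, y, width, height) in display pixels.

```python
# Minimal sketch: scale the virtual welding area to the displayed interaction area.
def fit_weld_area_to_interaction_area(weld_area, interaction_area):
    """Return the display rectangle for the welding area and the scale factor used."""
    wx, wy, ww, wh = weld_area              # welding area of the virtual image
    ix, iy, iw, ih = interaction_area       # interaction area in the physical image
    scale = min(iw / ww, ih / wh)           # preserve the aspect ratio
    sw, sh = ww * scale, wh * scale
    sx = ix + (iw - sw) / 2                 # centre the scaled welding area
    sy = iy + (ih - sh) / 2
    return (sx, sy, sw, sh), scale

# Usage with a hypothetical display object:
#   rect, scale = fit_weld_area_to_interaction_area(weld_rect, interaction_rect)
#   display.draw(virtual_weld_area_image, at=rect)
```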
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model further includes the following steps (a sketch follows this list):
acquiring a first pose of the interactive object within the interaction area of the physical image;
acquiring a second pose of the welding object within the welding area of the virtual image;
after the welding area of the virtual image has been displayed on the display device at the position corresponding to the interaction area of the physical image, acquiring the positional relationship between the first pose and the second pose within the interaction area of the physical image; and
identifying the interaction operation corresponding to the first pose of the interactive object according to the positional relationship between the first pose of the interactive object and the second pose of the welding object within the interaction area of the physical image.
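A minimal sketch of the recognition step above, reducing the first and second poses to 2D positions within the displayed interaction area; the distance-based mapping rules are pure illustration and would in practice be replaced by the configured gesture/instruction associations.

```python
# Minimal sketch: derive an interaction operation from the first/second pose relationship.
import math

def classify_interaction(hand_xy, hand_spread, weld_xy, weld_radius):
    """hand_xy and weld_xy are 2D positions in the displayed interaction area;
    hand_spread is a rough measure of how open the hand is."""
    distance = math.dist(hand_xy, weld_xy)
    if distance < weld_radius and hand_spread < 0.03:
        return "select_welding_position"      # fingertip placed on the displayed seam
    if distance < weld_radius:
        return "enlarge_welding_area"         # open hand over the displayed welding area
    if distance < 2 * weld_radius:
        return "predict_welding_result"       # hovering near the displayed welding object
    return None                               # no interaction operation recognised
```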
Preferably, in the above welding environment modeling method, after the step of identifying the interaction operation corresponding to the first pose of the interactive object according to the positional relationship between the first pose of the interactive object and the second pose of the welding object within the interaction area of the physical image, the method further includes the following steps (a sketch follows this list):
when the interaction operation is identified as predicting a welding result, inputting the current welding parameters and welding state into a pre-trained welding result prediction model to predict the welding result;
generating, from the predicted welding result, a welding result model of the welding object as it will be after welding is completed;
replacing the welding object model in the dynamic model with the welding result model;
generating a virtual image of the dynamic model that includes the welding result model; and
displaying the welding area of the virtual image on the display device at the position corresponding to the interaction area of the physical image.
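A minimal sketch of the prediction branch, under assumptions about the feature layout and the prediction model's interface (a `predict` method on a pre-trained regressor); the geometry and display steps are supplied by the caller, so none of the names below belong to an existing API.

```python
# Minimal sketch: predict the welding result and show it in place of the welding object.
import numpy as np

def predict_welding_result(params, state, model):
    """Assemble a feature vector from welding parameters/state and run the trained model."""
    features = np.array([[params["current_A"], params["voltage_V"], params["speed_mm_s"],
                          state["seam_gap_mm"], state["plate_temp_C"]]], dtype=float)
    bead_width, bead_height, penetration = model.predict(features)[0]
    return {"width_mm": bead_width, "height_mm": bead_height, "penetration_mm": penetration}

def show_predicted_result(params, state, model, build_result_model, dynamic_model, display):
    """Replace the welding object model with the predicted welding result model."""
    prediction = predict_welding_result(params, state, model)
    result_model = build_result_model(prediction)            # caller-supplied geometry step
    dynamic_model.replace("welding_object", result_model)    # caller-supplied model update
    display.show_weld_area(dynamic_model)                    # refresh the virtual image
```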
Preferably, in the above welding environment modeling method, after the step of identifying the interaction operation corresponding to the first pose of the interactive object according to the positional relationship between the first pose of the interactive object and the second pose of the welding object within the interaction area of the physical image, the method further includes the following steps (a sketch follows this list):
when the interaction operation is identified as modifying welding parameters, configuring a predicted portion of the welding result model to be modifiable, the predicted portion being the portion of the welding result model that differs from the physical welding object in the physical image;
modifying the shape or position of the predicted portion according to the motion state of the interactive object within the interaction area of the physical image;
controlling the welding equipment to exit the interaction state according to an interaction operation or an external control instruction;
generating corresponding welding parameters from the modification result of the predicted portion; and
controlling the welding equipment to weld the welding object with the modified welding parameters.
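A minimal sketch of the parameter-modification branch: while the predicted portion is modifiable, the interactive object's motion stretches or moves the predicted weld bead, and on leaving the interaction state the edited geometry is mapped back to welding parameters. The proportional rules are assumptions, not taken from the source.

```python
# Minimal sketch: edit the predicted portion by hand motion and derive new parameters.
from dataclasses import dataclass

@dataclass
class PredictedBead:
    width_mm: float
    height_mm: float
    offset_mm: float            # position of the bead along the seam
    modifiable: bool = False    # set True when "modify welding parameters" is recognised

def apply_hand_motion(bead, drag_along_mm, drag_across_mm):
    """Drag along the seam moves the bead; drag across the seam changes its width."""
    if bead.modifiable:
        bead.offset_mm += drag_along_mm
        bead.width_mm = max(0.5, bead.width_mm + drag_across_mm)

def parameters_from_bead(bead, base_params):
    """Map the edited bead back to welding parameters (illustrative rules only)."""
    out = dict(base_params)
    out["speed_mm_s"] = base_params["speed_mm_s"] * base_params["bead_width_mm"] / bead.width_mm
    out["weld_start_offset_mm"] = bead.offset_mm
    return out
```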
The invention provides an interactive welding environment modeling system and method in which structured light scanning devices for scanning and modeling a modeling space, an image acquisition device for acquiring a physical image of the modeling space, a display device for displaying a dynamic model of the modeling space, and a control device are provided. The control device scans and models the modeling space through the structured light scanning devices to generate a corresponding static model when the welding equipment is in a non-working state, and scans and models the modeling space to generate a corresponding dynamic model when the welding equipment is in a working state. It then determines, from the differences between the dynamic model and the static model, whether a new object is present, and executes a corresponding interaction operation based on the pose of the newly added object model. In this way the welding situation can be monitored and the welding parameters modified in real time through interaction, which effectively promotes improvements in industrial production quality and efficiency.
Detailed Description
In order that the above objects, features, and advantages of the present application may be more clearly understood, the application is described in further detail below with reference to the accompanying drawings and the detailed description. It should be noted that, provided they do not conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced otherwise than as described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
In the description of the present invention, the term "plurality" means two or more unless explicitly defined otherwise. The orientations or positional relationships indicated by terms such as "upper" and "lower" are based on the orientations or positional relationships shown in the drawings; they are used merely for convenience and simplicity of description and do not indicate or imply that the apparatus or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and therefore should not be construed as limiting the present invention. The terms "coupled," "mounted," "secured," and the like are to be construed broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection, and may be direct or indirect through an intermediate medium. The specific meanings of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances. Furthermore, the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated; thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more such features.
In the description of this specification, the terms "one embodiment," "some embodiments," "particular embodiments," and the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
An interactive welding environment modeling system and method provided in accordance with some embodiments of the present invention are described below with reference to the accompanying drawings.
As shown in Fig. 1, a first aspect of the present invention provides an interactive welding environment modeling system. The system includes welding equipment comprising a welding platform for placing a welding object, a welding head for welding the welding object on the welding platform, and a numerically controlled mechanical arm for holding the welding head and controlling its movement. The welding environment modeling system further includes structured light scanning devices for scanning and modeling a modeling space, fixedly arranged at one side of the welding platform with at least two of their scanning directions being non-parallel, an image acquisition device for acquiring a physical image of the modeling space, and a display device for displaying a dynamic model of the modeling space, each structured light scanning device including a structured light emitting unit and a structured light receiving unit. The welding environment modeling system further includes a control device for controlling the welding equipment, the structured light scanning devices, the image acquisition device, and the display device, the control device being configured to:
construct a modeling space within a structured light scanning range, wherein the structured light scanning range is the union of the scanning ranges covered by two or more structured light scanning devices, the modeling space falls within the intersection of the scanning ranges of the two or more structured light scanning devices, and the modeling space covers the welding platform of the welding equipment;
control the structured light scanning devices to scan the modeling space under a static condition to generate a static model of the modeling space, the static condition being that the objects in the modeling space are stationary;
in a working state, control the structured light scanning devices to scan the modeling space periodically at a preset high-frequency scanning period to obtain structured light periodic scan images of the modeling space, the structured light periodic scan images comprising, for each scanning period, the structured light scan images obtained by the synchronous scanning of the structured light scanning devices;
determine, from the structured light periodic scan images, whether an object in the modeling space has moved or changed;
when an object in the modeling space has moved or changed, generate a dynamic model of the modeling space based on the structured light periodic scan images;
perform object recognition on the static model and the dynamic model to obtain the types and number of objects in the static model and the dynamic model;
when the types and number of objects in the dynamic model are inconsistent with those in the static model, determine the pose of each newly added object model in the dynamic model; and
execute a corresponding interaction operation according to the pose of each newly added object model in the dynamic model.
Preferably, in the above welding environment modeling system, the pose of a newly added object includes position and posture data of the newly added object, and specifically includes the shape of the newly added object and the coordinates, in the modeling space, of each local area of the newly added object.
Preferably, in the above welding environment modeling system, in the step of controlling the structured light scanning devices to scan the modeling space under the static condition to generate the static model of the modeling space, the control device is configured to perform the following operations (a sketch follows this list):
acquire the current coordinates of the welding head of the welding equipment;
determine, from the current coordinates of the welding head, whether the welding head is within the modeling space;
if not, control the numerically controlled mechanical arm of the welding equipment to move the welding head to a preset position within the modeling space;
control the two or more structured light scanning devices to scan the modeling space to obtain two or more structured light scan images of the modeling space;
analyze the structured light scan images to obtain shape and size parameters of the objects in the modeling space, the objects in the modeling space including the welding platform and the welding head of the welding equipment; and
generate a three-dimensional model of each object according to its shape and size parameters, the three-dimensional models of the objects in the modeling space together forming the static model.
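A minimal sketch of the static-model acquisition above, with the equipment interfaces passed in as parameters: the welding head is first brought into the modeling space so that it is captured by the static model, then every structured light scanning device is triggered and the resulting scans are fused into one static model. `arm`, `scanners`, and `fuse_scans` are caller-supplied placeholders.

```python
# Minimal sketch: pre-position the welding head, scan, and fuse the scans into a static model.
def point_in_box(point, box_min, box_max):
    """True if a 3D point lies inside the axis-aligned modeling space."""
    return all(lo <= c <= hi for c, lo, hi in zip(point, box_min, box_max))

def build_static_model(head_xyz, preset_xyz, space_min, space_max,
                       arm, scanners, fuse_scans):
    if not point_in_box(head_xyz, space_min, space_max):
        arm.move_to(preset_xyz)            # bring the welding head into the modeling space
    scans = [s.scan() for s in scanners]   # one synchronous scan per scanning device
    return fuse_scans(scans)               # shape/size parameters -> static model
```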
Preferably, in the above welding environment modeling system, in the step of determining from the structured light periodic scan images whether an object in the modeling space has moved or changed, the control device is configured to perform the following operations (a sketch follows this list):
acquire the first structured light scan images used to construct the static model;
acquire the second structured light scan images of the most recent scanning period among the structured light periodic scan images;
match each first structured light scan image against the second structured light scan image from the same scanning position, scan images from the same scanning position being those output by the same structured light scanning device; and
when any first structured light scan image is inconsistent with the second structured light scan image from the same scanning position, determine that an object in the modeling space has moved or changed.
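A minimal sketch of the movement/change test above: for each scanning position (that is, each structured light scanning device), the reference image used to build the static model is compared with the image from the most recent scanning period, and a significant per-pixel difference is taken to mean that an object in the modeling space has moved or changed. The thresholds are assumptions.

```python
# Minimal sketch: per-scanning-position comparison of reference and latest scan images.
import numpy as np

def scene_changed(reference_images, latest_images, pixel_tol=5, changed_ratio=0.002):
    """Both arguments map a scanner id to a 2D uint8 image from that scanning position."""
    for scanner_id, reference in reference_images.items():
        latest = latest_images[scanner_id]                  # image from the same position
        diff = np.abs(reference.astype(np.int16) - latest.astype(np.int16))
        if np.mean(diff > pixel_tol) > changed_ratio:       # fraction of changed pixels
            return True                                     # inconsistency at this position
    return False
```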
Preferably, in the above welding environment modeling system, in the step of determining the pose of each newly added object model in the dynamic model, the control device is configured to:
extract independent object models from the static model and the dynamic model respectively according to the object recognition results;
match the positions and shapes of the object models in the static model with those of the object models in the dynamic model;
establish a correspondence between the object models of the same object in the static model and the dynamic model, an object model of the same object being an object model generated in both the static model and the dynamic model from the same object in the real environment;
determine any object model in the dynamic model that has no correspondence with an object model in the static model as a newly added object model; and
acquire the position and posture data of each newly added object model from the dynamic model.
Preferably, in the above welding environment modeling system, in the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model, the control device is configured to:
read pre-configured welding object information, the welding object information comprising a standard model of the welding object, a welding position, and welding parameters;
match each newly added object model against the standard model of the welding object;
determine, from the matching result, whether the newly added object models contain a model of the welding object; and
when the newly added object models contain a model of the welding object, control the welding head to weld the welding object according to the pose of the model of the welding object, the welding position, and the welding parameters.
Preferably, in the above welding environment modeling system, after the step of generating the dynamic model of the modeling space based on the structured light periodic scan images, the control device is configured to:
acquire a physical image of the modeling space through the image acquisition device;
read material information of the objects in the dynamic model according to the object recognition results;
analyze the physical image to obtain viewing angle information of the physical image, color information of the objects corresponding to the dynamic model, and light source information of the welding environment;
render the object models in the dynamic model based on the material information, the viewing angle information, the color information, and the light source information to generate a virtual image of the dynamic model;
display the virtual image on the display device of the welding equipment; and
update the virtual image displayed on the display device at the high-frequency scanning period.
Preferably, in the above welding environment modeling system, after the step of constructing the modeling space within the structured light scanning range, the control device is configured to:
configure an interaction area and a welding area in the modeling space, the interaction area being an area of the modeling space that is away from the welding platform and the welding head and is used for human-machine interaction, and the welding area being an area of the modeling space that covers the welding platform and the welding head and is used for welding the welding object.
Preferably, in the above welding environment modeling system, the area away from the welding platform and the welding head is an area outside the maximum range of movement of the numerically controlled mechanical arm of the welding equipment.
Preferably, in the above welding environment modeling system, in the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model, the control device is configured to:
read pre-configured interactive object information, the interactive object information comprising a standard model of the interactive object, motion gestures, and interaction instructions associated with the motion gestures;
match each newly added object model against the standard model of the interactive object;
determine, from the matching result, whether the newly added object models contain a model of the interactive object; and
when the newly added object models contain a model of the interactive object, control the welding equipment to enter an interaction state.
Preferably, in the above welding environment modeling system, the interactive object includes a human hand, head, or whole body.
Preferably, in the above welding environment modeling system, in the step of controlling the welding equipment to enter the interaction state, the control device is configured to perform the following operations (a sketch follows this list):
determine whether the welding equipment is in a welding state;
when the welding equipment is in a welding state, determine whether the welding equipment meets a stop condition;
when the welding equipment meets the stop condition, control the welding equipment to stop welding; and
move the welding head of the welding equipment to a preset position in the modeling space.
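A minimal sketch of the entry sequence into the interaction state, under an assumed stop condition (for example, the end of the current seam segment) and with the equipment interfaces passed in as parameters; none of the calls below belong to an existing API.

```python
# Minimal sketch: safely stop welding and park the head before entering the interaction state.
import time

def enter_interaction_state(equipment, arm, preset_xyz, poll_s=0.05, timeout_s=5.0):
    if equipment.is_welding():
        deadline = time.monotonic() + timeout_s
        # wait until the equipment may safely stop, e.g. the current segment is finished
        while not equipment.stop_condition_met() and time.monotonic() < deadline:
            time.sleep(poll_s)
        equipment.stop_welding()
    arm.move_to(preset_xyz)               # park the welding head at the preset position
    equipment.set_state("interaction")
```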
Preferably, in the above welding environment modeling system, in the step of controlling the welding equipment to enter the interaction state, the control device is further configured to:
display the physical image on the display device of the welding equipment;
acquire the size of the interaction area in the physical image displayed on the display device;
scale the welding area in the virtual image to match the size of the interaction area in the physical image; and
display the welding area of the virtual image on the display device at the position corresponding to the interaction area of the physical image.
Preferably, in the above welding environment modeling system, in the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model, the control device is configured to:
acquire a first pose of the interactive object within the interaction area of the physical image;
acquire a second pose of the welding object within the welding area of the virtual image;
after the welding area of the virtual image has been displayed on the display device at the position corresponding to the interaction area of the physical image, acquire the positional relationship between the first pose and the second pose within the interaction area of the physical image; and
identify the interaction operation corresponding to the first pose of the interactive object according to the positional relationship between the first pose of the interactive object and the second pose of the welding object within the interaction area of the physical image.
Preferably, in the above welding environment modeling system, the interaction operation includes selecting a welding position, shrinking or enlarging the welding area of the virtual image within the interaction area of the physical image, predicting a welding result, or modifying welding parameters.
Preferably, in the above welding environment modeling system, after the step of identifying the interaction operation corresponding to the first pose of the interactive object according to the positional relationship between the first pose of the interactive object and the second pose of the welding object within the interaction area of the physical image, the control device is configured to:
when the interaction operation is identified as predicting a welding result, input the current welding parameters and welding state into a pre-trained welding result prediction model to predict the welding result;
generate, from the predicted welding result, a welding result model of the welding object as it will be after welding is completed;
replace the welding object model in the dynamic model with the welding result model;
generate a virtual image of the dynamic model that includes the welding result model; and
display the welding area of the virtual image on the display device at the position corresponding to the interaction area of the physical image.
Preferably, in the above welding environment modeling system, after the step of identifying the interaction operation corresponding to the first pose of the interactive object according to the positional relationship between the first pose of the interactive object and the second pose of the welding object within the interaction area of the physical image, the control device is configured to:
when the interaction operation is identified as modifying welding parameters, configure a predicted portion of the welding result model to be modifiable, the predicted portion being the portion of the welding result model that differs from the physical welding object in the physical image;
modify the shape or position of the predicted portion according to the motion state of the interactive object within the interaction area of the physical image;
control the welding equipment to exit the interaction state according to an interaction operation or an external control instruction;
generate corresponding welding parameters from the modification result of the predicted portion; and
control the welding equipment to weld the welding object with the modified welding parameters.
Preferably, in the above welding environment modeling system, the welding result model includes a solid portion and a predicted portion: the solid portion includes the main body of the welding object and the welding material or melted main-body material of the area of the welding object that has already been welded, and the predicted portion is the welding material or melted main-body material predicted from the current welding parameters and welding state.
As shown in Fig. 2, a second aspect of the present invention provides an interactive welding environment modeling method, including:
constructing a modeling space within a structured light scanning range, wherein the structured light scanning range is the union of the scanning ranges covered by two or more structured light scanning devices, the modeling space falls within the intersection of the scanning ranges of the two or more structured light scanning devices, and the modeling space covers the welding platform of the welding equipment;
controlling the structured light scanning devices to scan the modeling space under a static condition to generate a static model of the modeling space, the static condition being that the objects in the modeling space are stationary;
in a working state, controlling the structured light scanning devices to scan the modeling space periodically at a preset high-frequency scanning period to obtain structured light periodic scan images of the modeling space, the structured light periodic scan images comprising, for each scanning period, the structured light scan images obtained by the synchronous scanning of the structured light scanning devices;
determining, from the structured light periodic scan images, whether an object in the modeling space has moved or changed;
when an object in the modeling space has moved or changed, generating a dynamic model of the modeling space based on the structured light periodic scan images;
performing object recognition on the static model and the dynamic model to obtain the types and number of objects in the static model and the dynamic model;
when the types and number of objects in the dynamic model are inconsistent with those in the static model, determining the pose of each newly added object model in the dynamic model; and
executing a corresponding interaction operation according to the pose of each newly added object model in the dynamic model.
Preferably, in the above welding environment modeling method, the pose of a newly added object includes position and posture data of the newly added object, and specifically includes the shape of the newly added object and the coordinates, in the modeling space, of each local area of the newly added object.
Preferably, in the above welding environment modeling method, the step of controlling the structured light scanning devices to scan the modeling space under the static condition to generate the static model of the modeling space specifically includes:
acquiring the current coordinates of the welding head of the welding equipment;
determining, from the current coordinates of the welding head, whether the welding head is within the modeling space;
if not, controlling the numerically controlled mechanical arm of the welding equipment to move the welding head to a preset position within the modeling space;
controlling the two or more structured light scanning devices to scan the modeling space to obtain two or more structured light scan images of the modeling space;
analyzing the structured light scan images to obtain shape and size parameters of the objects in the modeling space, the objects in the modeling space including the welding platform and the welding head of the welding equipment; and
generating a three-dimensional model of each object according to its shape and size parameters, the three-dimensional models of the objects in the modeling space together forming the static model.
Preferably, in the above welding environment modeling method, the step of determining from the structured light periodic scan images whether an object in the modeling space has moved or changed specifically includes:
acquiring the first structured light scan images used to construct the static model;
acquiring the second structured light scan images of the most recent scanning period among the structured light periodic scan images;
matching each first structured light scan image against the second structured light scan image from the same scanning position, scan images from the same scanning position being those output by the same structured light scanning device; and
when any first structured light scan image is inconsistent with the second structured light scan image from the same scanning position, determining that an object in the modeling space has moved or changed.
Preferably, in the above welding environment modeling method, the step of determining the pose of each newly added object model in the dynamic model specifically includes:
extracting independent object models from the static model and the dynamic model respectively according to the object recognition results;
matching the positions and shapes of the object models in the static model with those of the object models in the dynamic model;
establishing a correspondence between the object models of the same object in the static model and the dynamic model, an object model of the same object being an object model generated in both the static model and the dynamic model from the same object in the real environment;
determining any object model in the dynamic model that has no correspondence with an object model in the static model as a newly added object model; and
acquiring the position and posture data of each newly added object model from the dynamic model.
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model specifically includes:
reading pre-configured welding object information, the welding object information comprising a standard model of the welding object, a welding position, and welding parameters;
matching each newly added object model against the standard model of the welding object;
determining, from the matching result, whether the newly added object models contain a model of the welding object; and
when the newly added object models contain a model of the welding object, controlling the welding head to weld the welding object according to the pose of the model of the welding object, the welding position, and the welding parameters.
Preferably, in the above welding environment modeling method, after the step of generating the dynamic model of the modeling space based on the structured light periodic scan images, the method further includes:
acquiring a physical image of the modeling space through the image acquisition device;
reading material information of the objects in the dynamic model according to the object recognition results;
analyzing the physical image to obtain viewing angle information of the physical image, color information of the objects corresponding to the dynamic model, and light source information of the welding environment;
rendering the object models in the dynamic model based on the material information, the viewing angle information, the color information, and the light source information to generate a virtual image of the dynamic model;
displaying the virtual image on the display device of the welding equipment; and
updating the virtual image displayed on the display device at the high-frequency scanning period.
Preferably, in the above welding environment modeling method, after the step of constructing the modeling space within the structured light scanning range, the method further includes:
configuring an interaction area and a welding area in the modeling space, the interaction area being an area of the modeling space that is away from the welding platform and the welding head and is used for human-machine interaction, and the welding area being an area of the modeling space that covers the welding platform and the welding head and is used for welding the welding object.
Preferably, in the above welding environment modeling method, the area away from the welding platform and the welding head is an area outside the maximum range of movement of the numerically controlled mechanical arm of the welding equipment.
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model specifically includes:
reading pre-configured interactive object information, the interactive object information comprising a standard model of the interactive object, motion gestures, and interaction instructions associated with the motion gestures;
matching each newly added object model against the standard model of the interactive object;
determining, from the matching result, whether the newly added object models contain a model of the interactive object; and
when the newly added object models contain a model of the interactive object, controlling the welding equipment to enter an interaction state.
Preferably, in the above welding environment modeling method, the interactive object includes a human hand, head, or whole body.
Preferably, in the above welding environment modeling method, the step of controlling the welding equipment to enter the interaction state includes:
determining whether the welding equipment is in a welding state;
when the welding equipment is in a welding state, determining whether the welding equipment meets a stop condition;
when the welding equipment meets the stop condition, controlling the welding equipment to stop welding; and
moving the welding head of the welding equipment to a preset position in the modeling space.
Preferably, in the above welding environment modeling method, the step of controlling the welding equipment to enter the interaction state further includes:
displaying the physical image on the display device of the welding equipment;
acquiring the size of the interaction area in the physical image displayed on the display device;
scaling the welding area in the virtual image to match the size of the interaction area in the physical image; and
displaying the welding area of the virtual image on the display device at the position corresponding to the interaction area of the physical image.
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model further includes:
acquiring a first pose of the interactive object within the interaction area of the physical image;
acquiring a second pose of the welding object within the welding area of the virtual image;
after the welding area of the virtual image has been displayed on the display device at the position corresponding to the interaction area of the physical image, acquiring the positional relationship between the first pose and the second pose within the interaction area of the physical image; and
identifying the interaction operation corresponding to the first pose of the interactive object according to the positional relationship between the first pose of the interactive object and the second pose of the welding object within the interaction area of the physical image.
Preferably, in the above welding environment modeling method, the interaction operation includes selecting a welding position, shrinking or enlarging the welding area of the virtual image within the interaction area of the physical image, predicting a welding result, or modifying welding parameters.
Preferably, in the above welding environment modeling method, after the step of identifying the interaction operation corresponding to the first pose of the interactive object according to the positional relationship between the first pose of the interactive object and the second pose of the welding object within the interaction area of the physical image, the method further includes:
when the interaction operation is identified as predicting a welding result, inputting the current welding parameters and welding state into a pre-trained welding result prediction model to predict the welding result;
generating, from the predicted welding result, a welding result model of the welding object as it will be after welding is completed;
replacing the welding object model in the dynamic model with the welding result model;
generating a virtual image of the dynamic model that includes the welding result model; and
displaying the welding area of the virtual image on the display device at the position corresponding to the interaction area of the physical image.
Preferably, in the above welding environment modeling method, after the step of identifying the interaction operation corresponding to the first pose of the interactive object according to the positional relationship between the first pose of the interactive object and the second pose of the welding object within the interaction area of the physical image, the method further includes:
when the interaction operation is identified as modifying welding parameters, configuring a predicted portion of the welding result model to be modifiable, the predicted portion being the portion of the welding result model that differs from the physical welding object in the physical image;
modifying the shape or position of the predicted portion according to the motion state of the interactive object within the interaction area of the physical image;
controlling the welding equipment to exit the interaction state according to an interaction operation or an external control instruction;
generating corresponding welding parameters from the modification result of the predicted portion; and
controlling the welding equipment to weld the welding object with the modified welding parameters.
Preferably, in the above welding environment modeling method, the welding result model includes a solid portion and a predicted portion: the solid portion includes the main body of the welding object and the welding material or melted main-body material of the area of the welding object that has already been welded, and the predicted portion is the welding material or melted main-body material predicted from the current welding parameters and welding state.
It should be noted that, in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another and do not necessarily require or imply any actual such relationship or order between those entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The embodiments of the present invention described above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and its practical application, thereby enabling others skilled in the art to best utilize the invention and the various modifications suited to the particular use contemplated. The scope of the invention is defined by the claims and their full range of equivalents.