
CN116352323B - Interactive welding environment modeling system and method - Google Patents

Interactive welding environment modeling system and method

Info

Publication number
CN116352323B
CN116352323B (application CN202310400500.3A)
Authority
CN
China
Prior art keywords
welding
model
interaction
image
pose
Prior art date
Legal status
Active
Application number
CN202310400500.3A
Other languages
Chinese (zh)
Other versions
CN116352323A (en)
Inventor
邱志国
朱广慧
尹相仕
王义浩
李康玲
Current Assignee
Shenzhen Chendong Smart Home Co ltd
Original Assignee
Shenzhen Chendong Smart Home Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Chendong Smart Home Co ltd
Priority to CN202310400500.3A
Publication of CN116352323A
Application granted
Publication of CN116352323B
Legal status: Active


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23KSOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K37/00Auxiliary devices or processes, not specially adapted for a procedure covered by only one of the other main groups of this subclass
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B23MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
    • B23KSOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
    • B23K31/00Processes relevant to this subclass, specially adapted for particular articles or purposes, but not covered by only one of the preceding main groups
    • B23K31/02Processes relevant to this subclass, specially adapted for particular articles or purposes, but not covered by only one of the preceding main groups relating to soldering or welding
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/1605Simulation of manipulator lay-out, design, modelling of manipulator
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/521Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Robotics (AREA)

Abstract

The invention provides an interactive welding environment modeling system and method. The system is provided with a structured light scanning device for scanning and modeling a modeling space, an image acquisition device for acquiring a physical image of the modeling space, a display device for displaying a dynamic model of the modeling space, and a control device. When the welding equipment is in a non-working state, the control device scans the modeling space with the structured light scanning device to generate a corresponding static model; when the welding equipment is in a working state, it scans the modeling space again to generate a corresponding dynamic model. The control device then judges whether a newly added object exists based on the difference between the dynamic model and the static model, and executes a corresponding interactive operation based on the pose of the newly added object model. The welding condition can thus be monitored and the welding parameters modified in real time through the interactive operation, which effectively promotes industrial production quality and efficiency.

Description

Interactive welding environment modeling system and method
Technical Field
The invention relates to the technical field of welding, in particular to an interactive welding environment modeling system and method.
Background
Welding virtual simulation is a technology that builds a virtual simulation model in order to simulate the welding environment, predict welding results and optimize the welding process. With the rapid development of chip technology and artificial intelligence, the processing capacity of high-performance graphics processing chips has greatly improved, and the introduction of artificial intelligence has brought new vitality to virtual simulation, so that welding virtual simulation is no longer limited by inefficient hand-drawn models and its modeling efficiency has greatly improved. However, welding virtual simulation is mainly applied to simulation and verification under laboratory conditions, and an improvement in modeling efficiency alone cannot effectively promote industrial production quality and efficiency.
Disclosure of Invention
In view of the above problems, the invention provides an interactive welding environment modeling system and method that can effectively promote industrial production quality and efficiency.
In view of this, a first aspect of the present invention proposes an interactive welding environment modeling system. The system includes a welding apparatus comprising a welding platform for placing a welding object, a welding head for welding the welding object on the welding platform, and a numerical control mechanical arm for fixing the welding head and controlling its movement. The welding environment modeling system further includes structured light scanning devices, fixedly arranged at one side of the welding platform with at least two non-parallel scanning directions, for scanning and modeling a modeling space; an image acquisition device for acquiring a physical image of the modeling space; and a display device for displaying a dynamic model of the modeling space. Each structured light scanning device includes a structured light emitting unit and a structured light receiving unit. The welding environment modeling system further includes a control device for controlling the welding apparatus, the structured light scanning devices, the image acquisition device and the display device, the control device being configured to:
Constructing a modeling space within a structured light scanning range, wherein the structured light scanning range is the union of the scanning ranges covered by two or more structured light scanning devices, the modeling space falls within the intersection of the scanning ranges of the two or more structured light scanning devices, and the modeling space covers the welding platform of the welding equipment;
Controlling the structured light scanning devices to scan the modeling space under a static condition to generate a static model of the modeling space, wherein the static condition is that the objects in the modeling space are at rest;
In a working state, controlling the structured light scanning devices to periodically scan the modeling space at a preset high-frequency scanning period to obtain structured light periodic scan images of the modeling space, wherein the structured light periodic scan images comprise the structured light scan images obtained by synchronous scanning of the structured light scanning devices in each scanning period;
Judging whether an object in the modeling space moves or changes according to the structured light periodic scan images;
When an object in the modeling space moves or changes, generating a dynamic model of the modeling space based on the structured light periodic scan images;
Performing object recognition on the static model and the dynamic model to obtain object types and numbers in the static model and the dynamic model;
When the types and the number of the objects in the dynamic model are inconsistent with those of the static model, determining the pose of each newly added object model in the dynamic model;
And executing corresponding interaction operation according to the pose of each newly added object in the dynamic model.
A second aspect of the present invention proposes an interactive welding environment modeling method, comprising:
Constructing a modeling space within a structured light scanning range, wherein the structured light scanning range is the union of the scanning ranges covered by two or more structured light scanning devices, the modeling space falls within the intersection of the scanning ranges of the two or more structured light scanning devices, and the modeling space covers the welding platform of the welding equipment;
Controlling the structured light scanning devices to scan the modeling space under a static condition to generate a static model of the modeling space, wherein the static condition is that the objects in the modeling space are at rest;
In a working state, controlling the structured light scanning devices to periodically scan the modeling space at a preset high-frequency scanning period to obtain structured light periodic scan images of the modeling space, wherein the structured light periodic scan images comprise the structured light scan images obtained by synchronous scanning of the structured light scanning devices in each scanning period;
Judging whether an object in the modeling space moves or changes according to the structured light periodic scan images;
When an object in the modeling space moves or changes, generating a dynamic model of the modeling space based on the structured light periodic scan images;
Performing object recognition on the static model and the dynamic model to obtain object types and numbers in the static model and the dynamic model;
When the types and the number of the objects in the dynamic model are inconsistent with those of the static model, determining the pose of each newly added object model in the dynamic model;
And executing corresponding interaction operation according to the pose of each newly added object in the dynamic model.
Preferably, in the above welding environment modeling method, the step of determining a pose of each newly added object model in the dynamic model specifically includes:
Respectively extracting independent object models from the static model and the dynamic model according to an object recognition result;
Matching the positions and the shapes of the object models in the static model and the dynamic model;
establishing a corresponding relation between the static model and an object model of the same object in the dynamic model, wherein the object model of the same object is an object model generated in the static model and the dynamic model based on the same object in a real environment;
Determining an object model which does not have a corresponding relation with the object model in the static model in the dynamic model as a newly added object model;
and acquiring the position and posture data of the newly added object model from the dynamic model.
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model specifically includes:
Reading pre-configured welding object information, wherein the welding object information comprises a standard model, a welding position and welding parameters of a welding object;
matching the newly added object model with the standard model of the welding object;
judging whether the newly added object model contains the model of the welding object according to the matching result;
And when the newly added object model comprises the model of the welding object, controlling the welding head to weld the welding object according to the pose of the model of the welding object, the welding position and the welding parameters.
Preferably, in the above welding environment modeling method, after the step of generating the dynamic model of the modeling space based on the structural light periodic scan image, the method further includes:
Acquiring a physical image of the modeling space through an image acquisition device;
reading material information of an object in the dynamic model according to an object identification result;
analyzing the physical image to obtain visual angle information of the physical image, color information of an object corresponding to the dynamic model and light source information of a welding environment;
Rendering an object model in the dynamic model based on the material information, the visual angle information, the color information and the light source information to generate a virtual image of the dynamic model;
Displaying the virtual image on a display device of the welding equipment;
Updating the virtual image displayed on the display device based on the high frequency scanning period.
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object in the dynamic model specifically includes:
Reading pre-configured interactive object information, wherein the interactive object information comprises a standard model of an interactive object, a motion gesture and an interactive instruction associated with the motion gesture;
Matching the newly added object model with the standard model of the interactive object;
Judging whether the newly added object model contains the model of the interaction object according to the matching result;
And when the newly added object model comprises the model of the interaction object, controlling the welding equipment to enter an interaction state.
Preferably, in the above welding environment modeling method, the modeling space includes an interaction area and a welding area, the interaction area is an area for performing man-machine interaction in the modeling space, which is far away from the welding platform and the welding head, the welding area is an area for welding a welding object, which covers the welding platform and the welding head, in the modeling space, and the step of controlling the welding device to enter the interaction state specifically includes:
Displaying the physical image on a display device of the welding equipment;
Acquiring the size of the interaction area in the physical image displayed on the display device;
Scaling the welding area in the virtual image to match the size of the interaction area in the physical image;
And displaying the welding area in the virtual image at a position corresponding to the interaction area in the physical image on the display device.
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object in the dynamic model further includes:
Acquiring a first pose of the interaction object within the interaction area in the physical image;
acquiring a second pose of the welding object within the welding area in the virtual image;
after the welding area of the virtual image has been displayed on the display device at the position corresponding to the interaction area in the physical image, acquiring the positional relationship between the first pose and the second pose within the interaction area in the physical image;
and identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object within the interaction area in the physical image.
Preferably, in the above welding environment modeling method, after the step of identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object in the interaction region in the physical image, the method further includes:
When the interactive operation is identified as a predicted welding result, inputting the current welding parameters and the welding state into a pre-trained welding result prediction model to predict the welding result;
generating a welding result model of the welding object after welding is completed according to the predicted welding result;
replacing the welding object model with the welding result model in the dynamic model;
Generating a virtual image of the dynamic model including the welding result model;
And displaying the welding area in the virtual image at a position corresponding to the interaction area in the physical image on the display device.
Preferably, in the above welding environment modeling method, after the step of identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object in the interaction region in the physical image, the method further includes:
When the interactive operation is identified as modifying welding parameters, configuring the predicted portion of the welding result model into a modifiable state, wherein the predicted portion is the part of the welding result model that differs from the physical welding object shown in the physical image;
modifying the shape or position of the predicted portion according to the motion state of the interactive object in the interactive region of the physical image;
Controlling the welding equipment to exit from the interaction state according to the interaction operation or an external control instruction;
generating corresponding welding parameters according to the modification result of the prediction part;
And controlling the welding equipment to weld the welding object with the modified welding parameters.
The invention provides an interactive welding environment modeling system and method. The system is provided with a structured light scanning device for scanning and modeling a modeling space, an image acquisition device for acquiring a physical image of the modeling space, a display device for displaying a dynamic model of the modeling space, and a control device. When the welding equipment is in a non-working state, the control device scans the modeling space with the structured light scanning device to generate a corresponding static model; when the welding equipment is in a working state, it scans the modeling space again to generate a corresponding dynamic model. The control device then judges whether a newly added object exists based on the difference between the dynamic model and the static model, and executes a corresponding interactive operation based on the pose of the newly added object model. The welding condition can thus be monitored and the welding parameters modified in real time through the interactive operation, which effectively promotes industrial production quality and efficiency.
Drawings
FIG. 1 is a schematic block diagram of an interactive welding environment modeling system provided in accordance with one embodiment of the present invention;
FIG. 2 is a schematic flow chart of an interactive welding environment modeling method provided by an embodiment of the invention.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more clearly understood, a more particular description of the application will be rendered by reference to the appended drawings and appended detailed description. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced otherwise than as described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
In the description of the present invention, the term "plurality" means two or more, unless explicitly defined otherwise, the orientation or positional relationship indicated by the terms "upper", "lower", etc. are based on the orientation or positional relationship shown in the drawings, merely for convenience of description of the present invention and to simplify the description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention. The terms "coupled," "mounted," "secured," and the like are to be construed broadly, and may be fixedly coupled, detachably coupled, or integrally connected, for example; can be directly connected or indirectly connected through an intermediate medium. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art according to the specific circumstances. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first", "a second", etc. may explicitly or implicitly include one or more such feature. In the description of the present invention, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the description of this specification, the terms "one embodiment," "some implementations," "particular embodiments," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
An interactive welding environment modeling system and method provided in accordance with some embodiments of the present invention are described below with reference to the accompanying drawings.
As shown in fig. 1, a first aspect of the present invention proposes an interactive welding environment modeling system. The system includes a welding apparatus comprising a welding platform for placing a welding object, a welding head for welding the welding object on the welding platform, and a numerical control mechanical arm for fixing the welding head and controlling its movement. The welding environment modeling system further includes structured light scanning devices, fixedly arranged at one side of the welding platform with at least two non-parallel scanning directions, for scanning and modeling a modeling space; an image acquisition device for acquiring a physical image of the modeling space; and a display device for displaying a dynamic model of the modeling space. Each structured light scanning device includes a structured light emitting unit and a structured light receiving unit. The welding environment modeling system further includes a control device for controlling the welding apparatus, the structured light scanning devices, the image acquisition device and the display device, the control device being configured to:
Constructing a modeling space within a structured light scanning range, wherein the structured light scanning range is the union of the scanning ranges covered by two or more structured light scanning devices, the modeling space falls within the intersection of the scanning ranges of the two or more structured light scanning devices, and the modeling space covers the welding platform of the welding equipment;
Controlling the structured light scanning devices to scan the modeling space under a static condition to generate a static model of the modeling space, wherein the static condition is that the objects in the modeling space are at rest;
In a working state, controlling the structured light scanning devices to periodically scan the modeling space at a preset high-frequency scanning period to obtain structured light periodic scan images of the modeling space, wherein the structured light periodic scan images comprise the structured light scan images obtained by synchronous scanning of the structured light scanning devices in each scanning period;
Judging whether an object in the modeling space moves or changes according to the structured light periodic scan images;
When an object in the modeling space moves or changes, generating a dynamic model of the modeling space based on the structured light periodic scan images;
Performing object recognition on the static model and the dynamic model to obtain object types and numbers in the static model and the dynamic model;
When the types and the number of the objects in the dynamic model are inconsistent with those of the static model, determining the pose of each newly added object model in the dynamic model;
And executing corresponding interaction operation according to the pose of each newly added object in the dynamic model.
Preferably, in the welding environment modeling system, the pose of the added object includes position and pose data of the added object, and specifically includes a shape of the added object and coordinates of each local area of the added object in the modeling space.
Preferably, in the welding environment modeling system described above, in the step of controlling the structured light scanning device to scan the modeling space under a static condition to generate a static model of the modeling space, the control device is configured to:
Acquiring the current coordinates of a welding head of the welding equipment;
Judging whether the welding head is in the modeling space according to the current coordinates of the welding head;
if not, controlling a numerical control mechanical arm of the welding equipment to move the welding head to a preset position in the modeling space;
controlling the two or more structured light scanning devices to scan the modeling space to obtain two or more structured light scanning images in the modeling space;
Analyzing the structured light scanning image to obtain shape and size parameters of an object in the modeling space, wherein the object in the modeling space comprises a welding platform and a welding head of the welding equipment;
generating a stereoscopic model of the object according to the shape and size parameters of the object in the modeling space, wherein the stereoscopic model of the object in the modeling space forms the static model.
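By way of illustration, the static-model pass described above can be summarized in the following Python sketch. It is a minimal outline under stated assumptions: the robot_arm and scanner interfaces, the preset head position and the reconstruct_mesh helper are names invented for the example and are not defined by the patent.

import numpy as np

PRESET_HEAD_POSITION = np.array([0.0, 0.0, 0.5])  # assumed parking pose for the welding head

def point_in_box(p, box_min, box_max):
    # True if point p lies inside the axis-aligned modeling-space box
    return bool(np.all(p >= box_min) and np.all(p <= box_max))

def build_static_model(robot_arm, scanners, space_min, space_max, reconstruct_mesh):
    """Park the welding head inside the modeling space if necessary, trigger every
    structured light scanning device once, and fuse the scans into per-object
    stereoscopic (3D) models; the resulting dict is the static model."""
    head_xyz = robot_arm.get_head_position()          # assumed accessor
    if not point_in_box(head_xyz, space_min, space_max):
        robot_arm.move_head_to(PRESET_HEAD_POSITION)  # assumed accessor
    scans = [s.scan() for s in scanners]              # one structured light image per device
    return reconstruct_mesh(scans)                    # assumed fusion step: {object_id: mesh}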
Preferably, in the welding environment modeling system described above, in the step of determining whether or not the object in the modeling space moves or changes by the structured light periodically scanning image, the control device is configured to:
Acquiring a first structured light scanning image for constructing the static model;
Acquiring a second structured light scanning image of the last scanning period in the structured light periodic scanning image;
matching the first structured light scanning image against the structured light scanning image of the same station in the second structured light scanning image, wherein structured light scanning images of the same station are those output by the same structured light scanning device;
and when the first structured light scanning image is inconsistent with the structured light scanning image of any station in the second structured light scanning image, determining that an object in the modeling space has moved or changed.
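The per-station comparison can be made concrete with a short sketch. It assumes each scan is held as a numpy depth or phase image keyed by the station (device) that produced it; the tolerance value is an assumption chosen for the example.

import numpy as np

def objects_changed(reference_scans, latest_scans, tol=1e-3):
    """Compare the scans used to build the static model with the newest periodic
    scans, station by station (same structured light scanning device), and report
    whether anything in the modeling space appears to have moved or changed."""
    for station_id, ref_img in reference_scans.items():
        cur_img = latest_scans.get(station_id)
        if cur_img is None or cur_img.shape != ref_img.shape:
            return True  # a station dropped out or changed resolution: treat as a change
        if np.mean(np.abs(cur_img.astype(float) - ref_img.astype(float))) > tol:
            return True  # images from the same station no longer agree
    return False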
Preferably, in the welding environment modeling system described above, in the step of determining a pose of each newly added object model in the dynamic model, the control device is configured to:
Respectively extracting independent object models from the static model and the dynamic model according to an object recognition result;
Matching the positions and the shapes of the object models in the static model and the dynamic model;
establishing a corresponding relation between the static model and an object model of the same object in the dynamic model, wherein the object model of the same object is an object model generated in the static model and the dynamic model based on the same object in a real environment;
Determining an object model which does not have a corresponding relation with the object model in the static model in the dynamic model as a newly added object model;
and acquiring the position and posture data of the newly added object model from the dynamic model.
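A minimal sketch of this correspondence-matching step is given below. The object dictionaries (cls, centroid and shape_sig keys) and the position tolerance are assumptions introduced for the example.

def find_new_objects(static_objects, dynamic_objects, pos_tol=0.01):
    """Match object models between the static and dynamic scenes by class, position
    and shape; any dynamic-model object without a counterpart in the static model
    is treated as a newly added object, and its pose is then read from the dynamic model."""
    unmatched_static = list(static_objects)
    new_objects = []
    for dyn in dynamic_objects:
        match = next(
            (st for st in unmatched_static
             if st["cls"] == dyn["cls"]
             and st["shape_sig"] == dyn["shape_sig"]
             and all(abs(a - b) <= pos_tol for a, b in zip(st["centroid"], dyn["centroid"]))),
            None,
        )
        if match is not None:
            unmatched_static.remove(match)  # correspondence established: same real object
        else:
            new_objects.append(dyn)         # no counterpart in the static model
    return new_objects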
Preferably, in the welding environment modeling system described above, in the step of performing a corresponding interaction operation according to a pose of each newly added object model in the dynamic model, the control device is configured to:
Reading pre-configured welding object information, wherein the welding object information comprises a standard model, a welding position and welding parameters of a welding object;
matching the newly added object model with the standard model of the welding object;
judging whether the newly added object model contains the model of the welding object according to the matching result;
And when the newly added object model comprises the model of the welding object, controlling the welding head to weld the welding object according to the pose of the model of the welding object, the welding position and the welding parameters.
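The welding-object branch of the interaction step might look as follows. The configuration keys, the size-based matching rule and the weld_head interface are assumptions chosen for the example, not the matching method required by the patent.

def handle_welding_object(new_objects, welding_config, weld_head, size_tol=0.005):
    """If a newly added object model matches the pre-configured standard model of
    the welding object (same class, similar bounding-box size), weld it at the
    configured position with the configured parameters."""
    for obj in new_objects:
        same_cls = obj["cls"] == welding_config["cls"]
        same_size = all(abs(a - b) <= size_tol
                        for a, b in zip(obj["bbox_size"], welding_config["bbox_size"]))
        if same_cls and same_size:
            # weld position = configured offset expressed at the object's current pose
            x, y, z = (c + o for c, o in zip(obj["centroid"], welding_config["weld_offset"]))
            weld_head.weld((x, y, z), welding_config["weld_parameters"])  # assumed actuator call
            return True
    return False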
Preferably, in the welding environment modeling system described above, after the step of generating a dynamic model of the modeling space based on the structural photoperiod scanned image, the control device is configured to:
Acquiring a physical image of the modeling space through an image acquisition device;
reading material information of an object in the dynamic model according to an object identification result;
analyzing the physical image to obtain visual angle information of the physical image, color information of an object corresponding to the dynamic model and light source information of a welding environment;
Rendering an object model in the dynamic model based on the material information, the visual angle information, the color information and the light source information to generate a virtual image of the dynamic model;
Displaying the virtual image on a display device of the welding equipment;
Updating the virtual image displayed on the display device based on the high frequency scanning period.
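A compressed sketch of this render-and-display loop is shown below. The renderer and material_db objects, and the layout of physical_image_info, are assumptions standing in for whatever rendering back end an implementation actually uses.

def render_virtual_image(dynamic_model, physical_image_info, material_db, renderer):
    """Combine the dynamic model's geometry with material info from object
    recognition and with the viewing-angle, colour and light-source info extracted
    from the physical image, then render the virtual image to be displayed."""
    view = physical_image_info["view_angle"]        # camera viewing direction
    lights = physical_image_info["light_sources"]   # estimated welding-environment lighting
    scene = []
    for obj_id, mesh in dynamic_model.items():
        scene.append({
            "mesh": mesh,
            "material": material_db.get(obj_id),                 # e.g. steel, aluminium
            "color": physical_image_info["colors"].get(obj_id),  # sampled from the photo
        })
    return renderer.render(scene, view=view, lights=lights)      # updated each scan period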
Preferably, in the welding environment modeling system described above, after the step of constructing the modeling space within the structured light scanning range, the control device is configured to:
And configuring an interaction area and a welding area in the modeling space, wherein the interaction area is an area, away from the welding platform and the welding head, in the modeling space for executing man-machine interaction, and the welding area is an area, in the modeling space, covering the welding platform and the welding head and used for welding a welding object.
Preferably, in the welding environment modeling system, the area away from the welding platform and the welding head is an area outside a maximum movement range of the numerical control mechanical arm of the welding device.
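The partition of the modeling space can be pictured with the following sketch, which treats all regions as axis-aligned boxes and keeps only the part of the space beyond the arm's reach along one axis; a real system would use the full 3D complement of the reach volume. All names are assumptions for the example.

def configure_regions(space_min, space_max, arm_reach_box):
    """Split the modeling space into a welding area (covering the welding platform
    and welding head, approximated here by the arm's maximum movement range) and an
    interaction area (the part of the modeling space the arm can never reach).
    Boxes are ((xmin, ymin, zmin), (xmax, ymax, zmax)) tuples."""
    welding_area = arm_reach_box
    inter_min = (arm_reach_box[1][0], space_min[1], space_min[2])  # just beyond max reach in x
    interaction_area = (inter_min, space_max)
    return {"welding_area": welding_area, "interaction_area": interaction_area}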
Preferably, in the welding environment modeling system described above, in the step of performing a corresponding interaction operation according to the pose of each newly added object in the dynamic model, the control device is configured to:
Reading pre-configured interactive object information, wherein the interactive object information comprises a standard model of an interactive object, a motion gesture and an interactive instruction associated with the motion gesture;
Matching the newly added object model with the standard model of the interactive object;
Judging whether the newly added object model contains the model of the interaction object according to the matching result;
And when the newly added object model comprises the model of the interaction object, controlling the welding equipment to enter an interaction state.
Preferably, in the welding environment modeling system, the interactive object includes a hand, a head or a whole body of a human body.
Preferably, in the welding environment modeling system described above, in the step of controlling the welding apparatus to enter the interactive state, the control device is configured to:
judging whether the welding equipment is in a welding state or not;
Judging whether the welding equipment meets a stop condition or not when the welding equipment is in a welding state;
When the welding equipment meets the stop condition, controlling the welding equipment to stop welding;
And moving a welding head of the welding equipment to a preset position in the modeling space.
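Putting the last two passages together, the interaction-entry branch might be sketched as below. The recognition labels and the welder controller interface (is_welding, stop_condition_met, stop_welding, park_head, enter_interaction_state) are assumptions for the example.

INTERACTION_CLASSES = {"hand", "head", "person"}  # assumed object-recognition labels

def maybe_enter_interaction(new_objects, welder):
    """If a newly added object model matches a pre-configured interaction object
    (e.g. an operator's hand), check the stop condition, halt welding if it is met,
    park the welding head at the preset position and enter the interaction state."""
    if not any(obj["cls"] in INTERACTION_CLASSES for obj in new_objects):
        return False
    if welder.is_welding() and welder.stop_condition_met():
        welder.stop_welding()
        welder.park_head()             # move the welding head to the preset position
    welder.enter_interaction_state()
    return True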
Preferably, in the welding environment modeling system described above, in the step of controlling the welding apparatus to enter the interactive state, the control device is configured to:
Displaying the physical image on a display device of the welding equipment;
Acquiring the size of the interaction area in the physical image displayed on the display device;
Scaling the welding area in the virtual image to match the size of the interaction area in the physical image;
And displaying the welding area in the virtual image at a position corresponding to the interaction area in the physical image on the display device.
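The on-screen composition can be expressed with simple rectangle arithmetic, as in the sketch below; rectangles are given in display pixels, and the uniform-scaling choice is an assumption for the example.

def fit_weld_area_into_interaction_area(weld_area_px, interaction_area_px):
    """Scale the welding area of the virtual image so that it fits the on-screen
    size of the interaction area in the physical image, and report where to draw it.
    Rectangles are (x, y, width, height) tuples in display pixels."""
    ix, iy, iw, ih = interaction_area_px
    wx, wy, ww, wh = weld_area_px
    scale = min(iw / ww, ih / wh)            # uniform scale so the weld view fits entirely
    new_w, new_h = int(ww * scale), int(wh * scale)
    draw_x = ix + (iw - new_w) // 2          # centre the scaled weld view in the interaction area
    draw_y = iy + (ih - new_h) // 2
    return scale, (draw_x, draw_y, new_w, new_h)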
Preferably, in the welding environment modeling system described above, in the step of performing a corresponding interaction operation according to the pose of each newly added object in the dynamic model, the control device is configured to:
Acquiring a first pose of the interaction object within the interaction area in the physical image;
acquiring a second pose of the welding object within the welding area in the virtual image;
after the welding area of the virtual image has been displayed on the display device at the position corresponding to the interaction area in the physical image, acquiring the positional relationship between the first pose and the second pose within the interaction area in the physical image;
and identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object within the interaction area in the physical image.
Preferably, in the above welding environment modeling system, the interactive operation includes selecting a welding position, reducing or enlarging the welding area of the virtual image in the interactive area of the physical image, predicting a welding result, or modifying a welding parameter.
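One way to map the pose relationship onto those interactive operations is sketched below; the gesture labels and the pick radius are assumptions for the example, not gestures prescribed by the patent.

def classify_interaction(first_pose, second_pose, pick_radius=0.03):
    """Map the relation between the interaction object's pose (e.g. the operator's
    hand in the interaction area) and the welding object's pose in the overlaid
    virtual image to one of the interactive operations. Poses are dicts with a
    "position" 3-tuple and an optional "gesture" label."""
    dx, dy, dz = (a - b for a, b in zip(first_pose["position"], second_pose["position"]))
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    gesture = first_pose.get("gesture")
    if gesture == "pinch":
        return "zoom_weld_area"           # shrink or enlarge the displayed welding area
    if gesture == "point" and distance <= pick_radius:
        return "select_weld_position"     # hand is pointing at a spot on the weld object
    if gesture == "open_palm":
        return "predict_weld_result"
    if gesture == "drag":
        return "modify_weld_parameters"
    return "none"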
Preferably, in the welding environment modeling system described above, after the step of identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object in the physical image, the control device is configured to:
When the interactive operation is identified as a predicted welding result, inputting the current welding parameters and the welding state into a pre-trained welding result prediction model to predict the welding result;
generating a welding result model of the welding object after welding is completed according to the predicted welding result;
replacing the welding object model with the welding result model in the dynamic model;
Generating a virtual image of the dynamic model including the welding result model;
And displaying the welding area in the virtual image at a position corresponding to the interaction area in the physical image on the display device.
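The predict-weld-result branch could then be wired up as in the following sketch, where predictor stands in for the pre-trained welding result prediction model, and build_result_mesh and render are assumed helpers.

def show_predicted_result(weld_params, weld_state, predictor, dynamic_model,
                          weld_object_id, build_result_mesh, render):
    """Feed the current parameters and welding state to the prediction model, build
    a welding result model from its output, substitute it for the welding-object
    model in the dynamic model, and re-render the virtual image for the display."""
    predicted = predictor(weld_params, weld_state)              # e.g. bead shape, penetration
    result_mesh = build_result_mesh(dynamic_model[weld_object_id], predicted)
    updated_model = dict(dynamic_model)
    updated_model[weld_object_id] = result_mesh                 # replace the welding object model
    return render(updated_model)                                # virtual image shown on the display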
Preferably, in the welding environment modeling system described above, after the step of identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object in the physical image, the control device is configured to:
When the interactive operation is identified as modifying welding parameters, configuring the predicted portion of the welding result model into a modifiable state, wherein the predicted portion is the part of the welding result model that differs from the physical welding object shown in the physical image;
modifying the shape or position of the predicted portion according to the motion state of the interactive object in the interactive region of the physical image;
Controlling the welding equipment to exit from the interaction state according to the interaction operation or an external control instruction;
generating corresponding welding parameters according to the modification result of the prediction part;
And controlling the welding equipment to weld the welding object with the modified welding parameters.
Preferably, in the above welding environment modeling system, the welding result model includes a solid portion and a predicted portion; the solid portion comprises the body of the welding object together with the already-molten welding material or body material in the welded area, and the predicted portion is the molten welding material or body material predicted from the current welding parameters and welding state.
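The welding result model and the modify-parameters branch can be pictured with a small data structure, sketched below; the field names and the params_from_shape inverse mapping are assumptions introduced for the example.

from dataclasses import dataclass

@dataclass
class WeldResultModel:
    # solid portion: welding-object body plus already-molten material, taken from the scans
    solid_mesh: object
    # predicted portion: molten material predicted from the current parameters and state
    predicted_mesh: object
    predicted_editable: bool = False  # set True when "modify welding parameters" is selected

def apply_user_modification(result, edited_mesh, params_from_shape):
    """When the operator reshapes or repositions the editable predicted portion,
    derive the corresponding welding parameters from the edited geometry."""
    if not result.predicted_editable:
        raise RuntimeError("predicted portion is not in a modifiable state")
    result.predicted_mesh = edited_mesh
    return params_from_shape(edited_mesh)  # new parameters used for the actual weld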
As shown in fig. 2, a second aspect of the present invention proposes an interactive welding environment modeling method, including:
Constructing a modeling space within a structured light scanning range, wherein the structured light scanning range is the union of the scanning ranges covered by two or more structured light scanning devices, the modeling space falls within the intersection of the scanning ranges of the two or more structured light scanning devices, and the modeling space covers the welding platform of the welding equipment;
Controlling the structured light scanning devices to scan the modeling space under a static condition to generate a static model of the modeling space, wherein the static condition is that the objects in the modeling space are at rest;
In a working state, controlling the structured light scanning devices to periodically scan the modeling space at a preset high-frequency scanning period to obtain structured light periodic scan images of the modeling space, wherein the structured light periodic scan images comprise the structured light scan images obtained by synchronous scanning of the structured light scanning devices in each scanning period;
Judging whether an object in the modeling space moves or changes according to the structured light periodic scan images;
When an object in the modeling space moves or changes, generating a dynamic model of the modeling space based on the structured light periodic scan images;
Performing object recognition on the static model and the dynamic model to obtain object types and numbers in the static model and the dynamic model;
When the types and the number of the objects in the dynamic model are inconsistent with those of the static model, determining the pose of each newly added object model in the dynamic model;
And executing corresponding interaction operation according to the pose of each newly added object in the dynamic model.
Preferably, in the above welding environment modeling method, the pose of the added object includes position and pose data of the added object, and specifically includes a shape of the added object and coordinates of each local area of the added object in the modeling space.
Preferably, in the above welding environment modeling method, the step of controlling the structured light scanning device to scan the modeling space under the static condition to generate a static model of the modeling space specifically includes:
Acquiring the current coordinates of a welding head of the welding equipment;
Judging whether the welding head is in the modeling space according to the current coordinates of the welding head;
if not, controlling a numerical control mechanical arm of the welding equipment to move the welding head to a preset position in the modeling space;
controlling the two or more structured light scanning devices to scan the modeling space to obtain two or more structured light scanning images in the modeling space;
Analyzing the structured light scanning image to obtain shape and size parameters of an object in the modeling space, wherein the object in the modeling space comprises a welding platform and a welding head of the welding equipment;
generating a stereoscopic model of the object according to the shape and size parameters of the object in the modeling space, wherein the stereoscopic model of the object in the modeling space forms the static model.
Preferably, in the above welding environment modeling method, the step of determining whether the object in the modeling space moves or changes according to the structural light periodic scan image specifically includes:
Acquiring a first structured light scanning image for constructing the static model;
Acquiring a second structured light scanning image of the last scanning period in the structured light periodic scanning image;
matching the first structured light scanning image against the structured light scanning image of the same station in the second structured light scanning image, wherein structured light scanning images of the same station are those output by the same structured light scanning device;
and when the first structured light scanning image is inconsistent with the structured light scanning image of any station in the second structured light scanning image, determining that an object in the modeling space has moved or changed.
Preferably, in the above welding environment modeling method, the step of determining a pose of each newly added object model in the dynamic model specifically includes:
Respectively extracting independent object models from the static model and the dynamic model according to an object recognition result;
Matching the positions and the shapes of the object models in the static model and the dynamic model;
establishing a corresponding relation between the static model and an object model of the same object in the dynamic model, wherein the object model of the same object is an object model generated in the static model and the dynamic model based on the same object in a real environment;
Determining an object model which does not have a corresponding relation with the object model in the static model in the dynamic model as a newly added object model;
and acquiring the position and posture data of the newly added object model from the dynamic model.
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model specifically includes:
Reading pre-configured welding object information, wherein the welding object information comprises a standard model, a welding position and welding parameters of a welding object;
matching the newly added object model with the standard model of the welding object;
judging whether the newly added object model contains the model of the welding object according to the matching result;
And when the newly added object model comprises the model of the welding object, controlling the welding head to weld the welding object according to the pose of the model of the welding object, the welding position and the welding parameters.
Preferably, in the above welding environment modeling method, after the step of generating the dynamic model of the modeling space based on the structural light periodic scan image, the method further includes:
Acquiring a physical image of the modeling space through an image acquisition device;
reading material information of an object in the dynamic model according to an object identification result;
analyzing the physical image to obtain visual angle information of the physical image, color information of an object corresponding to the dynamic model and light source information of a welding environment;
Rendering an object model in the dynamic model based on the material information, the visual angle information, the color information and the light source information to generate a virtual image of the dynamic model;
Displaying the virtual image on a display device of the welding equipment;
Updating the virtual image displayed on the display device based on the high frequency scanning period.
Preferably, in the above welding environment modeling method, after the step of constructing the modeling space in the structured light scanning range, the method further includes:
And configuring an interaction area and a welding area in the modeling space, wherein the interaction area is an area, away from the welding platform and the welding head, in the modeling space for executing man-machine interaction, and the welding area is an area, in the modeling space, covering the welding platform and the welding head and used for welding a welding object.
Preferably, in the above welding environment modeling method, the area away from the welding platform and the welding head is an area outside a maximum movement range of a numerical control mechanical arm of the welding device.
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object in the dynamic model specifically includes:
Reading pre-configured interactive object information, wherein the interactive object information comprises a standard model of an interactive object, a motion gesture and an interactive instruction associated with the motion gesture;
Matching the newly added object model with the standard model of the interactive object;
Judging whether the newly added object model contains the model of the interaction object according to the matching result;
And when the newly added object model comprises the model of the interaction object, controlling the welding equipment to enter an interaction state.
Preferably, in the above welding environment modeling method, the interactive object includes a hand, a head, or a whole body of a human body.
Preferably, in the above welding environment modeling method, the step of controlling the welding device to enter the interactive state includes:
judging whether the welding equipment is in a welding state or not;
Judging whether the welding equipment meets a stop condition or not when the welding equipment is in a welding state;
When the welding equipment meets the stop condition, controlling the welding equipment to stop welding;
And moving a welding head of the welding equipment to a preset position in the modeling space.
Preferably, in the above welding environment modeling method, the step of controlling the welding device to enter the interaction state specifically includes:
Displaying the physical image on a display device of the welding equipment;
Acquiring the size of the interaction area in the physical image displayed on the display device;
Scaling the welding area in the virtual image to match the size of the interaction area in the physical image;
And displaying the welding area in the virtual image at a position corresponding to the interaction area in the physical image on the display device.
Preferably, in the above welding environment modeling method, the step of executing the corresponding interaction operation according to the pose of each newly added object in the dynamic model further includes:
Acquiring a first pose of the interaction object within the interaction area in the physical image;
acquiring a second pose of the welding object within the welding area in the virtual image;
after the welding area of the virtual image has been displayed on the display device at the position corresponding to the interaction area in the physical image, acquiring the positional relationship between the first pose and the second pose within the interaction area in the physical image;
and identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object within the interaction area in the physical image.
Preferably, in the above welding environment modeling method, the interactive operation includes selecting a welding position, shrinking or enlarging the welding area of the virtual image in the interactive area of the physical image, predicting a welding result, or modifying a welding parameter.
Preferably, in the above welding environment modeling method, after the step of identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object in the interaction region in the physical image, the method further includes:
When the interactive operation is identified as a predicted welding result, inputting the current welding parameters and the welding state into a pre-trained welding result prediction model to predict the welding result;
generating a welding result model of the welding object after welding is completed according to the predicted welding result;
replacing the welding object model with the welding result model in the dynamic model;
Generating a virtual image of the dynamic model including the welding result model;
And displaying the welding area in the virtual image at a position corresponding to the interaction area in the physical image on the display device.
Preferably, in the above welding environment modeling method, after the step of identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object in the interaction region in the physical image, the method further includes:
When the interactive operation is identified as modifying welding parameters, configuring the predicted portion of the welding result model into a modifiable state, wherein the predicted portion is the part of the welding result model that differs from the physical welding object shown in the physical image;
modifying the shape or position of the predicted portion according to the motion state of the interactive object in the interactive region of the physical image;
Controlling the welding equipment to exit from the interaction state according to the interaction operation or an external control instruction;
generating corresponding welding parameters according to the modification result of the prediction part;
And controlling the welding equipment to weld the welding object with the modified welding parameters.
Preferably, in the above welding environment modeling method, the welding result model includes a solid portion and a predicted portion; the solid portion comprises the body of the welding object together with the already-molten welding material or body material in the welded area, and the predicted portion is the molten welding material or body material predicted from the current welding parameters and welding state.
It should be noted that in this document relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Embodiments in accordance with the present invention, as described above, are not intended to be exhaustive or to limit the invention to the precise embodiments disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, to thereby enable others skilled in the art to best utilize the invention and various modifications as are suited to the particular use contemplated. The invention is limited only by the claims and the full scope and equivalents thereof.

Claims (5)

1. An interactive welding environment modeling method applied to a welding environment modeling system, characterized in that the welding environment modeling system comprises welding equipment, the welding equipment comprising a welding platform for placing a welding object, a welding head for welding the welding object on the welding platform, and a numerical control mechanical arm for fixing the welding head and controlling its movement; the welding environment modeling system further comprises structured light scanning devices, fixedly arranged at one side of the welding platform with at least two non-parallel scanning directions, for scanning and modeling a modeling space, an image acquisition device for acquiring a physical image of the modeling space, and a display device for displaying a dynamic model of the modeling space, each structured light scanning device comprising a structured light emitting unit and a structured light receiving unit; the welding environment modeling system further comprises a control device for controlling the welding equipment, the structured light scanning devices, the image acquisition device and the display device; the method comprises:
Constructing a modeling space within a structured light scanning range, wherein the structured light scanning range is the union region of the scanning ranges covered by two or more structured light scanning devices, the modeling space falls within the intersection region of the scanning ranges of the two or more structured light scanning devices, and the modeling space covers the welding platform of the welding equipment;
Controlling the structured light scanning device to scan the modeling space under a static condition to generate a static model of the modeling space, wherein the static condition is that the objects in the modeling space are in a static state;
In a working state, controlling the structured light scanning device to periodically scan the modeling space at a preset high-frequency scanning period to obtain a structured light periodic scan image of the modeling space, wherein the structured light periodic scan image comprises the structured light scan images obtained by synchronous scanning of the structured light scanning device in each scanning period;
Judging whether an object in the modeling space moves or changes according to the structured light periodic scan image;
When an object in the modeling space moves or changes, generating a dynamic model of the modeling space based on the structured light periodic scan image;
Performing object recognition on the static model and the dynamic model to obtain the types and numbers of objects in the static model and the dynamic model;
When the types and numbers of the objects in the dynamic model are inconsistent with those in the static model, determining the pose of each newly added object model in the dynamic model;
Executing a corresponding interaction operation according to the pose of each newly added object model in the dynamic model;
The step of determining the pose of each newly added object model in the dynamic model specifically comprises the following steps:
Respectively extracting independent object models from the static model and the dynamic model according to an object recognition result;
Matching the positions and shapes of the object models in the static model and the dynamic model;
Establishing a correspondence between object models of the same object in the static model and the dynamic model, wherein the object models of the same object are the object models generated in the static model and the dynamic model based on the same object in the real environment;
Determining an object model in the dynamic model that has no correspondence with any object model in the static model as a newly added object model;
Acquiring position and posture data of the newly added object model from the dynamic model;
The step of executing the corresponding interaction operation according to the pose of each newly added object model in the dynamic model specifically comprises the following steps:
Reading pre-configured welding object information, wherein the welding object information comprises a standard model, a welding position and welding parameters of the welding object;
Matching the newly added object model with the standard model of the welding object;
Judging whether the newly added object model contains the model of the welding object according to the matching result;
When the newly added object model comprises the model of the welding object, controlling the welding head to weld the welding object according to the pose of the model of the welding object, the welding position and the welding parameters;
After the step of generating a dynamic model of the modeling space based on the structured light periodic scan image, the method further comprises:
Acquiring a physical image of the modeling space through the image acquisition device;
Reading material information of the objects in the dynamic model according to the object recognition result;
Analyzing the physical image to obtain viewing angle information of the physical image, color information of the objects corresponding to the dynamic model, and light source information of the welding environment;
Rendering the object models in the dynamic model based on the material information, the viewing angle information, the color information and the light source information to generate a virtual image of the dynamic model;
Displaying the virtual image on the display device of the welding equipment;
Updating the virtual image displayed on the display device based on the high-frequency scanning period;
The step of executing a corresponding interaction operation according to the pose of each newly added object model in the dynamic model specifically comprises the following steps:
Reading pre-configured interaction object information, wherein the interaction object information comprises a standard model of an interaction object, a motion gesture and an interaction instruction associated with the motion gesture;
Matching the newly added object model with the standard model of the interaction object;
Judging whether the newly added object model contains the model of the interaction object according to the matching result;
And when the newly added object model comprises the model of the interaction object, controlling the welding equipment to enter an interaction state.
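Purely as a non-limiting aid to reading claim 1, the Python sketch below shows one plausible way to detect newly added object models and route them to the welding branch or the interaction branch; the data layout, the nearest-centroid matching and the label sets are assumptions of this sketch, and the claim itself does not prescribe any of them.

from dataclasses import dataclass
import numpy as np

@dataclass
class RecognizedObject:
    label: str             # object type from recognition, e.g. "workpiece" or "hand"
    centroid: np.ndarray   # 3-vector, position of the object model
    pose: np.ndarray       # 4x4 homogeneous transform of the object model

def find_new_objects(static_objs, dynamic_objs, match_dist=20.0):
    """Pair each dynamic object with an unused static object of the same label
    by nearest centroid; anything left unpaired is treated as newly added."""
    used = set()
    new_objects = []
    for dyn in dynamic_objs:
        best_i, best_d = None, match_dist
        for i, sta in enumerate(static_objs):
            if i in used or sta.label != dyn.label:
                continue
            d = float(np.linalg.norm(sta.centroid - dyn.centroid))
            if d < best_d:
                best_i, best_d = i, d
        if best_i is None:
            new_objects.append(dyn)
        else:
            used.add(best_i)
    return new_objects

def dispatch(new_objects, weld_labels=("workpiece",), interact_labels=("hand", "pointer")):
    """Route each new object model: welding objects trigger welding at their pose,
    interaction objects put the equipment into the interaction state."""
    for obj in new_objects:
        if obj.label in weld_labels:
            print("weld at pose:\n", obj.pose)   # stand-in for driving the welding head
        elif obj.label in interact_labels:
            print("enter interaction state")     # stand-in for the interaction branch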
2. The welding environment modeling method according to claim 1, wherein the modeling space includes an interaction area and a welding area, the interaction area is an area in the modeling space, away from the welding platform and the welding head, in which human-machine interaction is performed, the welding area is an area in the modeling space, covering the welding platform and the welding head, in which the welding object is welded, and the step of controlling the welding equipment to enter an interaction state specifically includes:
Displaying the physical image on the display device of the welding equipment;
Acquiring the size of the interaction area in the physical image displayed on the display device;
Scaling the welding area of the virtual image to match the size of the interaction area in the physical image;
And displaying the welding area of the virtual image on the display device at the position corresponding to the interaction area of the physical image.
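As a non-authoritative illustration of the scaling and display step in claim 2, the sketch below resizes a rendered welding-area view to the pixel size of the interaction area and blends it into the displayed physical image; the function name, the (x, y, w, h) rectangle convention and the alpha blend are assumptions, and both images are assumed to share the same channel layout and data type.

import cv2
import numpy as np

def overlay_welding_area(physical_img: np.ndarray,
                         welding_area_img: np.ndarray,
                         interaction_rect: tuple,
                         alpha: float = 0.6) -> np.ndarray:
    """interaction_rect is (x, y, w, h) of the interaction area in the displayed
    physical image; the welding-area rendering is resized to match that size."""
    x, y, w, h = interaction_rect
    scaled = cv2.resize(welding_area_img, (w, h), interpolation=cv2.INTER_LINEAR)
    out = physical_img.copy()
    roi = out[y:y + h, x:x + w]
    # Blend the scaled welding-area view over the interaction area of the physical image.
    out[y:y + h, x:x + w] = cv2.addWeighted(scaled, alpha, roi, 1.0 - alpha, 0.0)
    return out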
3. The welding environment modeling method of claim 2, wherein the step of executing a corresponding interaction operation according to the pose of each newly added object model in the dynamic model further comprises:
Acquiring a first pose of the interaction object in the interaction area of the physical image;
Acquiring a second pose of the welding object in the welding area of the virtual image;
After displaying the welding area of the virtual image on the display device at the position corresponding to the interaction area of the physical image, acquiring the positional relationship between the first pose and the second pose in the interaction area of the physical image;
And identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object in the interaction area of the physical image.
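The positional relationship in claim 3 could, for example, be reduced to a pixel distance once both poses are projected into the interaction area of the physical image; the thresholds and operation names in the sketch below are illustrative assumptions only.

import numpy as np

def classify_interaction(first_pose_px: np.ndarray,   # interaction object, e.g. a fingertip
                         second_pose_px: np.ndarray,  # welding object in the overlaid view
                         touch_radius_px: float = 15.0,
                         near_radius_px: float = 60.0) -> str:
    """Map the distance between the two poses to an interaction operation."""
    dist = float(np.linalg.norm(np.asarray(first_pose_px) - np.asarray(second_pose_px)))
    if dist <= touch_radius_px:
        return "modify_welding_parameters"   # touching the virtual weld bead
    if dist <= near_radius_px:
        return "predict_welding_result"      # hovering near the welding object
    return "none"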
4. The welding environment modeling method as defined in claim 3, further comprising, after the step of identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object in the interaction area of the physical image:
When the interaction operation is identified as predicting the welding result, inputting the current welding parameters and welding state into a pre-trained welding result prediction model to predict the welding result;
Generating a welding result model of the welding object after welding is completed according to the predicted welding result;
Replacing the model of the welding object in the dynamic model with the welding result model;
Generating a virtual image of the dynamic model including the welding result model;
And displaying the welding area of the virtual image on the display device at the position corresponding to the interaction area of the physical image.
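Claim 4 leaves the welding result prediction model unspecified, so the sketch below treats it as an opaque callable; the dictionary-based dynamic model, the WeldingState fields and the key "welding_object" are assumptions introduced only for illustration.

from dataclasses import dataclass
from typing import Callable, Dict
import numpy as np

@dataclass
class WeldingState:
    torch_position: np.ndarray   # current position of the welding head
    bead_progress: float         # fraction of the seam already welded

def preview_welding_result(dynamic_model: Dict[str, np.ndarray],
                           params: Dict[str, float],
                           state: WeldingState,
                           predictor: Callable[[Dict[str, float], WeldingState], np.ndarray]
                           ) -> Dict[str, np.ndarray]:
    """Run the (pre-trained, assumed) predictor and replace the welding-object
    entry of the dynamic model with the predicted welding-result geometry."""
    predicted_points = predictor(params, state)
    updated = dict(dynamic_model)
    updated["welding_object"] = predicted_points   # the result model supersedes the raw object
    return updated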
5. The welding environment modeling method according to claim 4, further comprising, after the step of identifying the interaction operation corresponding to the first pose of the interaction object according to the positional relationship between the first pose of the interaction object and the second pose of the welding object in the interaction area of the physical image:
When the interaction operation is identified as modifying welding parameters, configuring the predicted portion of the welding result model into a modifiable state, wherein the predicted portion is the difference between the welding result model and the physical welding object in the physical image;
Modifying the shape or position of the predicted portion according to the motion state of the interaction object in the interaction area of the physical image;
Controlling the welding equipment to exit the interaction state according to the interaction operation or an external control instruction;
Generating corresponding welding parameters according to the modification result of the predicted portion;
And controlling the welding equipment to weld the welding object with the modified welding parameters.
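One possible, purely illustrative reading of claim 5's modification step is sketched below: a drag of the interaction object translates the predicted portion, and the translation is folded back into an updated welding parameter. The pixel-to-millimetre conversion and the "seam_offset_mm" parameter are assumptions of this sketch, not part of the claimed method.

import numpy as np

def modify_prediction(predicted_points, drag_vector_px, px_to_mm):
    """Translate the predicted (not yet welded) portion by the user's drag in the
    image, converted to millimetres; the drag is assumed to lie in the X-Y plane."""
    drag_mm = np.asarray(drag_vector_px, dtype=float) * px_to_mm
    offset = np.array([drag_mm[0], drag_mm[1], 0.0])
    return np.asarray(predicted_points, dtype=float) + offset

def parameters_from_prediction(original_params: dict,
                               original_points: np.ndarray,
                               modified_points: np.ndarray) -> dict:
    """Derive updated parameters from how far the predicted bead was moved;
    here only a seam offset is regenerated, everything else is carried over."""
    shift = modified_points.mean(axis=0) - original_points.mean(axis=0)
    params = dict(original_params)
    params["seam_offset_mm"] = params.get("seam_offset_mm", 0.0) + float(np.linalg.norm(shift))
    return params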
CN202310400500.3A 2023-04-10 2023-04-10 Interactive welding environment modeling system and method Active CN116352323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310400500.3A CN116352323B (en) 2023-04-10 2023-04-10 Interactive welding environment modeling system and method

Publications (2)

Publication Number Publication Date
CN116352323A (en) 2023-06-30
CN116352323B (en) 2024-07-30

Family

ID=86904774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310400500.3A Active CN116352323B (en) 2023-04-10 2023-04-10 Interactive welding environment modeling system and method

Country Status (1)

Country Link
CN (1) CN116352323B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107052571A (en) * 2016-12-27 2017-08-18 深圳信息职业技术学院 A kind of laser welding equipment and laser welding method
CN109685894A (en) * 2018-12-03 2019-04-26 东莞理工学院 A kind of bifurcation full-view modeling system
CN209279907U (en) * 2018-08-06 2019-08-20 殷煜翔 Structure light dynamic 3 D scans demonstrator

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111037549B (en) * 2019-11-29 2022-09-09 重庆顺泰铁塔制造有限公司 Welding track processing method and system based on 3D scanning and TensorFlow algorithm
CN113190120B (en) * 2021-05-11 2022-06-24 浙江商汤科技开发有限公司 Pose acquisition method and device, electronic equipment and storage medium
CN113369636A (en) * 2021-06-09 2021-09-10 深圳市集新自动化有限公司 Numerical control welding system and control method
CN115713581A (en) * 2022-11-28 2023-02-24 北京百度网讯科技有限公司 Dynamic model generation method, device and equipment
CN115592324B (en) * 2022-12-08 2023-05-16 孙军 Automatic welding robot control system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240709

Address after: 518000 Building 11, Second Industrial Zone, Li Songqian Community, Gongming Street, Guangming District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen Chendong smart home Co.,Ltd.

Country or region after: China

Address before: Room 1203, Building B, Tefa Information Port, No.2 Kefeng Road, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: SHENZHEN BASICAE SOFTWARE TECHNOLOGY CO.,LTD.

Country or region before: China

GR01 Patent grant