CN117115398B - A virtual-real fusion digital twin fluid phenomenon simulation method
- Publication number
- CN117115398B (application CN202311014639.0A)
- Authority
- CN
- China
- Prior art keywords
- scene
- fluid
- real
- dimensional
- indoor scene
- Prior art date: 2023-08-11
- Legal status: Active (assumption; not a legal conclusion)
Classifications

- G06T19/006—Mixed reality
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G06T15/205—Image-based rendering
- G06T15/506—Illumination models
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
- G06V20/36—Indoor scenes
- G06V20/64—Three-dimensional objects
- G06T2207/10024—Color image
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20084—Artificial neural networks [ANN]
- G06T2207/30244—Camera pose
- G06T2210/24—Fluid dynamics
- G06T2210/56—Particle system, point based geometry or rendering
Abstract
Embodiments of the present disclosure provide a virtual-real fusion digital twin fluid phenomenon simulation method. The method comprises: collecting an indoor scene depth map and an indoor scene RGB map, and completing the depth map according to the RGB map; reconstructing a three-dimensional scene point cloud and determining its point cloud semantics; performing physics-aware inverse twin construction of the three-dimensional fluid scene to generate a twin fluid scene; continuously updating a tracking frame data set from human torso and hand skeletal motion; and controlling a sensor to measure the real scene, obtaining real-environment color information, and establishing several virtual light sources in the twin fluid scene to simulate the real light field, so as to display the digital twin fluid scene. This implementation ensures the normativity of the research process and the attainability of the intended research targets, and improves the realism and immersion of the mixed scene.
Description
Technical Field
Embodiments of the present disclosure relate to the technical field of mixed-reality-based fluid simulation and interaction, and in particular to a virtual-real fusion digital twin fluid phenomenon simulation method.
Background
Traditional computer-graphics-based fluid simulation and interaction applications typically run entirely in a virtual simulation environment and cannot perceive or fuse with real scenes. Constrained by the solving efficiency of the physical system and by rendering precision, such applications mostly render output image sequences offline to generate animations, and their interactive feedback and control capabilities for users are weak.
Compared with finely re-sculpting scene geometry, twin reconstruction of a fluid interaction scene focuses more on the real-time performance and robustness of the system. Extracting simulation boundary geometric features quickly in complex scenes, and keeping the tracking system stable under low texture, variable illumination, and rapid sensor motion, remain open problems; moreover, quickly and accurately tracking and extracting interaction plane structures, and even semantic information, during reconstruction is an important task for supporting more realistic and intelligent fluid simulation applications. Reproducing fluids consistent with real-scene behavior, and enabling physics-based modeling and evolution in a mixed reality environment, still faces a series of challenges. First, the means for capturing fluid data of a real scene, and which data to acquire, must be determined by the actual application scenario. Second, traditional iterative optimization methods that infer a velocity field from continuous surface geometry or a continuous density field are time-consuming and imprecise, so recovering the full flow field data from the acquired data in real time is a challenging problem. Finally, quickly acquiring the physical properties that affect fluid behavior from observed data, so as to support physics-based dynamic fluid evolution and meet the virtual fluid's perception and fusion requirements on the real environment, requires further in-depth research. For physically realistic real-time virtual-real fusion interaction, resolving close-range real-time interaction between the user, scene boundaries, and virtual objects, and feeding the virtual scene's interaction forces back to the user, are also important for increasing the immersion and realism of the user experience. The real-time rendering problems of large-scale fluids in a virtual-real fusion environment appear both in acquiring scene information from the environment and in the speed and quality of fluid rendering. Although much research addresses estimating the light field of a real scene, a complete global illumination distribution requires a close association between the light field and the objects in the scene, and how best to combine digital twinning of the three-dimensional scene with light field acquisition is a key issue.
The development of virtual-real fusion technology places higher requirements on the spatio-temporal consistency of virtual scenes and on natural interaction with virtual objects. Physical simulation applications for mixed reality scenes therefore need the abilities to quickly perceive and reconstruct the simulation environment boundary, accurately acquire real fluid properties, generate dynamic fluid, and provide timely interaction feedback between evolvable virtual fluid and real user behavior. Around three research threads, namely rapid three-dimensional scene reconstruction from depth images for virtual-real fusion fluid simulation, physics-aware fluid parameter acquisition and evolution simulation, and virtual-real fusion human-machine interaction and control, a typical demonstration application scenario is constructed.
Disclosure of Invention
This section is intended to introduce concepts in a simplified form that are further described below in the detailed description. It is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter.
Some embodiments of the present disclosure propose a virtual-real fusion digital twin fluid phenomenon simulation method to solve one or more of the technical problems mentioned in the background section above.
The method addresses the following technical problems: depth-image-based three-dimensional scene reconstruction quickly acquires real spatial information and provides scene boundaries for virtual fluid simulation and interaction, enabling virtual fluid scene modeling based on real-space characteristics; physics-aware fluid parameter acquisition and evolution simulation improve the realism and efficiency of the simulation; and virtual-real fusion human-machine interaction control captures the user's motion information and feeds it back to the virtual fluid, realizing interaction between the virtual fluid and the real user. Together these build a user-scene-physical-phenomenon virtual-real fusion interaction loop and improve the virtual-real interaction experience.
Some embodiments of the present disclosure provide a virtual-real fusion digital twin fluid phenomenon simulation method comprising: collecting an indoor scene depth map and an indoor scene RGB map, and completing the depth map according to the RGB map to generate a completed indoor scene depth map; reconstructing a three-dimensional scene point cloud from the indoor scene RGB map and determining the point cloud semantics of the three-dimensional scene point cloud; performing physics-aware inverse twin construction of the three-dimensional fluid scene to generate a twin fluid scene; continuously updating and computing a tracking frame data set from human torso skeletal motion and hand skeletal motion; and controlling a sensor to measure the real scene, obtaining real-environment color information, and establishing several virtual light sources in the twin fluid scene to simulate the real light field information, so as to display the digital twin fluid scene.
The virtual-real fusion digital twin fluid phenomenon simulation method of the embodiments of the present disclosure ensures, to the greatest extent, the normativity of the research process and the attainability of the intended research targets, improves the realism and immersion of the mixed scene, and realizes scene-rich, vividly detailed, real-time, and efficient human-machine interaction.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
Fig. 1 is a flow chart of some embodiments of a virtual-real fusion digital twin fluid phenomenon simulation method according to the present disclosure.
Fig. 2 is a flow chart of fluid parameter acquisition and evolution simulation of some embodiments of a virtual-real fusion digital twin fluid phenomenon simulation method according to the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings. Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "a" or "an" in this disclosure are illustrative rather than limiting, and those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates a flow 100 of some embodiments of the virtual-real fusion digital twin fluid phenomenon simulation method according to the present disclosure. The method comprises the following steps.

Step 101, collect an indoor scene depth map and an indoor scene RGB map, and complete the indoor scene depth map according to the indoor scene RGB map to generate a completed indoor scene depth map.

Continuously acquire the indoor scene depth map and the indoor scene RGB map of the indoor scene. Taking the valid depth information on the depth map as a prior, and using the geometric range constraints provided by the color information of the RGB map, features extracted by a convolutional neural network are fused into the image reconstruction process of an autoencoder to generate a complete, completed indoor scene depth map and improve the depth information quality.
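As a concrete illustration, the following is a minimal sketch of such an RGB-guided depth completion network in PyTorch; the layer sizes, the late-fusion strategy, and the input resolution are illustrative assumptions rather than the specific network of this disclosure.

```python
# Hypothetical sketch: an encoder-decoder ("self-encoder") that fuses RGB
# features with the raw depth prior to reconstruct a dense depth map.
import torch
import torch.nn as nn

class DepthCompletionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shallow encoders for the RGB color constraint and the depth prior
        self.rgb_enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.depth_enc = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Decoder reconstructs a dense depth map from the fused features
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1))

    def forward(self, rgb, raw_depth):
        fused = torch.cat([self.rgb_enc(rgb), self.depth_enc(raw_depth)], dim=1)
        return self.dec(fused)

net = DepthCompletionNet()
dense = net(torch.rand(1, 3, 240, 320), torch.rand(1, 1, 240, 320))
print(dense.shape)  # torch.Size([1, 1, 240, 320])
```

In a setup like this, the valid pixels of the raw depth map would also supervise the reconstruction loss, so the network only has to fill in the missing regions.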
Step 102, reconstruct the three-dimensional scene point cloud according to the indoor scene RGB map, and determine the point cloud semantics of the three-dimensional scene point cloud.

Using parallel computation, extract well-distributed ORB (Oriented FAST and Rotated BRIEF) features and their neighborhood ranges on the indoor scene RGB map, and extract high-dimensional features of multi-level RGB-D image pairs through a two-dimensional convolutional neural network.
Acquire the depth information of the feature points and their neighborhoods from the completed indoor scene depth map, and match and associate the features of adjacent frames through the feature and neighborhood geometric distribution information; then compute the camera pose under 2D-3D and 3D-3D spatial feature matching through the perspective-n-point (PnP) method and the iterative closest point (ICP) method respectively, with the optimization functions:

$$T^{*}=\arg\min_{T}\sum_{i}\left\|u_{i}-K\,T\,p_{i}'\right\|_{2}^{2}\qquad\text{(2D-3D, PnP)}$$

$$T^{*}=\arg\min_{T}\sum_{i}\left\|p_{i}-T\,p_{i}'\right\|_{2}^{2}\qquad\text{(3D-3D, ICP)}$$

where $u_i$ is the two-dimensional coordinate of the three-dimensional feature point $p_i$ in the image coordinate system, $p_i'$ is the three-dimensional feature associated with $p_i$ under a different view angle, $K$ is the camera intrinsic matrix, $T$ is the camera extrinsic matrix, $T^{*}$ is the sensor pose obtained after optimization, and $\|\cdot\|_2^2$ denotes the squared 2-norm.
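For the 2D-3D case, the reprojection objective above is what off-the-shelf PnP solvers minimize. Below is a small sketch using OpenCV; the synthetic correspondences stand in for matched ORB features (in the actual pipeline the 3D points come from the completed depth map), and the intrinsics are illustrative.

```python
import cv2
import numpy as np

K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])                    # assumed camera intrinsics
pts3d = np.array([[0, 0, 2], [1, 0, 2.5], [0, 1, 3], [1, 1, 2],
                  [-1, 0.5, 2.2], [0.5, -1, 2.8]], dtype=np.float64)
rvec_gt = np.array([0.1, -0.2, 0.05])              # ground-truth rotation (Rodrigues)
tvec_gt = np.array([0.3, -0.1, 0.4])               # ground-truth translation
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)

# Minimize the reprojection error sum_i ||u_i - K T p_i'||^2 over the pose T
ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None)
print(ok, rvec.ravel(), tvec.ravel())              # recovers rvec_gt / tvec_gt
```

For the 3D-3D case, a point-to-point ICP registration (for example Open3D's `registration_icp`) minimizes the second objective in the same way.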
According to the clarity and motion range of the images, extract key image frames and predict their poses in the world coordinate system; then, using the restored high-quality depth information, project the key frame pixels into three-dimensional space through parallel computation to form a dense point cloud.
Update and fuse the discrete representations from different view angles in parallel using voxel filtering, obtaining the initial representation of the reconstructed three-dimensional scene point cloud.
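A minimal sketch of this fusion step, assuming Open3D as the point cloud library; the voxel size and the random stand-in clouds are illustrative.

```python
import numpy as np
import open3d as o3d

# Two per-view point clouds, already transformed into the world frame
views = [np.random.rand(5000, 3), np.random.rand(5000, 3) + 0.01]
merged = o3d.geometry.PointCloud()
merged.points = o3d.utility.Vector3dVector(np.vstack(views))

# One representative point per voxel keeps the fused cloud compact and
# deduplicates overlapping observations from different view angles
fused = merged.voxel_down_sample(voxel_size=0.02)
print(len(fused.points))
```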
Encode the geometric-semantic features of the initial point cloud through a graph neural network model, and predict the complete geometric information and the point cloud semantic categories with a geometric completion decoder and a semantic classification decoder respectively.
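The shared-encoder, two-decoder layout can be sketched as follows; a PointNet-style per-point MLP stands in for the graph neural network here, and the class count and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class GeomSemNet(nn.Module):
    def __init__(self, num_classes=13):
        super().__init__()
        # Shared geometric-semantic feature encoder (GNN stand-in)
        self.encoder = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                     nn.Linear(64, 128), nn.ReLU())
        self.geom_head = nn.Linear(128, 3)           # geometric completion offsets
        self.sem_head = nn.Linear(128, num_classes)  # per-point semantic logits

    def forward(self, pts):                          # pts: (B, N, 3)
        feats = self.encoder(pts)
        return pts + self.geom_head(feats), self.sem_head(feats)

net = GeomSemNet()
completed, logits = net(torch.rand(2, 1024, 3))
print(completed.shape, logits.shape)  # (2, 1024, 3) (2, 1024, 13)
```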
Step 103, perform inverse twin construction of the three-dimensional fluid scene based on physical perception to generate a twin fluid scene.

Concatenate the fluid surface height field time sequence $h$ with a 2D label $l_s$ that distinguishes fluid and solid regions, input them into a stacked autoencoder with inter-layer skip connections, and train the network to output the 2D velocity field $u_s$ of the fluid surface, adopting the $L_2$ norm as the loss function.

Combining the height field time sequence $h$ and the 2D label $l_s$, apply a convolutional neural network $G_{param}$ to estimate the fluid viscosity $\hat{\nu}$ from the generated multi-frame surface velocity field $u_s^{\{t,t+1,t+2\}}$, namely:

$$\hat{\nu}=G_{param}\left(u_s^{\{t,t+1,t+2\}},\,h,\,l_s\right)$$

The loss function uses the $L_1$ norm; meanwhile, a 3D convolutional network $G_v$ is used to infer the internal 3D velocity field information along the gravity axis from the surface 2D velocity field.
The final layer output of the network is element-wise multiplied by the obstacle mask so that the velocity in non-fluid regions is 0, creating the twin fluid scene. A flow diagram of fluid parameter acquisition and evolution simulation for some embodiments of the virtual-real fusion digital twin fluid phenomenon simulation method of the present disclosure is shown in Fig. 2.
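The viscosity regression and the obstacle masking can be illustrated as follows; the architectures of $G_{param}$ and $G_v$ below are stand-ins (the disclosure does not fix layer shapes), and the target viscosity is dummy data.

```python
import torch
import torch.nn as nn

class GParam(nn.Module):
    """Regresses a scalar viscosity from three stacked 2D velocity fields."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)

    def forward(self, u_seq):        # (B, 3 frames x 2 components, H, W)
        return self.head(self.features(u_seq).flatten(1))

g_param = GParam()
u_s = torch.rand(1, 6, 64, 64)       # surface fields u_s^{t, t+1, t+2}
nu_hat = g_param(u_s)
loss = nn.L1Loss()(nu_hat, torch.tensor([[0.05]]))  # L1 training loss

# Obstacle masking on a (stand-in) G_v output: zero velocity in non-fluid cells
vel3d = torch.rand(1, 3, 32, 64, 64)                 # internal 3D velocity field
mask = (torch.rand(1, 1, 32, 64, 64) > 0.2).float()  # 1 = fluid, 0 = obstacle
vel3d = vel3d * mask
```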
Step 104, continuously update and compute a tracking frame data set according to human torso skeletal motion and hand skeletal motion.

Continuously update the tracking frame data set according to the motion of the human torso bones and hand bones. The tracking frame data in the data set comprise a basic tracking data list, which includes the rotation matrix, scaling factor, and displacement changing between tracking frames, together with per-frame hand tracking data such as bone position information, velocity information, palm orientation, and palm sphere radius; specific behavior actions and gestures are recognized from the multi-frame data.

To process the rotation of bone particles, compute for each particle, in each frame, a rotation matrix relative to the center of the bone on which the particle is located.

To process the translation of particles, multiply the velocities of the torso bone and of each hand bone on which the particles are located by different translation coefficients: the closer a bone is to the distal end, the larger its translation coefficient, and conversely the smaller. The motion formula of each particle is:
$$v_{t+1}=v_{t}\times ratio+(p_{t}\times Rot-p_{t}),$$

where $v_t$ is the velocity of the particle at frame $t$, $ratio$ is the translation coefficient of the bone on which the particle is located, $p_t$ is the position of the particle at frame $t$, and $Rot$ is the rotation matrix of the particle.
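A direct transcription of this update follows; `rot` is the per-frame rotation matrix about the bone center and `ratio` the per-bone translation coefficient, both assumed to come from the tracking stage.

```python
import numpy as np

def particle_velocity(v_t, p_t, rot, ratio):
    """v_{t+1} = v_t * ratio + (p_t * Rot - p_t)."""
    return v_t * ratio + (p_t @ rot - p_t)

theta = np.deg2rad(5.0)                        # small per-frame bone rotation
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
v_next = particle_velocity(np.array([0.1, 0.0, 0.0]),   # v_t
                           np.array([0.2, 0.5, 0.0]),   # p_t, relative to bone center
                           rot, ratio=1.2)
print(v_next)
```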
Step 105, control the sensor to measure the real scene to obtain real-environment color information, and establish several virtual light sources in the twin fluid scene to simulate the real light field information, so as to display the digital twin fluid scene.

Control the sensor to measure the real scene to obtain the real-environment color information; this color information is light field information and is stored in image form.

Process the image with an image-space global illumination algorithm, searching for regions with high saturation in all channels and segmenting them.

Using the VPL (virtual point light) algorithm, find the outlines of the radiant spots by contour tracing.

Compute the spot areas and select the large spots as light sources.

Estimate the direction and position of the incoming light according to the spot center positions in the environment image, and establish several virtual light sources in the scene to simulate the real light field information, so as to display the digital twin fluid scene.
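A hedged sketch of the spot extraction with OpenCV follows; the synthetic environment image, the saturation threshold, and the minimum spot area are illustrative assumptions.

```python
import cv2
import numpy as np

# Synthetic stand-in for the captured light-field image: a dark room with
# two bright spots (one large lamp, one small highlight)
env = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(env, (320, 100), 30, (255, 255, 255), -1)
cv2.circle(env, (520, 300), 12, (250, 250, 250), -1)

# Pixels with high values in all channels count as saturated
bright = cv2.inRange(env, (240, 240, 240), (255, 255, 255))

# Contour tracing recovers the outline of each radiant spot
contours, _ = cv2.findContours(bright, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
lights = []
for c in contours:
    area = cv2.contourArea(c)
    if area > 500.0:                           # keep only the large spots
        m = cv2.moments(c)
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        lights.append((cx, cy, area))          # spot center -> light direction
print(lights)  # one virtual point light per retained spot
```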
The foregoing description covers only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combination of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example solutions in which the above features are replaced by (but not limited to) features with similar functions disclosed in the embodiments of the present disclosure.
Claims (4)
1. A virtual-real fusion digital twin fluid phenomenon simulation method, comprising:

collecting an indoor scene depth map and an indoor scene RGB map, and completing the indoor scene depth map according to the indoor scene RGB map to generate a completed indoor scene depth map;

reconstructing a three-dimensional scene point cloud according to the indoor scene RGB map, and determining the point cloud semantics of the three-dimensional scene point cloud;

performing inverse twin construction of the three-dimensional fluid scene based on physical perception to generate a twin fluid scene;

continuously updating and computing a tracking frame data set according to human torso skeletal motion and hand skeletal motion;

controlling a sensor to measure the real scene, obtaining real-environment color information, and establishing several virtual light sources in the twin fluid scene to simulate the real light field information so as to display the digital twin fluid scene;
wherein the reconstructing a three-dimensional scene point cloud according to the indoor scene RGB map and determining the point cloud semantics of the three-dimensional scene point cloud comprises:

extracting, using parallel computation, well-distributed ORB features and their neighborhood ranges on the indoor scene RGB map, and extracting high-dimensional features of multi-level RGB-D image pairs through a two-dimensional convolutional neural network;

acquiring the depth information of the feature points and their neighborhoods from the completed indoor scene depth map, matching and associating the features of adjacent frames through the feature and neighborhood geometric distribution information, and then computing the camera pose under 2D-3D and 3D-3D spatial feature matching through the perspective-n-point method and the iterative closest point method respectively, with the optimization functions:

$$T^{*}=\arg\min_{T}\sum_{i}\left\|u_{i}-K\,T\,p_{i}'\right\|_{2}^{2}\qquad\text{(2D-3D)}$$

$$T^{*}=\arg\min_{T}\sum_{i}\left\|p_{i}-T\,p_{i}'\right\|_{2}^{2}\qquad\text{(3D-3D)}$$

where $u_i$ is the two-dimensional coordinate of the three-dimensional feature point $p_i$ in the image coordinate system, $p_i'$ is the three-dimensional feature associated with $p_i$ under a different view angle, $K$ is the camera intrinsic matrix, $T$ is the camera extrinsic matrix, $T^{*}$ is the sensor pose obtained after optimization, and $\|\cdot\|_2^2$ denotes the squared 2-norm;

extracting key image frames and predicting their poses in the world coordinate system according to the clarity and motion range of the images, and projecting the key frame pixels into three-dimensional space through parallel computation, using the restored high-quality depth information, to form a dense point cloud;

updating and fusing the discrete representations from different view angles in parallel using voxel filtering, to obtain the initial representation of the reconstructed three-dimensional scene point cloud;

encoding the geometric-semantic features of the initial point cloud through a graph neural network model, and predicting the complete geometric information and the point cloud semantic categories with a geometric completion decoder and a semantic classification decoder respectively;
wherein the performing inverse twin construction of the three-dimensional fluid scene based on physical perception to generate a twin fluid scene comprises:

concatenating the fluid surface height field time sequence $h$ with a 2D label $l_s$ that distinguishes fluid and solid regions, inputting them into a stacked autoencoder with inter-layer skip connections, training the network to output the 2D velocity field $u_s$ of the fluid surface, and adopting the $L_2$ norm as the loss function;

combining the height field time sequence $h$ and the 2D label $l_s$, applying a convolutional neural network $G_{param}$ to estimate the fluid viscosity $\hat{\nu}$ from the generated multi-frame surface velocity field $u_s^{\{t,t+1,t+2\}}$, namely:

$$\hat{\nu}=G_{param}\left(u_s^{\{t,t+1,t+2\}},\,h,\,l_s\right)$$

where the loss function uses the $L_1$ norm, and a 3D convolutional network $G_v$ is used at the same time to infer the internal 3D velocity field information along the gravity axis from the surface 2D velocity field;

multiplying the final layer output of the network element-wise by the obstacle mask so that the velocity in non-fluid regions is 0, creating the twin fluid scene.
2. The method of claim 1, wherein the collecting an indoor scene depth map and an indoor scene RGB map and completing the indoor scene depth map according to the indoor scene RGB map to generate a completed indoor scene depth map comprises:

continuously acquiring the indoor scene depth map and the indoor scene RGB map of the indoor scene, taking the valid depth information on the indoor scene depth map as a prior, and, using the geometric range constraints provided by the color information of the indoor scene RGB map, fusing features extracted by a convolutional neural network into the image reconstruction process of an autoencoder to generate a complete, completed indoor scene depth map, so as to improve the depth information quality.
3. The method of claim 1, wherein the continuously updating and computing a tracking frame data set according to human torso skeletal motion and hand skeletal motion comprises:

continuously updating and computing the tracking frame data set according to the motion of the human torso bones and hand bones, wherein the tracking frame data in the tracking frame data set comprise a basic tracking data list, the basic tracking data list comprising the rotation matrix, scaling factor, and displacement changing between tracking frames, together with per-frame hand tracking data such as bone position information, velocity information, palm orientation, and palm sphere radius, and recognizing specific behavior actions and gestures from the multi-frame data;

processing the rotation of bone particles by computing, for each particle in each frame, a rotation matrix relative to the center of the bone on which the particle is located;

processing the translation of particles by multiplying the velocities of the torso bone and of each hand bone on which the particles are located by different translation coefficients, wherein the closer a bone is to the distal end, the larger its translation coefficient, and conversely the smaller, the motion formula of each particle being:

$$v_{t+1}=v_{t}\times ratio+(p_{t}\times Rot-p_{t}),$$

where $v_t$ is the velocity of the particle at frame $t$, $ratio$ is the translation coefficient of the bone on which the particle is located, $p_t$ is the position of the particle at frame $t$, and $Rot$ is the rotation matrix of the particle.
4. The method of claim 1, wherein the controlling a sensor to measure the real scene, obtaining real-environment color information, and establishing several virtual light sources in the twin fluid scene to simulate the real light field information so as to display the digital twin fluid scene comprises:

controlling the sensor to measure the real scene to obtain the real-environment color information, wherein the real-environment color information is light field information and the light field information is stored in image form;

processing the image with an image-space global illumination algorithm, searching for regions with high saturation in all channels, and segmenting them;

finding the outlines of the radiant spots by contour tracing, using the VPL (virtual point light) algorithm;

computing the spot areas and selecting the large spots as light sources;

estimating the direction and position of the incoming light according to the spot center positions in the environment image, and establishing several virtual light sources in the scene to simulate the real light field information so as to display the digital twin fluid scene.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202311014639.0A (CN117115398B) | 2023-08-11 | 2023-08-11 | A virtual-real fusion digital twin fluid phenomenon simulation method |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN117115398A CN117115398A (en) | 2023-11-24 |
| CN117115398B true CN117115398B (en) | 2024-12-24 |
Family
- ID: 88803133
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202311014639.0A (CN117115398B, Active) | A virtual-real fusion digital twin fluid phenomenon simulation method | 2023-08-11 | 2023-08-11 |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN117115398B (en) |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN117809498B (en) * | 2024-01-09 | 2024-08-20 | 北京千乘科技有限公司 | Virtual-real interaction multidimensional twinning projection road network system |
| CN117875181B (en) * | 2024-01-15 | 2025-03-04 | 北京航空航天大学 | A bidirectional fluid-structure interaction calculation method based on masked deep neural network |
| CN119311123A (en) * | 2024-12-17 | 2025-01-14 | 中仪英斯泰克科技有限公司 | Immersive space virtual-reality interaction method and system |
| CN120997371A (en) * | 2025-10-24 | 2025-11-21 | 湖南美创数字科技有限公司 | Real-time image scene virtual-real fusion processing method |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111161410A (en) * | 2019-12-30 | 2020-05-15 | 中国矿业大学(北京) | Mine digital twinning model and construction method thereof |
| CN112417619A (en) * | 2020-11-23 | 2021-02-26 | 江苏大学 | A system and method for optimal operation and adjustment of pump unit based on digital twin |
Family Cites Families (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114417616B (en) * | 2022-01-20 | 2024-11-05 | 青岛理工大学 | A digital twin modeling method and system for assembly robot teleoperation environment |
| CN114970321B (en) * | 2022-04-28 | 2025-03-07 | 长安大学 | A scene flow digital twin method and system based on dynamic trajectory flow |
| CN115512040A (en) * | 2022-08-26 | 2022-12-23 | 中国人民解放军军事科学院国防工程研究院 | Digital twinning-oriented three-dimensional indoor scene rapid high-precision reconstruction method and system |
- 2023-08-11: application CN202311014639.0A filed in China; granted as CN117115398B (active)
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |