
CN118963602B - Real-time image rendering generation system based on generative artificial intelligence - Google Patents

Real-time image rendering generation system based on generative artificial intelligence

Info

Publication number: CN118963602B
Application number: CN202411448712.XA
Authority: CN (China)
Prior art keywords: tsdf, field, lattice point, local pixel, feature
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN118963602A
Inventors: 蒋正浩, 李睿
Current Assignee: Shiyou Beijing Technology Co ltd
Original Assignee: Shiyou Beijing Technology Co ltd
Application filed by Shiyou Beijing Technology Co ltd
Priority to CN202411448712.XA
Publication of CN118963602A (application), CN118963602B (grant)
Application granted

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/0464 - Convolutional networks [CNN, ConvNet]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/005 - General purpose rendering architectures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 - 3D [Three Dimensional] image rendering
    • G06T 15/50 - Lighting effects
    • G06T 15/55 - Radiosity

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract


The present application provides a real-time image rendering generation system based on generative artificial intelligence, which relates to the field of image processing. It uses processing and analysis techniques based on generative artificial intelligence to perform local area division and pixel feature extraction on the implicit surface of the truncated signed distance function TSDF field containing intersection points, and performs dimension reconstruction on the local pixel features of each TSDF field grid point, so as to intelligently generate rendering results based on the gradient-based association-strengthened features of the reconstructed features. In this way, the relationships among grid points can be better captured, more local and global semantic information is taken into account during rendering, rendering quality is improved, and more intelligent image rendering is achieved.

Description

Real-time image rendering generation system based on generative artificial intelligence
Technical Field
The present application relates to the field of image processing, and more particularly, to a real-time image rendering generation system based on generative artificial intelligence.
Background
Image rendering refers to the process of converting a three-dimensional model into a two-dimensional image, typically involving steps such as illumination computation, texture mapping, and shading to produce realistic visual effects. TSDF (truncated signed distance function) technology fuses multi-frame scan data into a volumetric distance field and can efficiently reconstruct and render complex three-dimensional scenes, thereby improving the realism and detail of rendering results. In the prior art, point rendering and triangular mesh rendering are the two common visualization methods for TSDF fields. However, point rendering loses detail at close range and produces unnatural color transitions at long range, while triangular mesh rendering provides more complete detail at a high computational cost that affects the frame rate and the real-time performance of rendering.
To address these technical problems, patent CN115170715A provides an image rendering method, apparatus, electronic device, and medium: a display instruction is received, an image and its parameters are acquired, a ray set is generated and its intersections with the TSDF field are calculated, the lattice points are then rendered using the intersection information, and the result is finally displayed on the interface. This realizes efficient, real-time image rendering and improves rendering quality, addressing prior-art problems such as poor rendering effect, low efficiency, and reduced frame rate.
In the above patent, rendering is performed based on the intersection information of rays with the implicit surface of the TSDF field. However, in a TSDF field each lattice point stores not only distance information but also attributes such as color and normal vector. Rendering based on intersection information alone focuses mainly on the attributes of a single lattice point and ignores the correlation between lattice points. For example, when computing the color of a pixel, an intersection-based method may refer only to the color of the lattice point at the intersection, disregarding the color and other attributes (such as texture) of surrounding lattice points. This can cause loss of detail during rendering, especially for complex surfaces, and can affect the realism and refinement of the rendering result.
Accordingly, an optimized real-time image rendering generation system is desired.
Disclosure of Invention
The present application has been made to solve the above technical problems. The embodiments of the application provide a real-time image rendering generation system based on generative artificial intelligence. It applies generative-AI-based processing and analysis to perform local area division and pixel feature extraction on the implicit surface of the truncated signed distance function (TSDF) field containing intersection points, and performs dimension reconstruction on the local pixel features of each TSDF field lattice point, so that rendering results are generated intelligently from the gradient-based association-strengthened features of the reconstructed features. In this way, the relationships among lattice points can be better captured, more local and global semantic information is taken into account during rendering, rendering quality is improved, and more intelligent image rendering is realized.
According to one aspect of the application, a real-time image rendering generation system based on generative artificial intelligence is provided, comprising: an image-and-parameter acquisition module, configured to respond to a display instruction received in an interactive interface and acquire the image to be rendered and the rendering parameters in the interactive interface; a ray set generation module, configured to generate a ray set based on the image to be rendered and the rendering parameters; a TSDF field implicit surface pre-generation module, configured to pre-generate a truncated signed distance function (TSDF) field implicit surface; an intersection calculation module, configured to calculate the intersection of each ray in the ray set with the TSDF field implicit surface to obtain the TSDF field implicit surface containing intersection points; a rendering module, configured to render the TSDF field lattice points where the intersection points are located, based on the TSDF field implicit surface containing intersection points, to generate a rendering result; and a rendering result display module, configured to display the rendering result on the interactive interface.
The rendering module comprises: a TSDF field implicit surface area dividing unit, configured to perform local area division on the TSDF field implicit surface containing intersection points to obtain a set of TSDF field lattice point local areas; a TSDF field lattice point pixel area extraction and reconstruction unit, configured to perform pixel feature extraction on the set of TSDF field lattice point local areas and perform dimension reconstruction on the extracted set of TSDF field lattice point local pixel feature vectors to obtain TSDF field lattice point local pixel features; a TSDF field lattice point pixel association strengthening unit, configured to perform association strengthening based on pixel gradient distribution on the TSDF field lattice point local pixel features to obtain TSDF field lattice point local pixel association strengthening features; and a rendering result generation unit, configured to obtain the rendering result based on the TSDF field lattice point local pixel association strengthening features.
In the above real-time image rendering generation system based on generative artificial intelligence, the TSDF field implicit surface area dividing unit is configured to divide the TSDF field implicit surface containing intersection points into local areas according to the TSDF field lattice points where the intersection points are located, so as to obtain the set of TSDF field lattice point local areas.
In the above system, the TSDF field lattice point pixel area extraction and reconstruction unit comprises: a TSDF field lattice point local pixel feature extraction subunit, configured to pass the set of TSDF field lattice point local areas through a lattice point local area pixel feature extractor based on a deep neural network model to obtain the set of TSDF field lattice point local pixel feature vectors; and a TSDF field lattice point local pixel feature dimension reconstruction subunit, configured to perform dimension reconstruction on the set of TSDF field lattice point local pixel feature vectors according to the local area division manner to obtain a TSDF field lattice point local pixel feature matrix as the TSDF field lattice point local pixel features.
In the above system, the lattice point local area pixel feature extractor based on a deep neural network model is a lattice point local area pixel feature extractor based on a convolutional neural network model.
In the above system, the TSDF field lattice point pixel association strengthening unit comprises: a TSDF field lattice point local pixel gradient amplitude calculation subunit, configured to calculate the multi-directional gradient value distribution at each position in the TSDF field lattice point local pixel feature matrix and determine the gradient amplitude value at each position based on that distribution, so as to obtain a TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix; a TSDF field lattice point local pixel gradient masking subunit, configured to input the gradient amplitude distribution feature matrix into a gating mask module for masking to obtain a masked TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix; and a TSDF field lattice point local pixel strengthening subunit, configured to use the masked gradient amplitude distribution feature matrix as a weight matrix to weight and strengthen the TSDF field lattice point local pixel feature matrix, so as to obtain the TSDF field lattice point local pixel association strengthening feature matrix.
In the above system, the TSDF field lattice point local pixel gradient amplitude calculation subunit is configured to: calculate the gradient values of the feature values along the width direction of the TSDF field lattice point local pixel feature matrix to obtain a TSDF field lattice point local pixel width-direction feature gradient value feature matrix; calculate the gradient values of the feature values along the height direction to obtain a TSDF field lattice point local pixel height-direction feature gradient value feature matrix; and take the square root of the sum of squares of the feature values at corresponding positions in the two gradient value feature matrices to obtain the TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix.
In the above system, the TSDF field lattice point local pixel gradient masking subunit is configured to compare the feature value at each position of the TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix with a preset threshold to obtain the masked feature matrix: a feature value greater than the preset threshold is set to one, and a feature value less than or equal to the preset threshold is set to zero.
In the above system, the TSDF field lattice point local pixel strengthening subunit is configured to perform position-wise multiplication of the masked TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix with the TSDF field lattice point local pixel feature matrix, and to input the resulting matrix into a sigmoid function for enhancement, so as to obtain the TSDF field lattice point local pixel association strengthening feature matrix.
In the above system, the rendering result generation unit is configured to pass the TSDF field lattice point local pixel association strengthening feature matrix through an AIGC model-based rendering module to obtain the rendering result.
Compared with the prior art, the real-time image rendering generation system based on generative artificial intelligence provided by the application applies generative-AI-based processing and analysis to perform local area division and pixel feature extraction on the implicit surface of the truncated signed distance function TSDF field containing intersection points, and performs dimension reconstruction on the local pixel features of each TSDF field lattice point, so that the rendering result is generated intelligently from the gradient-based association-strengthened features of the reconstructed features. In this way, the relationships among lattice points can be better captured, more local and global semantic information is taken into account during rendering, rendering quality is improved, and more intelligent image rendering is realized.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
In the drawings, FIG. 1 is a system block diagram of a real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application.
FIG. 2 is a block diagram of the rendering module in the real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application.
FIG. 3 is a data flow diagram of the rendering module in the real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application.
FIG. 4 is a block diagram of the TSDF field lattice point pixel area extraction and reconstruction unit in the real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application.
FIG. 5 is a block diagram of the TSDF field lattice point pixel association strengthening unit in the real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
Image rendering converts a three-dimensional model into a two-dimensional image through multiple steps such as illumination computation, texture mapping, and shadow processing in order to create realistic visual effects. TSDF (truncated signed distance function) technology efficiently reconstructs and renders complex three-dimensional scenes by fusing multi-frame scan data into a volumetric distance field, thereby improving the realism and detail of the rendering result. In the prior art, point rendering and triangular mesh rendering are two common TSDF field visualization methods. However, point rendering lacks detail at close range and produces unnatural color transitions at long range, while triangular mesh rendering provides more complete detail at a high computational cost that reduces the frame rate and affects the real-time performance of rendering.
To solve the above technical problems, patent CN115170715A proposes an image rendering method, apparatus, electronic device, and medium, which generate a ray set by receiving a display instruction and acquiring an image and its parameters, calculate the intersections of the rays with a TSDF field, render lattice points based on the intersection information, and finally display the rendering result on the interface. This scheme achieves efficient image rendering, addresses prior-art problems such as poor rendering effect, low efficiency, and negative impact on frame rate, realizes real-time image rendering, and improves rendering quality.
In the above patent, rendering is performed mainly on the basis of intersection information between rays and the TSDF field implicit surface. In the TSDF field, each lattice point stores not only distance information but may also contain attributes such as color and normal vector. If rendering relies only on the intersection information, it may focus excessively on the attributes of a single lattice point and ignore the correlation with surrounding lattice points. This can cause loss of detail during rendering, especially for complex surfaces, and can affect the realism and level of detail of the rendering result.
On this basis, the present application provides a real-time image rendering generation system based on generative artificial intelligence. FIG. 1 is a system block diagram of a real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application. As shown in FIG. 1, the real-time image rendering generation system 100 based on generative artificial intelligence comprises: an image-and-parameter acquisition module 110, configured to acquire the image to be rendered and the rendering parameters in an interactive interface in response to a display instruction received in the interactive interface; a ray set generation module 120, configured to generate a ray set based on the image to be rendered and the rendering parameters; a TSDF field implicit surface pre-generation module 130, configured to pre-generate a truncated signed distance function (TSDF) field implicit surface; an intersection calculation module 140, configured to calculate the intersection of each ray in the ray set with the TSDF field implicit surface to obtain the TSDF field implicit surface containing intersection points; a rendering module 150, configured to render the TSDF field lattice points where the intersection points are located, based on the TSDF field implicit surface containing the intersection points, to generate a rendering result; and a rendering result display module 160, configured to display the rendering result on the interactive interface.
In the embodiment of the present application, the image-and-parameter acquisition module 110 is configured to acquire the image to be rendered and the rendering parameters in the interactive interface in response to a display instruction received in the interactive interface. Specifically, when the user triggers a preset display control on a display page, the display instruction is received and responded to, and the image to be rendered and the rendering parameters are acquired. Here, the display page is the page used to display the rendered image, and the display control is an anchor set in the display page for triggering the display operation; it may be an icon or text, without limitation on its form. It should be understood that, in the present application, the image to be rendered in the interactive interface refers to the screen pixels currently displayed in the interactive interface, i.e. the region to be rendered, which can also be understood as the surface of the TSDF field under the current camera viewing angle. The rendering parameters refer to the extrinsic parameters of the rendering camera (the camera's pose in space, composed of a rotation and a translation) and its intrinsic parameters (the pinhole-camera intrinsics that convert three-dimensional coordinates into two-dimensional image coordinates and depth). Notably, the rendering camera is a simulated camera defined in the program, not a real camera.
In the embodiment of the present application, the ray set generation module 120 is configured to generate a ray set based on the image to be rendered and the rendering parameters. A ray is the half-line obtained by extending, from the rendering camera center, the line segment determined by a pixel of the image to be rendered together with the intrinsic and extrinsic parameters of the rendering camera; the ray set is the collection of such rays. In a specific implementation of the embodiment, the rendering camera center is determined from the rendering parameters, a number of pixels are determined from the image to be rendered, and one ray is determined from the camera center through each pixel, thereby obtaining the ray set.
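As a concrete illustration of this step (a minimal sketch, not taken from the patent; the pinhole model and the camera-to-world convention for K, R, t are assumptions), one world-space ray can be generated per pixel as follows:

```python
import numpy as np

def generate_ray_set(height, width, K, R, t):
    """Generate one ray per pixel of the image to be rendered.

    K    : 3x3 pinhole intrinsic matrix of the rendering camera.
    R, t : camera-to-world rotation (3x3) and translation (3,), i.e. the
           extrinsic pose of the rendering camera.
    Returns (origins, directions): the camera center repeated per pixel and
    unit ray directions in world coordinates.
    """
    # Pixel grid (u, v) at pixel centers.
    v, u = np.meshgrid(np.arange(height) + 0.5, np.arange(width) + 0.5, indexing="ij")
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)

    # Back-project pixels to camera-space directions, then rotate to world space.
    dirs_cam = pixels @ np.linalg.inv(K).T
    dirs_world = dirs_cam @ R.T
    dirs_world /= np.linalg.norm(dirs_world, axis=-1, keepdims=True)

    # All rays share the rendering-camera center as their origin.
    origins = np.broadcast_to(t, dirs_world.shape)
    return origins, dirs_world
```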
In the embodiment of the present application, the TSDF field implicit surface pre-generation module 130 is configured to pre-generate the truncated signed distance function (TSDF) field implicit surface. It should be appreciated that the pre-generated TSDF field implicit surface constructs a continuous surface representation by storing information about the object surface at discrete grid points in three-dimensional space. Each grid point stores a TSDF value, representing the signed distance from that point to the nearest surface, together with information such as the surface normal vector and color. In other words, the TSDF field implicit surface is a surface that carries rich content information. The pre-generated implicit surface preserves the geometric information of the object completely, providing more accurate surface detail at rendering time, and the pre-generated field allows the required distance and direction information to be accessed quickly during rendering, avoiding the performance degradation of computing it in real time.
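A minimal sketch of such a field is shown below; it stores a TSDF value, color, and normal per grid point and, purely for illustration, fills them analytically from a sphere rather than from fused multi-frame scans (the class and field names are assumptions):

```python
import numpy as np

class TSDFField:
    """Discrete TSDF field: each grid point stores a truncated signed
    distance plus color and normal attributes (a simplified stand-in for a
    volume fused from multi-frame scans)."""

    def __init__(self, resolution=64, extent=1.0, trunc=0.05):
        self.res, self.extent, self.trunc = resolution, extent, trunc
        axis = np.linspace(-extent, extent, resolution)
        self.grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
        self.tsdf = np.full((resolution,) * 3, trunc, dtype=np.float32)
        self.color = np.zeros((resolution,) * 3 + (3,), dtype=np.float32)
        self.normal = np.zeros((resolution,) * 3 + (3,), dtype=np.float32)

    def fuse_sphere(self, center, radius, rgb):
        """Analytic example surface: signed distance to a sphere, truncated."""
        d = np.linalg.norm(self.grid - center, axis=-1) - radius
        self.tsdf = np.clip(d, -self.trunc, self.trunc).astype(np.float32)
        n = self.grid - center
        self.normal = n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)
        self.color[:] = rgb
```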
In the embodiment of the present application, the intersection calculation module 140 is configured to calculate the intersection of each ray in the ray set with the TSDF field implicit surface to obtain the TSDF field implicit surface containing intersection points. Note that, for any point in space, if the point has a TSDF value in the field and that value is 0, the point lies on the implicit surface of the TSDF field. In a specific implementation of the embodiment, the set of lattice points traversed by each ray is computed and screened to obtain target lattice points; the TSDF value along each ray is computed from the values and weights of the target lattice points; the intersection of each ray with the implicit surface is determined from where this value crosses zero; and the TSDF field implicit surface containing intersection points is then obtained from the resulting intersections.
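One common way to locate such a zero crossing is to march along the ray and interpolate between the last positive and first non-positive samples. The sketch below assumes a `field.sample(p)` helper returning the trilinearly interpolated TSDF value at a point; the helper, step size, and range are illustrative assumptions, not the patent's procedure:

```python
import numpy as np

def ray_tsdf_intersection(field, origin, direction, step=0.01, t_max=2.0):
    """March along a ray and locate the zero crossing of the TSDF value,
    i.e. the intersection with the implicit surface."""
    t, prev_t, prev_val = 0.0, None, None
    while t < t_max:
        p = origin + t * direction
        val = field.sample(p)
        if prev_val is not None and prev_val > 0.0 >= val:
            # Sign change between the previous and current sample:
            # linearly interpolate the zero crossing between the two points.
            alpha = prev_val / (prev_val - val)
            t_hit = prev_t + alpha * (t - prev_t)
            return origin + t_hit * direction       # intersection point
        prev_t, prev_val = t, val
        t += step
    return None                                      # ray misses the surface
```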
In the embodiment of the present application, the rendering module 150 is configured to render the TSDF field lattice points where the intersection points are located, based on the TSDF field implicit surface containing the intersection points, and to generate the rendering result.
When rendering the TSDF field lattice points where the intersection points are located and generating the rendering result, the technical idea of the application is to apply generative-AI-based processing and analysis to perform local area division and pixel feature extraction on the TSDF field implicit surface containing intersection points, and to perform dimension reconstruction on the local pixel features of each TSDF field lattice point, so that the rendering result is generated intelligently from the gradient-based association-strengthened features of the reconstructed features. In this way, the relationships among lattice points can be better captured, more local and global semantic information is taken into account during rendering, rendering quality is improved, and more intelligent image rendering is realized.
FIG. 2 is a block diagram of the rendering module in the real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application. FIG. 3 is a data flow diagram of the rendering module in the real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application. As shown in FIG. 2 and FIG. 3, the rendering module 150 comprises: a TSDF field implicit surface area dividing unit 151, configured to perform local area division on the TSDF field implicit surface containing intersection points to obtain a set of TSDF field lattice point local areas; a TSDF field lattice point pixel area extraction and reconstruction unit 152, configured to perform pixel feature extraction on the set of TSDF field lattice point local areas and perform dimension reconstruction on the extracted set of TSDF field lattice point local pixel feature vectors to obtain TSDF field lattice point local pixel features; a TSDF field lattice point pixel association strengthening unit 153, configured to perform association strengthening based on pixel gradient distribution on the TSDF field lattice point local pixel features to obtain TSDF field lattice point local pixel association strengthening features; and a rendering result generation unit 154, configured to obtain the rendering result based on the TSDF field lattice point local pixel association strengthening features.
In the embodiment of the present application, the TSDF field implicit surface area dividing unit 151 is configured to perform local area division on the TSDF field implicit surface containing intersection points to obtain the set of TSDF field lattice point local areas. Specifically, the dividing unit divides the implicit surface into local areas according to the TSDF field lattice points where the intersection points are located. It should be understood that different intersection points and their surrounding areas on the implicit surface have different detail characteristics; to process the features of each local area more accurately, only the area around each intersection point is considered, avoiding heavy computation over large regions far from the intersections. That is, intersection positions are usually where rays meet the object surface, which are exactly the areas that need attention during rendering; by taking these intersections and their surrounding areas as local areas, the important detailed portions can be processed more finely, which helps to extract the features within each area.
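As an illustrative sketch (the fixed-size window division and the image-plane representation of the lattice point attributes are assumptions, not the patent's exact procedure), local regions can be cut around the pixels whose rays hit the implicit surface:

```python
import numpy as np

def divide_local_regions(image_plane_features, hit_pixels, half_size=8):
    """Cut a small window around each pixel whose ray hit the implicit
    surface, yielding a set of TSDF field lattice point local regions.

    image_plane_features : (H, W, C) attributes projected to the image plane
                           (e.g. color / normal / depth at the hit lattice points).
    hit_pixels           : list of (row, col) pixel coordinates of intersections.
    """
    H, W, _ = image_plane_features.shape
    regions = []
    for r, c in hit_pixels:
        r0, r1 = max(0, r - half_size), min(H, r + half_size)
        c0, c1 = max(0, c - half_size), min(W, c + half_size)
        patch = image_plane_features[r0:r1, c0:c1]
        # Pad border patches so every local region has the same shape.
        pad = ((0, 2 * half_size - (r1 - r0)), (0, 2 * half_size - (c1 - c0)), (0, 0))
        regions.append(np.pad(patch, pad))
    return regions
```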
In the embodiment of the present application, the TSDF field lattice point pixel area extraction and reconstruction unit 152 is configured to perform pixel feature extraction on the set of TSDF field lattice point local areas and to perform dimension reconstruction on the extracted set of TSDF field lattice point local pixel feature vectors to obtain the TSDF field lattice point local pixel features. Specifically, FIG. 4 is a block diagram of the TSDF field lattice point pixel area extraction and reconstruction unit in the real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application. As shown in FIG. 4, the unit 152 comprises: a TSDF field lattice point local pixel feature extraction subunit 1521, configured to pass the set of TSDF field lattice point local areas through a lattice point local area pixel feature extractor based on a deep neural network model to obtain the set of TSDF field lattice point local pixel feature vectors; and a TSDF field lattice point local pixel feature dimension reconstruction subunit 1522, configured to perform dimension reconstruction on the set of TSDF field lattice point local pixel feature vectors according to the local area division manner to obtain a TSDF field lattice point local pixel feature matrix as the TSDF field lattice point local pixel features.
It should be appreciated that pixel features can represent the key information of each local area, including but not limited to color, texture, edges, and curvature. Therefore, to capture and extract the pixel features contained in each TSDF field lattice point local area and characterize the information in each area more effectively, in the technical solution of the present application the set of TSDF field lattice point local areas is passed through a lattice point local area pixel feature extractor based on a convolutional neural network model to obtain the set of TSDF field lattice point local pixel feature vectors. Correspondingly, in order to better reflect the relationships between local areas in subsequent processing, the set of TSDF field lattice point local pixel feature vectors is subjected to dimension reconstruction according to the local area division manner to obtain the TSDF field lattice point local pixel feature matrix, in which the individual local pixel features are mutually associated and influence one another. In this way, the interdependencies between different features can be better captured, which is important for the subsequent rendering process. A sketch of one possible form of such an extractor and of the dimension reconstruction is given below.
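In the following sketch, the layer sizes and the row-wise stacking used for the dimension reconstruction are assumptions for illustration only:

```python
import torch
import torch.nn as nn

class LatticeRegionFeatureExtractor(nn.Module):
    """Convolutional extractor that maps each TSDF field lattice point local
    region to a feature vector."""

    def __init__(self, in_channels=3, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)

    def forward(self, regions):            # regions: (N, C, h, w)
        feats = self.backbone(regions).flatten(1)
        return self.proj(feats)            # (N, feat_dim) feature vectors

def reconstruct_feature_matrix(feature_vectors):
    """Dimension reconstruction: stack the per-region feature vectors, in the
    order given by the local-region division, into the TSDF field lattice
    point local pixel feature matrix (rows = regions, columns = feature dims)."""
    return torch.stack(list(feature_vectors), dim=0)
```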
In the embodiment of the present application, the TSDF field lattice point pixel association strengthening unit 153 is configured to perform association strengthening based on pixel gradient distribution on the TSDF field lattice point local pixel features to obtain the TSDF field lattice point local pixel association strengthening features. Considering that different positions in the TSDF field lattice point local pixel feature matrix differ in importance and contribution, and in order to more selectively highlight and emphasize the key pixel feature information so as to better capture the details and structure of the local areas, in the technical solution of the present application the TSDF field lattice point local pixel feature matrix is subjected to association strengthening based on pixel gradient distribution to obtain the TSDF field lattice point local pixel association strengthening feature matrix as the association strengthening features. It should be understood that pixel gradients reflect the degree of variation between pixels; through the gradient distribution, detail information such as edges and textures can be captured better, and the association strengthening makes the information in the feature matrix more concentrated and prominent, which helps the subsequent rendering process reflect the real characteristics of the object surface.
In detail, the TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix is first obtained by calculating the gradient amplitude value at each position of the TSDF field lattice point local pixel feature matrix. The gradient amplitude value reflects the degree of intensity variation between pixels: a larger gradient amplitude usually corresponds to an edge or an abrupt change on the object surface, so computing it allows edge and detail features to be captured more accurately and object boundaries to be represented correctly during rendering. Then, in order to strengthen or suppress certain features, so as to highlight or ignore particular details and optimize the final rendering result, the gradient amplitude distribution feature matrix is input into a gating mask module for masking to obtain the masked TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix. The masking process helps select and enhance the features that are critical to the rendering result: features with large gradient amplitudes, which represent important details, are highlighted, while features with small gradient amplitudes, which may represent noise or unimportant details, are suppressed or ignored. Finally, the masked matrix is used as a weight matrix to weight the TSDF field lattice point local pixel feature matrix, assigning different weights to different features of the original matrix, and the weighted result is input into a sigmoid function for enhancement, so that the resulting TSDF field lattice point local pixel association strengthening feature matrix better reflects the real characteristics of the object surface and allows important details to be captured more accurately during rendering.
Specifically, FIG. 5 is a block diagram of the TSDF field lattice point pixel association strengthening unit in the real-time image rendering generation system based on generative artificial intelligence according to an embodiment of the present application. The TSDF field lattice point pixel association strengthening unit 153 comprises: a TSDF field lattice point local pixel gradient amplitude calculation subunit 1531, configured to calculate the multi-directional gradient value distribution at each position in the TSDF field lattice point local pixel feature matrix and determine the gradient amplitude value at each position based on that distribution, so as to obtain the TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix; a TSDF field lattice point local pixel gradient masking subunit 1532, configured to input the gradient amplitude distribution feature matrix into a gating mask module for masking to obtain the masked TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix; and a TSDF field lattice point local pixel strengthening subunit 1533, configured to use the masked gradient amplitude distribution feature matrix as a weight matrix to weight and strengthen the TSDF field lattice point local pixel feature matrix, so as to obtain the TSDF field lattice point local pixel association strengthening feature matrix.
More specifically, in the embodiment of the application, the TSDF field lattice point local pixel gradient amplitude calculation subunit is configured to: calculate the gradient values of the feature values along the width direction of the TSDF field lattice point local pixel feature matrix to obtain a TSDF field lattice point local pixel width-direction feature gradient value feature matrix; calculate the gradient values of the feature values along the height direction to obtain a TSDF field lattice point local pixel height-direction feature gradient value feature matrix; and take the square root of the sum of squares of the feature values at corresponding positions in the two gradient value feature matrices to obtain the TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix.
More specifically, in the embodiment of the application, the TSDF field lattice point local pixel gradient masking subunit is configured to compare a feature value of each position in the TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix with a preset threshold value to obtain the masked TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix, wherein the feature value greater than the preset threshold value is set to be one in response to the feature value being greater than the preset threshold value, and the feature value less than or equal to the preset threshold value is set to be zero in response to the feature value being less than or equal to the preset threshold value.
More specifically, in the embodiment of the application, the local pixel strengthening subunit of the TSDF field lattice point is used for carrying out position point multiplication on the characteristic gradient amplitude distribution characteristic matrix of the local pixel of the masked TSDF field lattice point and the characteristic matrix of the local pixel of the TSDF field lattice point, and inputting the characteristic matrix after the point multiplication into a sigmoid function for strengthening treatment so as to obtain the local pixel association strengthening characteristic matrix of the TSDF field lattice point.
In the embodiment of the application, specifically, the TSDF field lattice point pixel association strengthening unit performs the association strengthening processing based on pixel gradient distribution on the TSDF field lattice point local pixel feature matrix according to the following formulas. Let $M$ denote the TSDF field lattice point local pixel feature matrix with feature value $m_{i,j}$ at position $(i,j)$, let $g_x(i,j)$ and $g_y(i,j)$ denote the gradient values of $M$ along the width and height directions at position $(i,j)$, and let $\tau$ denote the preset threshold. Then
$$G(i,j)=\sqrt{g_x(i,j)^{2}+g_y(i,j)^{2}}$$
is the feature value at position $(i,j)$ of the TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix $G$,
$$G_{m}(i,j)=\mathrm{Mask}\big(G(i,j)\big)=\begin{cases}1, & G(i,j)>\tau\\ 0, & G(i,j)\le \tau\end{cases}$$
is the feature value at each position of the masked TSDF field lattice point local pixel feature gradient amplitude distribution feature matrix $G_{m}$, and
$$M'=\sigma\big(G_{m}\odot M\big)$$
is the TSDF field lattice point local pixel association strengthening feature matrix, where $\odot$ denotes position-wise multiplication and $\sigma$ is the sigmoid function.
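A short sketch of these three steps is given below (the use of zero-padded forward differences and the specific threshold value are assumptions):

```python
import torch

def gradient_association_enhance(feature_matrix, threshold=0.1):
    """Association strengthening based on pixel gradient distribution:
    compute width/height gradients, take their magnitude, gate it with a
    preset threshold, and use the binary mask to re-weight the feature
    matrix before a sigmoid enhancement (following the formulas above)."""
    m = feature_matrix
    # Forward differences along width (columns) and height (rows), zero-padded
    # at the border so the gradient maps keep the original shape.
    gx = torch.zeros_like(m)
    gy = torch.zeros_like(m)
    gx[:, :-1] = m[:, 1:] - m[:, :-1]
    gy[:-1, :] = m[1:, :] - m[:-1, :]

    grad_mag = torch.sqrt(gx ** 2 + gy ** 2)   # gradient amplitude matrix G
    mask = (grad_mag > threshold).float()      # gating mask: 1 above threshold, else 0
    enhanced = torch.sigmoid(mask * m)         # position-wise product, then sigmoid
    return enhanced                            # association strengthening feature matrix
```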
In the embodiment of the present application, the rendering result generation unit 154 is configured to obtain the rendering result based on the TSDF field lattice point local pixel association strengthening features. Specifically, the rendering result generation unit passes the TSDF field lattice point local pixel association strengthening feature matrix through an AIGC model-based rendering module to obtain the rendering result. The AIGC model in the present application specifically refers to an image generation model based on a generative adversarial network (GAN). A GAN consists of two neural networks, a generator and a discriminator. The generator is responsible for generating new data samples, attempting to create samples similar to the real data. The discriminator judges whether an input sample is real or generated: it receives both real and generated samples and outputs a probability indicating how likely the sample is to be real. The generator and the discriminator are trained adversarially: the generator tries to produce increasingly realistic samples, while the discriminator keeps improving its ability to tell them apart. The goal of training is for the generated samples to become realistic enough that the discriminator cannot distinguish them from real ones. In the present application, the TSDF field lattice point local pixel association strengthening feature matrix is input to the generator of the AIGC model, and the generator produces the rendering result by deconvolutional decoding. That is, pixel association strengthening is performed on the TSDF field lattice point local pixel feature matrix to obtain the association strengthening features, and generative processing is then applied to intelligently produce the rendering result. In this way, the relationships among lattice points can be better captured, more local and global semantic information is taken into account during rendering, rendering quality is improved, and more intelligent image rendering is realized.
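The generator side of such a module might look like the sketch below; it is a generic transposed-convolution decoder, where the layer widths, the 8x upsampling factor, and the treatment of the feature matrix as a one-channel map are assumptions, and the discriminator used only during training is omitted:

```python
import torch
import torch.nn as nn

class RenderingGenerator(nn.Module):
    """Sketch of the generator of a GAN-based (AIGC) rendering module: the
    lattice point local pixel association strengthening feature matrix is
    treated as a low-resolution feature map and decoded into the rendered
    image by transposed convolutions."""

    def __init__(self, in_channels=1, out_channels=3):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(in_channels, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, enhanced_matrix):                # (H, W) feature matrix
        x = enhanced_matrix.unsqueeze(0).unsqueeze(0)  # -> (1, 1, H, W)
        return self.decoder(x)                         # -> (1, 3, 8H, 8W) rendered image
```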
In particular, each TSDF field lattice point local pixel feature vector in the set represents the image semantics of one local image-semantic spatial region of the TSDF field implicit surface containing intersection points. When dimension reconstruction and association strengthening based on pixel gradient distribution are applied to these vectors, the resulting composite image-semantic representation of the TSDF field lattice point local pixel association strengthening feature matrix acquires a complex spatial structure, built from the different local spatial-distribution strengthening structures and their pixel-gradient association differences. It is therefore desirable to improve the generative regression convergence and generalization of the TSDF field lattice point local pixel association strengthening feature matrix under this complex spatial structure.
Preferably, in one example of the application, passing the TSDF field lattice point local pixel association strengthening feature matrix through the AIGC model-based rendering module to obtain the rendering result comprises: calculating the sum of the absolute values of the feature values of the TSDF field lattice point local pixel association strengthening feature matrix to obtain a first spatial structure value; calculating the square root of the sum of squares of the feature values to obtain a second spatial structure value; multiplying each feature value of the matrix by the first and second spatial structure values to obtain, for each feature value, a first and a second structure reference value; dividing the first structure reference value by the difference between the first spatial structure value and the first scale transformation value to obtain a first transformation adjustment value, and dividing the second structure reference value by the difference between the second spatial structure value and the second scale transformation value to obtain a second transformation adjustment value; calculating the weighted sum of the first and second transformation adjustment values to obtain each feature value of the optimized TSDF field lattice point local pixel association strengthening feature matrix; and passing the optimized feature matrix through the AIGC model-based rendering module to obtain the rendering result.
Here, with $M$ denoting the TSDF field lattice point local pixel association strengthening feature matrix, $m_{i,j}$ its feature value at each position, $W$ and $H$ its width and height, $N=W\times H$ the number of feature values, and $v_1$ and $v_2$ the first and second spatial structure values,
$$v_1=\sum_{i,j}\lvert m_{i,j}\rvert,\qquad v_2=\sqrt{\sum_{i,j} m_{i,j}^{2}},$$
and with $u^{(1)}_{i,j}$ and $u^{(2)}_{i,j}$ the first and second transformation adjustment values at each position, collected into the first and second transformation adjustment matrices $M_1$ and $M_2$, the optimized TSDF field lattice point local pixel association strengthening feature matrix $\widetilde{M}$ is obtained as the weighted combination
$$\widetilde{M}=\alpha\cdot M_1\oplus(1-\alpha)\cdot M_2,$$
where $\oplus$ denotes position-wise addition, $\odot$ denotes position-wise multiplication (used in forming the structure reference values), and $\alpha$ is a weighting hyperparameter.
That is, in this preferred example, the feature value set of the TSDF field lattice point local pixel association strengthening feature matrix is considered with respect to its spatial structure information in the high-dimensional space. Using the normative spatially structured representation of the matrix as a reference window, a scale-based frame transformation is applied to each feature value, and a frame-attention weight adjustment is made based on the spatial structure of each feature value. This preserves the spatial transformation (translation, scaling, and rotation) invariance of the matrix under feature-space interaction, improving the convergence and generalization of the generative regression of its feature set under the complex spatial structure representation, and thereby improving the image quality of the rendering result obtained through the AIGC model-based rendering module. In this way, the relationships among lattice points can be better captured, more local and global semantic information is taken into account during rendering, rendering quality is improved, and more intelligent image rendering is realized.
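A minimal sketch of this optimization, assuming the L1/L2-type spatial structure values and the final weighted sum described above, and leaving the scale transformation values `s1` and `s2` as caller-supplied inputs (their derivation is not reproduced here), could be:

```python
import torch

def optimize_enhancement_matrix(M, s1, s2, alpha=0.5):
    """Weighted, norm-based adjustment of the association strengthening
    feature matrix before it enters the AIGC rendering module. `alpha` is
    the weighting hyperparameter."""
    v1 = M.abs().sum()                 # first spatial structure value  (L1-type)
    v2 = torch.sqrt((M ** 2).sum())    # second spatial structure value (L2-type)

    r1 = M * v1                        # first structure reference values
    r2 = M * v2                        # second structure reference values

    u1 = r1 / (v1 - s1)                # first transformation adjustment values
    u2 = r2 / (v2 - s2)                # second transformation adjustment values

    return alpha * u1 + (1.0 - alpha) * u2   # optimized feature matrix
```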
In the embodiment of the present application, the rendering result display module 160 is configured to display the rendering result on the interactive interface. Specifically, the rendering result includes a depth map, a color map, and a normal map; the final picture is composed from the depth map, the color map, and the normal map and then displayed.
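The text does not prescribe how the three maps are combined into the displayed picture; the sketch below shows one assumed way to compose them (Lambertian shading derived from the normal map modulating the color map, with a mild depth-based attenuation) on synthetic placeholder data. The function name, light direction, and attenuation factor are illustrative assumptions, not part of the described system.

```python
import numpy as np
import matplotlib.pyplot as plt

def compose_final_picture(depth, color, normals, light_dir=(0.0, 0.0, 1.0)):
    """Assumed compositing of depth (H x W), color (H x W x 3) and
    normal (H x W x 3) maps into a displayable picture."""
    light = np.asarray(light_dir, dtype=np.float32)
    light /= np.linalg.norm(light)

    # Lambertian shading term from the normal map.
    shading = np.clip((normals * light).sum(axis=-1), 0.0, 1.0)

    # Simple depth attenuation (far pixels darkened slightly).
    depth_norm = (depth - depth.min()) / (depth.max() - depth.min() + 1e-6)
    attenuation = 1.0 - 0.3 * depth_norm

    return np.clip(color * shading[..., None] * attenuation[..., None], 0.0, 1.0)

# Synthetic example data standing in for the module's real inputs.
H, W = 128, 128
depth = np.linspace(0.0, 1.0, H * W).reshape(H, W)
color = np.random.rand(H, W, 3)
normals = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))

plt.imshow(compose_final_picture(depth, color, normals))
plt.axis("off")
plt.show()
```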
In summary, the real-time image rendering generation system 100 based on the generated artificial intelligence according to the embodiment of the present application has been illustrated. It adopts processing and analysis techniques based on generative artificial intelligence to perform local region division and pixel feature extraction on the implicit surface of the truncated directed distance function TSDF field containing the intersection points, performs dimension reconstruction on the local pixel features of each TSDF field lattice point, and intelligently generates the rendering result from the association strengthening features of the pixel gradients within the reconstructed features. In this way, the relations among the lattice points can be better captured, so that more local and global semantic information is taken into account during rendering, improving the rendering quality and achieving more intelligent image rendering.
As described above, the real-time image rendering generation system 100 based on the generated artificial intelligence according to the embodiment of the present application may be implemented in various terminal devices, for example, a server or the like for real-time image rendering generation based on the generated artificial intelligence. In one example, the real-time image rendering generation system 100 based on generated artificial intelligence according to an embodiment of the present application may be integrated into a terminal device as one software module and/or hardware module. For example, the real-time image rendering generation system 100 based on the generated artificial intelligence may be a software module in the operating system of the terminal device or may be an application developed for the terminal device, and of course, the real-time image rendering generation system 100 based on the generated artificial intelligence may be one of a plurality of hardware modules of the terminal device.
Alternatively, in another example, the real-time image rendering generation system 100 based on the generated artificial intelligence and the terminal device may be separate devices, in which case the real-time image rendering generation system 100 based on the generated artificial intelligence may be connected to the terminal device through a wired and/or wireless network and exchange interaction information with it in an agreed data format.

Claims (9)

1. A real-time image rendering generation system based on generation type artificial intelligence, characterized by comprising: an image-to-be-rendered parameter acquisition module, used for responding to a display instruction received in an interactive interface to acquire an image to be rendered and rendering parameters in the interactive interface; a ray set generation module, used for generating a ray set based on the image to be rendered and the rendering parameters; a TSDF field implicit surface pre-generation module, used for pre-generating a truncated directional distance function TSDF field implicit surface; an intersection point calculation module, used for calculating the intersection point of each ray in the ray set with the truncated directional distance function TSDF field implicit surface to obtain the truncated directional distance function TSDF field implicit surface containing intersection points; a rendering module, used for rendering the TSDF field lattice points where the intersection points are located based on the truncated directional distance function TSDF field implicit surface containing intersection points to generate a rendering result; and a rendering result display module, used for displaying the rendering result on the interactive interface; wherein the rendering module comprises: a TSDF field implicit surface area division unit, used for performing local area division on the truncated directional distance function TSDF field implicit surface containing intersection points to obtain a set of TSDF field lattice point local areas; a TSDF field lattice point pixel region extraction and reconstruction unit, used for extracting pixel features of the set of TSDF field lattice point local areas and performing dimension reconstruction on the extracted set of TSDF field lattice point local pixel feature vectors to obtain a TSDF field lattice point local pixel feature; a TSDF field lattice point pixel association strengthening unit, used for performing gradient-based association strengthening on the TSDF field lattice point local pixel feature to obtain a TSDF field lattice point local pixel association strengthening feature; and a rendering result generation unit, used for obtaining the rendering result based on the TSDF field lattice point local pixel association strengthening feature.
2. The real-time image rendering generation system based on the generated artificial intelligence of claim 1, wherein the TSDF field implicit surface area dividing unit is used for dividing the local area of the truncated directed distance function TSDF field implicit surface containing the intersection points according to the TSDF field lattice point where the intersection points are located so as to obtain the set of the local areas of the TSDF field lattice point.
3. The real-time image rendering generation system based on the generation type artificial intelligence of claim 2, wherein the TSDF field lattice point pixel region extraction and reconstruction unit comprises: a TSDF field lattice point local pixel feature extraction subunit, used for passing each TSDF field lattice point local area in the set of TSDF field lattice point local areas through a grid point local area pixel feature extractor based on a deep neural network model to obtain the set of TSDF field lattice point local pixel feature vectors; and a TSDF field lattice point local pixel feature dimension reconstruction subunit, used for performing dimension reconstruction on the set of TSDF field lattice point local pixel feature vectors according to the local region division mode to obtain a TSDF field lattice point local pixel feature matrix as the TSDF field lattice point local pixel feature.
4. The real-time image rendering generation system based on the generated artificial intelligence of claim 3, wherein the grid point local area pixel feature extractor based on the deep neural network model is a grid point local area pixel feature extractor based on a convolutional neural network model.
5. The real-time image rendering generation system based on the generation type artificial intelligence of claim 4, wherein the TSDF field lattice point pixel association strengthening unit comprises: a TSDF field lattice point local pixel gradient magnitude calculation subunit, used for calculating the multi-directional gradient value distribution of each position in the TSDF field lattice point local pixel feature matrix and determining the gradient magnitude value of each position based on the multi-directional gradient value distribution of that position to obtain a TSDF field lattice point local pixel feature gradient magnitude distribution feature matrix; a TSDF field lattice point local pixel gradient masking subunit, used for inputting the TSDF field lattice point local pixel feature gradient magnitude distribution feature matrix into a gating masking module for masking to obtain a masked TSDF field lattice point local pixel feature gradient magnitude distribution feature matrix; and a TSDF field lattice point local pixel reinforcement subunit, used for weighting the TSDF field lattice point local pixel feature matrix with the masked TSDF field lattice point local pixel feature gradient magnitude distribution feature matrix as a weighting matrix to obtain the TSDF field lattice point local pixel association strengthening feature matrix.
6. The real-time image rendering generation system based on the generation type artificial intelligence of claim 5, wherein the TSDF field lattice point local pixel gradient magnitude calculation subunit is configured to calculate a gradient magnitude value of each feature value in the TSDF field lattice point local pixel feature matrix in a width direction to obtain a TSDF field lattice point local pixel width direction feature gradient value feature matrix, calculate a gradient magnitude value of each feature value in the TSDF field lattice point local pixel feature matrix in a height direction to obtain a TSDF field lattice point local pixel height direction feature gradient value feature matrix, and calculate a square root of a square sum of feature values of corresponding positions in the TSDF field lattice point local pixel width direction feature gradient value feature matrix and the TSDF field lattice point local pixel height direction feature gradient value feature matrix to obtain the TSDF field lattice point local pixel feature gradient magnitude distribution feature matrix.
7. The real-time image rendering generation system based on the generation type artificial intelligence of claim 6, wherein the TSDF field lattice point local pixel gradient masking subunit is configured to compare the feature value of each position in the TSDF field lattice point local pixel feature gradient magnitude distribution feature matrix with a preset threshold to obtain the masked TSDF field lattice point local pixel feature gradient magnitude distribution feature matrix, wherein, in response to the feature value being greater than the preset threshold, the feature value is set to one, and in response to the feature value being less than or equal to the preset threshold, the feature value is set to zero.
8. The real-time image rendering generation system based on the generation type artificial intelligence of claim 7, wherein the TSDF field lattice point local pixel reinforcement subunit is configured to perform position-wise multiplication of the masked TSDF field lattice point local pixel feature gradient magnitude distribution feature matrix and the TSDF field lattice point local pixel feature matrix, and to input the resulting feature matrix into a Sigmoid function for reinforcement processing to obtain the TSDF field lattice point local pixel association strengthening feature matrix.
9. The real-time image rendering generation system based on the generated artificial intelligence of claim 8, wherein the rendering result generation unit is configured to pass the TSDF field lattice point local pixel association strengthening feature matrix through an AIGC model-based rendering module to obtain the rendering result.
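Taken together, claims 3 through 8 describe a concrete feature pipeline inside the rendering module: a CNN-based extractor applied per lattice point local area, dimension reconstruction into a feature matrix, gradient magnitude computation, threshold-based gating, position-wise weighting, and sigmoid reinforcement. The PyTorch sketch below walks through these steps under stated assumptions: the network architecture, the choice of one scalar feature per region, the use of torch.gradient as the finite-difference scheme, and the threshold value are all placeholders rather than the claimed design.

```python
import torch
import torch.nn as nn

# Claims 3/4: placeholder CNN-based grid point local area pixel feature extractor.
extractor = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 1),                               # one scalar feature per local region (assumed)
)

def feature_matrix_from_regions(regions, grid_h, grid_w):
    """regions: (grid_h*grid_w, 3, h, w) local areas in row-major grid order.
    Returns the TSDF field lattice point local pixel feature matrix (grid_h x grid_w)."""
    with torch.no_grad():
        vectors = extractor(regions)               # (grid_h*grid_w, 1)
    return vectors.reshape(grid_h, grid_w)         # dimension reconstruction

# Claim 6: feature gradient magnitude distribution feature matrix.
def gradient_magnitude(F):
    gy, gx = torch.gradient(F)                     # height- and width-direction gradients
    return torch.sqrt(gx ** 2 + gy ** 2)           # square root of the sum of squares

# Claims 7-8: gating mask, position-wise weighting and sigmoid reinforcement.
def association_strengthen(F, threshold=0.1):
    G = gradient_magnitude(F)
    mask = (G > threshold).to(F.dtype)             # 1 above the preset threshold, else 0
    return torch.sigmoid(mask * F)                 # strengthened feature matrix

# Example run on a synthetic 6x6 grid of 16x16 RGB local regions.
regions = torch.rand(36, 3, 16, 16)
F = feature_matrix_from_regions(regions, grid_h=6, grid_w=6)
S = association_strengthen(F, threshold=0.1)
```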

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411448712.XA CN118963602B (en) 2024-10-17 2024-10-17 Real-time image rendering generation system based on generation type artificial intelligence


Publications (2)

Publication Number Publication Date
CN118963602A CN118963602A (en) 2024-11-15
CN118963602B (en) 2024-12-06





Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant