CN114155336B - Virtual object display method, device, electronic device and storage medium
- Publication number: CN114155336B (application CN202010833390.6A)
- Authority
- CN
- China
- Prior art keywords
- parameter
- virtual object
- parameters
- position point
- map
- Prior art date
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/506—Illumination models
Abstract
The disclosure relates to a virtual object display method and device, an electronic device, and a storage medium, and belongs to the field of computer technology. The method includes: obtaining object parameters of a virtual object; obtaining diffuse reflection parameters and reflection parameters of the virtual object according to normal vectors and illumination vectors of a plurality of position points; determining a target map that matches the diffuse reflection parameters and the reflection parameters; and adding the target map to the surface of the virtual object and displaying the virtual object with the target map added. Because the method takes into account the influence of the reflection phenomenon on the display effect of the virtual object, the amount of information is increased, and the determined target map reflects the brightness of the virtual object more accurately, so that the displayed virtual object is more vivid and its display effect is improved.
Description
Technical Field
The disclosure relates to the field of computer technology, and in particular to a virtual object display method and device, an electronic device, and a storage medium.
Background
With the development of computer technology, virtual objects are widely displayed in fields such as electronic games and virtual reality. To make the display effect of a virtual object more natural, a map needs to be added to its surface to simulate the effect of the surface being irradiated by a virtual light source.
When displaying a virtual object, the related art first acquires diffuse reflection parameters of the virtual object, which reflect the brightness of its surface; determines a target map that matches the diffuse reflection parameters; and adds the target map to the surface of the virtual object, thereby displaying the virtual object with the target map added.
However, this scheme considers only the diffuse reflection parameters, which carry a small amount of information, so the determined target map is not accurate enough and the display effect of the virtual object is poor.
Disclosure of Invention
The disclosure provides a virtual object display method and device, an electronic device, and a storage medium, which improve the display effect of a virtual object.
According to an aspect of the embodiments of the present disclosure, there is provided a virtual object display method, including:
obtaining object parameters of a virtual object, where the object parameters include at least normal vectors and illumination vectors of a plurality of position points on the surface of the virtual object, the illumination vector of a position point being a vector in the direction in which light irradiates that point;
obtaining diffuse reflection parameters and reflection parameters of the virtual object according to the normal vectors and illumination vectors of the plurality of position points, the diffuse reflection parameters and the reflection parameters being negatively correlated;
determining a target map that matches the diffuse reflection parameters and the reflection parameters;
and adding the target map to the surface of the virtual object and displaying the virtual object with the target map added.
In one possible implementation, the diffuse reflection parameters of the virtual object include diffuse reflection sub-parameters of the plurality of position points, and the reflection parameters of the virtual object include reflection sub-parameters of the plurality of position points;
the obtaining diffuse reflection parameters and reflection parameters of the virtual object according to the normal vectors and illumination vectors of the plurality of position points includes:
for each position point, obtaining the diffuse reflection sub-parameter of the position point according to the normal vector and illumination vector of the position point;
performing inverse processing on the diffuse reflection sub-parameter to obtain a first reflection sub-parameter of the position point;
adjusting the first reflection sub-parameter according to a preset first adjustment parameter to obtain a second reflection sub-parameter of the position point;
combining the diffuse reflection sub-parameters of the plurality of position points to obtain the diffuse reflection parameters;
and combining the second reflection sub-parameters of the plurality of position points to obtain the reflection parameters.
In another possible implementation, the obtaining the diffuse reflection sub-parameter of the position point according to the normal vector and illumination vector of the position point includes:
taking the dot product of the normal vector and the illumination vector as a first value;
and determining the larger of the first value and 0 as the diffuse reflection sub-parameter of the position point.
In another possible implementation, the diffuse reflection parameters of the virtual object include diffuse reflection sub-parameters of the plurality of position points, and the reflection parameters include reflection sub-parameters of the plurality of position points; the determining a target map that matches the diffuse reflection parameters and the reflection parameters includes:
obtaining the luminance sub-parameter of each position point according to the diffuse reflection sub-parameter and reflection sub-parameter of that point;
obtaining the weights of a plurality of alternative maps according to the luminance sub-parameter of each position point and the sequence numbers of the alternative maps, where the sequence numbers are assigned after the alternative maps are sorted in descending order of brightness, the sequence number of an alternative map represents its brightness, and different alternative maps have different sequence numbers;
and determining the target map according to the weights of the alternative maps corresponding to each position point.
In another possible implementation, the obtaining the luminance sub-parameter of each position point according to its diffuse reflection sub-parameter and reflection sub-parameter includes:
for each position point, adding the diffuse reflection sub-parameter and the reflection sub-parameter of the position point to obtain a first luminance sub-parameter of the position point;
and multiplying the first luminance sub-parameter by the number of alternative maps to obtain a second luminance sub-parameter of the position point.
In another possible implementation, the obtaining the weights of the alternative maps according to the luminance sub-parameter of each position point and the sequence numbers of the alternative maps includes:
for each position point, determining a matching parameter of each alternative map according to the difference between the luminance sub-parameter of the position point and the sequence number of that alternative map, where the matching parameter characterizes how well the brightness of the alternative map matches the brightness of the position point;
taking the difference between the matching parameters of any two adjacent alternative maps as the weight of the first of those two maps, and taking the matching parameter of the last alternative map as its own weight, where two adjacent alternative maps are neighboring maps among the alternative maps sorted in descending order of brightness.
In another possible implementation, the method further includes:
fusing alternative maps with adjacent sequence numbers, in order of sequence number, to obtain at least one fusion map, where at least one of the red, green, or blue channels of a fusion map corresponds to one alternative map;
the determining the matching parameter of each alternative map according to the difference between the luminance sub-parameter of the position point and the sequence number of that alternative map includes:
acquiring a sequence number matrix of each fusion map, the sequence number matrix containing the sequence number of each alternative map in that fusion map;
acquiring a luminance sub-parameter matrix containing a plurality of luminance sub-parameters, their number equal to the number of alternative maps contained in the fusion map;
and obtaining the difference between the luminance sub-parameter matrix and the sequence number matrix of each fusion map to determine a matching parameter matrix, the matching parameter matrix containing the matching parameter of each alternative map.
In another possible implementation, the determining the matching parameter of the alternative map according to the difference between the luminance sub-parameter of the position point and the sequence number of the alternative map includes:
in response to the difference being greater than a first reference value, taking the first reference value as the matching parameter of the alternative map; or
in response to the difference being less than a second reference value, taking the second reference value as the matching parameter of the alternative map; or
in response to the difference being neither greater than the first reference value nor less than the second reference value, taking the difference itself as the matching parameter of the alternative map.
In another possible implementation, the determining the target map according to the weight of each alternative map includes:
taking the alternative map with the largest weight as the target map; or
performing weighted fusion of the alternative maps according to the weight of each alternative map and taking the fused map as the target map.
In another possible implementation, the object parameters further include view angle vectors of the plurality of position points, the view angle vector of a position point being a vector in the direction of the virtual camera relative to that point;
before the adding the target map to the surface of the virtual object and displaying the virtual object with the target map added, the method further includes:
acquiring highlight parameters of the virtual object according to the normal vectors, illumination vectors, and view angle vectors of the plurality of position points, where the highlight parameters characterize the brightest position point on the surface of the virtual object;
the adding the target map to the surface of the virtual object and displaying the virtual object with the target map added includes:
adding the target map to the surface of the virtual object, and displaying the virtual object with the target map added according to the highlight parameters.
In another possible implementation, the obtaining the highlight parameters of the virtual object according to the normal vectors, illumination vectors, and view angle vectors of the plurality of position points includes:
for each position point, taking the dot product of the sum vector of the view angle vector and illumination vector of the position point with the normal vector of the position point as a second value;
taking the product of the larger of the second value and 0 with the illumination vector as a first highlight sub-parameter;
and adjusting the first highlight sub-parameter according to a preset second adjustment parameter to obtain a second highlight sub-parameter.
In another possible implementation, after the determining the target map that matches the diffuse reflection parameters and the reflection parameters, the method further includes:
extending each position point by a preset distance along its normal direction to form an extended region outside the surface of the virtual object, and filling a reference color into the extended region;
the adding the target map to the surface of the virtual object and displaying the virtual object with the target map added includes:
adding the target map to the surface of the virtual object, and displaying the virtual object with the target map added together with the extended region outside its surface.
According to still another aspect of the embodiments of the present disclosure, there is provided a virtual object display apparatus including:
a first parameter acquisition unit configured to acquire object parameters of a virtual object, the object parameters including at least normal vectors and illumination vectors of a plurality of position points on the surface of the virtual object, the illumination vector of a position point being a vector in the direction in which light irradiates that point;
a second parameter acquisition unit configured to acquire diffuse reflection parameters and reflection parameters of the virtual object according to the normal vectors and illumination vectors of the plurality of position points, where the diffuse reflection parameters and the reflection parameters are negatively correlated;
a target map determining unit configured to determine a target map that matches the diffuse reflection parameters and the reflection parameters;
and a virtual object display unit configured to add the target map to the surface of the virtual object and display the virtual object with the target map added.
In one possible implementation, the diffuse reflection parameters of the virtual object include diffuse reflection sub-parameters of the plurality of position points, and the reflection parameters of the virtual object include reflection sub-parameters of the plurality of position points;
the second parameter acquisition unit is configured to perform:
for each position point, obtaining the diffuse reflection sub-parameter of the position point according to the normal vector and illumination vector of the position point;
performing inverse processing on the diffuse reflection sub-parameter to obtain a first reflection sub-parameter of the position point;
adjusting the first reflection sub-parameter according to a preset first adjustment parameter to obtain a second reflection sub-parameter of the position point;
combining the diffuse reflection sub-parameters of the plurality of position points to obtain the diffuse reflection parameters;
and combining the second reflection sub-parameters of the plurality of position points to obtain the reflection parameters.
In another possible implementation, the second parameter acquisition unit is configured to perform:
taking the dot product of the normal vector and the illumination vector as a first value;
and determining the larger of the first value and 0 as the diffuse reflection sub-parameter of the position point.
In another possible implementation, the diffuse reflection parameters of the virtual object include diffuse reflection sub-parameters of the plurality of position points, and the reflection parameters include reflection sub-parameters of the plurality of position points; the target map determining unit includes:
a luminance sub-parameter acquisition subunit configured to obtain the luminance sub-parameter of each position point according to the diffuse reflection sub-parameter and reflection sub-parameter of that point;
a weight acquisition subunit configured to obtain the weights of a plurality of alternative maps according to the luminance sub-parameter of each position point and the sequence numbers of the alternative maps, where the sequence numbers are assigned after the alternative maps are sorted in descending order of brightness, the sequence number of an alternative map represents its brightness, and different alternative maps have different sequence numbers;
and a map determining subunit configured to determine the target map according to the weights of the alternative maps corresponding to each position point.
In another possible implementation, the luminance sub-parameter acquisition subunit is configured to perform:
for each position point, adding the diffuse reflection sub-parameter and the reflection sub-parameter of the position point to obtain a first luminance sub-parameter of the position point;
and multiplying the first luminance sub-parameter by the number of alternative maps to obtain a second luminance sub-parameter of the position point.
In another possible implementation, the weight acquisition subunit is configured to perform:
for each position point, determining a matching parameter of each alternative map according to the difference between the luminance sub-parameter of the position point and the sequence number of that alternative map, where the matching parameter characterizes how well the brightness of the alternative map matches the brightness of the position point;
taking the difference between the matching parameters of any two adjacent alternative maps as the weight of the first of those two maps, and taking the matching parameter of the last alternative map as its own weight, where two adjacent alternative maps are neighboring maps among the alternative maps sorted in descending order of brightness.
In another possible implementation, the apparatus further includes:
a map fusion unit configured to fuse alternative maps with adjacent sequence numbers, in order of sequence number, to obtain at least one fusion map, where at least one of the red, green, or blue channels of a fusion map corresponds to one alternative map;
the weight acquisition subunit is configured to perform:
acquiring a sequence number matrix of each fusion map, the sequence number matrix containing the sequence number of each alternative map in that fusion map;
acquiring a luminance sub-parameter matrix containing a plurality of luminance sub-parameters, their number equal to the number of alternative maps contained in the fusion map;
and obtaining the difference between the luminance sub-parameter matrix and the sequence number matrix of each fusion map to determine a matching parameter matrix, the matching parameter matrix containing the matching parameter of each alternative map.
In another possible implementation, the weight acquisition subunit is configured to perform:
in response to the difference being greater than a first reference value, taking the first reference value as the matching parameter of the alternative map; or
in response to the difference being less than a second reference value, taking the second reference value as the matching parameter of the alternative map; or
in response to the difference being neither greater than the first reference value nor less than the second reference value, taking the difference itself as the matching parameter of the alternative map.
In another possible implementation, the map determining subunit is configured to perform:
taking the alternative map with the largest weight as the target map; or
performing weighted fusion of the alternative maps according to the weight of each alternative map and taking the fused map as the target map.
In another possible implementation, the object parameters further include view angle vectors of the plurality of position points, the view angle vector of a position point being a vector in the direction of the virtual camera relative to that point;
the apparatus further includes:
a third parameter acquisition unit configured to acquire highlight parameters of the virtual object according to the normal vectors, illumination vectors, and view angle vectors of the plurality of position points, where the highlight parameters characterize the brightest position point on the surface of the virtual object;
and the virtual object display unit is configured to add the target map to the surface of the virtual object and display the virtual object with the target map added according to the highlight parameters.
In another possible implementation, the third parameter acquisition unit is configured to perform:
for each position point, taking the dot product of the sum vector of the view angle vector and illumination vector of the position point with the normal vector of the position point as a second value;
taking the product of the larger of the second value and 0 with the illumination vector as a first highlight sub-parameter;
and adjusting the first highlight sub-parameter according to a preset second adjustment parameter to obtain a second highlight sub-parameter.
In another possible implementation, the apparatus further includes:
an extended region acquisition unit configured to extend each position point by a preset distance along its normal direction, form an extended region outside the surface of the virtual object, and fill a reference color into the extended region;
the virtual object display unit is configured to add the target map to the surface of the virtual object, and display the virtual object with the target map added together with the extended region outside its surface.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
One or more processors;
volatile or non-volatile memory for storing instructions executable by the one or more processors;
Wherein the one or more processors are configured to perform the virtual object display method of the above aspect.
According to still another aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the virtual object display method of the above aspect.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer program product comprising instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the virtual object display method of the above aspect.
According to the virtual object display method and device, electronic device, and storage medium provided above, both the diffuse reflection parameters and the reflection parameters of a virtual object can be acquired. Because both sets of parameters reflect the brightness of the virtual object, determining the matching target map from the diffuse reflection parameters and the reflection parameters, rather than from the diffuse reflection parameters alone, takes into account the influence of the reflection phenomenon on the display effect and increases the amount of available information. The determined target map therefore reflects the brightness of the virtual object more accurately, so that when the virtual object with the target map added is displayed, it appears more vivid, improving the display effect of the virtual object.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating a virtual object display method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating another virtual object display method according to an exemplary embodiment.
FIG. 3 is a schematic diagram illustrating object parameters for a location point according to an exemplary embodiment.
FIG. 4 is a schematic diagram of a virtual object shown according to an example embodiment.
FIG. 5 is a schematic diagram of an alternative map shown according to an example embodiment.
FIG. 6 is a schematic diagram of a fusion map shown according to an exemplary embodiment.
Fig. 7 is a schematic diagram illustrating a highlight phenomenon according to an exemplary embodiment.
FIG. 8 is a schematic diagram of a sketch image shown according to an example embodiment.
Fig. 9 is a schematic diagram of a virtual object shown according to an example embodiment.
Fig. 10 is a block diagram illustrating a virtual object display apparatus according to an exemplary embodiment.
Fig. 11 is a block diagram of another virtual object display apparatus according to an exemplary embodiment.
Fig. 12 is a block diagram of a terminal according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description of the present disclosure and the claims and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The user information (including but not limited to user equipment information, user personal information, etc.) related to the present disclosure is information authorized by the user or sufficiently authorized by each party.
In order to facilitate understanding of the technical process of the embodiments of the present disclosure, the following explains nouns related to the embodiments of the present disclosure:
Real-time rendering (Realtime Rendering): a model rendering technique that draws three-dimensional data into two-dimensional images according to graphics algorithms and displays them in real time; it is applied in virtual reality, virtual scenes in game applications, and the like.
Illumination model (Illumination Model): an illumination model, also called a shading model, describes the color values presented at position points of a virtual object under the influence of light, viewing angle, and other factors. Illumination models include physically based theoretical models and empirical models; examples include the Phong (specular reflection) model, the Blinn-Phong (modified specular reflection) model, and the Cook-Torrance (microfacet) model. The Blinn-Phong model is a highlight calculation model that can be used for fast, real-time rendering.
Rendering pass (Rendering Pass): one rendering pass represents one rendering operation; when a virtual object is displayed, it is rendered and displayed through multiple rendering passes.
Virtual scene: a virtual scene is the scene that an application displays (or provides) while running on a terminal. The virtual scene may simulate a three-dimensional virtual space, which is an open space; it may simulate a real environment, be half-simulated and half-fictional, or be entirely fictional; and it may be any of a two-dimensional, 2.5-dimensional, or three-dimensional virtual scene. For example, a virtual scene may include rivers, grass, land, buildings, and the like.
Fig. 1 is a flowchart of a virtual object display method according to an exemplary embodiment. Referring to fig. 1, the method is applied to a terminal and includes the following steps:
In step 101, object parameters of a virtual object are acquired, wherein the object parameters at least include normal vectors and illumination vectors of a plurality of position points on the surface of the virtual object, and the illumination vector of the position point is a vector in a light irradiation direction of the position point.
In step 102, according to the normal vectors and the illumination vectors of the plurality of position points, the diffuse reflection parameters and the reflection parameters of the virtual object are obtained, and the diffuse reflection parameters and the reflection parameters are in negative correlation.
In step 103, a target map is determined that matches the diffuse reflection parameters and the reflection parameters.
In step 104, a target map is added to the surface of the virtual object, and the virtual object with the target map added is displayed.
According to the method provided by the embodiments of the present disclosure, both the diffuse reflection parameters and the reflection parameters of the virtual object can be acquired. Because both sets of parameters reflect the brightness of the virtual object, determining the matching target map from the diffuse reflection parameters and the reflection parameters, rather than from the diffuse reflection parameters alone, takes into account the influence of the reflection phenomenon on the display effect and increases the amount of available information. The determined target map therefore reflects the brightness of the virtual object more accurately, so that when the virtual object with the target map added is displayed, it appears more vivid, improving the display effect of the virtual object.
In one possible implementation, the diffuse reflection parameters of the virtual object include diffuse reflection sub-parameters of a plurality of position points, and the reflection parameters of the virtual object include reflection sub-parameters of the plurality of position points;
obtaining the diffuse reflection parameters and reflection parameters of the virtual object according to the normal vectors and illumination vectors of the plurality of position points includes:
for each position point, obtaining the diffuse reflection sub-parameter of the position point according to the normal vector and illumination vector of the position point;
performing inverse processing on the diffuse reflection sub-parameter to obtain a first reflection sub-parameter of the position point;
adjusting the first reflection sub-parameter according to a preset first adjustment parameter to obtain a second reflection sub-parameter of the position point;
combining the diffuse reflection sub-parameters of the plurality of position points to obtain the diffuse reflection parameters;
and combining the second reflection sub-parameters of the plurality of position points to obtain the reflection parameters.
In another possible implementation, obtaining the diffuse reflection sub-parameter of a position point according to its normal vector and illumination vector includes:
taking the dot product of the normal vector and the illumination vector as a first value;
and determining the larger of the first value and 0 as the diffuse reflection sub-parameter of the position point.
In another possible implementation, the diffuse reflection parameters of the virtual object include diffuse reflection sub-parameters of a plurality of position points, and the reflection parameters include reflection sub-parameters of the plurality of position points; determining the target map that matches the diffuse reflection parameters and the reflection parameters includes:
obtaining the luminance sub-parameter of each position point according to the diffuse reflection sub-parameter and reflection sub-parameter of that point;
obtaining the weights of a plurality of alternative maps according to the luminance sub-parameter of each position point and the sequence numbers of the alternative maps, where the sequence numbers are assigned after the alternative maps are sorted in descending order of brightness, the sequence number of an alternative map represents its brightness, and different alternative maps have different sequence numbers;
and determining the target map according to the weights of the alternative maps corresponding to each position point.
In another possible implementation, obtaining the luminance sub-parameter of each position point according to its diffuse reflection sub-parameter and reflection sub-parameter includes:
for each position point, adding the diffuse reflection sub-parameter and the reflection sub-parameter of the position point to obtain a first luminance sub-parameter of the position point;
and multiplying the first luminance sub-parameter by the number of alternative maps to obtain a second luminance sub-parameter of the position point.
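For illustration only (this sketch is not part of the patent text, and the function and argument names are hypothetical), the luminance sub-parameter computation in Python:

```python
def luminance_sub_parameter(diffuse_sub: float, reflection_sub: float,
                            num_maps: int) -> float:
    """Second luminance sub-parameter of one position point.

    The first luminance sub-parameter is the sum of the diffuse reflection
    sub-parameter and the reflection sub-parameter; multiplying it by the
    number of alternative maps scales it onto the range of sequence numbers.
    """
    first = diffuse_sub + reflection_sub
    return first * num_maps
```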
In another possible implementation, obtaining the weights of the alternative maps according to the luminance sub-parameter of each position point and the sequence numbers of the alternative maps includes:
for each position point, determining a matching parameter of each alternative map according to the difference between the luminance sub-parameter of the position point and the sequence number of that alternative map, where the matching parameter characterizes how well the brightness of the alternative map matches the brightness of the position point;
taking the difference between the matching parameters of any two adjacent alternative maps as the weight of the first of those two maps, and taking the matching parameter of the last alternative map as its own weight, where two adjacent alternative maps are neighboring maps among the alternative maps sorted in descending order of brightness.
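A minimal Python sketch of this weighting rule, assuming the matching parameters are listed in the maps' descending-brightness order (names and example values are illustrative):

```python
def map_weights(matching: list[float]) -> list[float]:
    """Weights of alternative maps sorted in descending order of brightness.

    Each map except the last is weighted by the difference between its
    matching parameter and that of the next (adjacent) map; the last map's
    matching parameter is used directly as its own weight.
    """
    weights = [matching[i] - matching[i + 1] for i in range(len(matching) - 1)]
    weights.append(matching[-1])
    return weights

# Example with three maps whose matching parameters are 1.0, 0.6, 0.0:
print(map_weights([1.0, 0.6, 0.0]))  # [0.4, 0.6, 0.0]
```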
In another possible implementation, the method further includes:
fusing alternative maps with adjacent sequence numbers, in order of sequence number, to obtain at least one fusion map, where at least one of the red, green, or blue channels of a fusion map corresponds to one alternative map;
determining the matching parameter of each alternative map according to the difference between the luminance sub-parameter of the position point and the sequence number of that alternative map then includes:
acquiring a sequence number matrix of each fusion map, the sequence number matrix containing the sequence number of each alternative map in that fusion map;
acquiring a luminance sub-parameter matrix containing a plurality of luminance sub-parameters, their number equal to the number of alternative maps contained in the fusion map;
and obtaining the difference between the luminance sub-parameter matrix and the sequence number matrix of each fusion map to determine a matching parameter matrix, the matching parameter matrix containing the matching parameter of each alternative map.
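As a hedged illustration of the matrix form, assuming one fusion map packs three alternative maps into its R, G, and B channels and that the luminance sub-parameter matrix repeats a single point's value once per packed map; these names and assumptions are illustrative, not from the patent:

```python
import numpy as np

def matching_parameter_matrix(luminance_sub: float,
                              seq_matrix: np.ndarray) -> np.ndarray:
    """Matching parameters of the alternative maps packed into one fusion map.

    `seq_matrix` holds the sequence number of each alternative map in the
    fusion map, e.g. np.array([0.0, 1.0, 2.0]) for the R, G, and B channels.
    A single vector subtraction yields all matching parameters at once; the
    reference-value clamp described next is then applied to each entry.
    """
    lum_matrix = np.full_like(seq_matrix, luminance_sub)  # luminance sub-parameter matrix
    return lum_matrix - seq_matrix                        # matching parameter matrix
```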
In another possible implementation, determining the matching parameter of an alternative map according to the difference between the luminance sub-parameter of the position point and the sequence number of the alternative map includes:
in response to the difference being greater than a first reference value, taking the first reference value as the matching parameter of the alternative map; or
in response to the difference being less than a second reference value, taking the second reference value as the matching parameter of the alternative map; or
in response to the difference being neither greater than the first reference value nor less than the second reference value, taking the difference itself as the matching parameter of the alternative map.
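A sketch of this reference-value rule; the values 1 and 0 for the first and second reference values are assumed examples, since the patent does not fix them:

```python
def clamp_matching(difference: float,
                   first_ref: float = 1.0, second_ref: float = 0.0) -> float:
    """Clamp the difference into [second_ref, first_ref]: values above the
    first reference value or below the second are replaced by that bound."""
    return min(max(difference, second_ref), first_ref)
```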
In another possible implementation, determining the target map according to the weight of each alternative map includes:
taking the alternative map with the largest weight as the target map; or
performing weighted fusion of the alternative maps according to the weight of each alternative map and taking the fused map as the target map.
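A minimal sketch of the two selection strategies, assuming each alternative map is held as a numpy array of pixel values (names are illustrative):

```python
import numpy as np

def target_map(maps: list[np.ndarray], weights: list[float],
               fuse: bool = True) -> np.ndarray:
    """Either pick the single alternative map with the largest weight, or
    blend all alternative maps by their weights (weighted fusion)."""
    if not fuse:
        return maps[int(np.argmax(weights))]
    return sum(w * m for w, m in zip(weights, maps))
```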
In another possible implementation, the object parameters further include view angle vectors of the plurality of position points, the view angle vector of a position point being a vector in the direction of the virtual camera relative to that point;
before adding the target map to the surface of the virtual object and displaying the virtual object with the target map added, the method further includes:
acquiring highlight parameters of the virtual object according to the normal vectors, illumination vectors, and view angle vectors of the plurality of position points, where the highlight parameters characterize the brightest position point on the surface of the virtual object;
adding the target map to the surface of the virtual object and displaying the virtual object with the target map added then includes:
adding the target map to the surface of the virtual object, and displaying the virtual object with the target map added according to the highlight parameters.
In another possible implementation, obtaining the highlight parameters of the virtual object according to the normal vectors, illumination vectors, and view angle vectors of the plurality of position points includes:
for each position point, taking the dot product of the sum vector of the view angle vector and illumination vector of the position point with the normal vector of the position point as a second value;
taking the product of the larger of the second value and 0 with the illumination vector as a first highlight sub-parameter;
and adjusting the first highlight sub-parameter according to a preset second adjustment parameter to obtain a second highlight sub-parameter.
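For illustration, a Python sketch that follows the patent's stated steps; note that a conventional Blinn-Phong shader would normalize the half vector and raise the clamped dot product to a shininess power, whereas this keeps to the wording above. The value of the second adjustment parameter is an arbitrary example:

```python
import numpy as np

def highlight_sub_parameter(normal: np.ndarray, light: np.ndarray,
                            view: np.ndarray, k2: float = 0.5) -> np.ndarray:
    """Second highlight sub-parameter of one position point."""
    half = view + light                     # sum vector of view and illumination vectors
    second = float(np.dot(half, normal))    # second numerical value
    first = max(0.0, second) * light        # first highlight sub-parameter (a vector)
    return k2 * first                       # scaled by the second adjustment parameter
```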
In another possible implementation, after determining the target map that matches the diffuse reflection parameters and the reflection parameters, the method further includes:
extending each position point by a preset distance along its normal direction to form an extended region outside the surface of the virtual object, and filling a reference color into the extended region;
adding the target map to the surface of the virtual object and displaying the virtual object with the target map added then includes:
adding the target map to the surface of the virtual object, and displaying the virtual object with the target map added together with the extended region outside its surface.
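A sketch of the extension step over a mesh's position points, assuming unit per-vertex normals; the preset distance is an example value:

```python
import numpy as np

def extended_region_vertices(vertices: np.ndarray, normals: np.ndarray,
                             preset_distance: float = 0.02) -> np.ndarray:
    """Move every position point outward along its normal by the preset
    distance; rendering this extruded shell in the reference color produces
    the extended region (outline) outside the virtual object's surface."""
    return vertices + preset_distance * normals
```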
The virtual object display method provided by the embodiment of the disclosure can be applied to various scenes.
For example, in a game scene.
A virtual scene is created in a game application, and a virtual light source is set in the virtual scene. When a virtual object is displayed in the virtual scene, the influence of the virtual light source's irradiation on the virtual object must be considered. Displaying the virtual object with the method provided by the embodiments of the present disclosure takes into account both the diffuse reflection effect and the reflection effect produced on the surface of the virtual object under the irradiation of the virtual light source, so the displayed virtual object is more realistic and its display effect is improved.
As another example, the method is applied to a virtual reality scene.
A user can capture a picture of a real scene with the terminal's camera and add a virtual object to the picture. When adding the virtual object, the influence of the virtual light source's irradiation on the virtual object must be considered. Displaying the virtual object with the method provided by the embodiments of the present disclosure takes into account the diffuse reflection effect and the reflection effect produced on the surface of the virtual object under the light source's irradiation, improving the display effect of the virtual object.
Fig. 2 is a flowchart of another virtual object display method according to an exemplary embodiment. Referring to fig. 2, the method is applied to a terminal; the terminal is a portable, pocket-sized, hand-held, or other type of terminal such as a mobile phone, a computer, or a tablet computer. The method includes the following steps:
201. The terminal acquires object parameters of the virtual object.
In the embodiment of the present disclosure, to make the display effect of the virtual object more natural, a map is added to the surface of the virtual object. When selecting the map, the brightness of the virtual object's surface under the light source's irradiation is considered, so that a map whose brightness matches that of the surface is selected and added to the surface. This simulates the effect of the surface being irradiated by the light source and improves the display effect of the virtual object.
A virtual object is an object arranged in a virtual scene; optionally, it is a dynamic object such as a person, an animal, or a plant, or a static object such as a table or a chair. The object parameters of the virtual object are the parameters required to display it in the virtual scene. From the object parameters, various parameters produced by the virtual object under the irradiation of the virtual light source can be determined; these parameters describe the conditions that the display effect of the virtual object under the irradiation of the virtual light source should satisfy. Corresponding maps are then selected according to these parameters, so that the display effect achieved when the selected maps are added to the surface of the virtual object satisfies those conditions.
In one possible implementation, since the surface of the virtual object includes a plurality of position points, each with its own object parameters, the object parameters of the virtual object include the normal vectors and illumination vectors at the plurality of position points. The normal vector at a position point is a vector in the normal direction of that point, i.e., a vector perpendicular to the surface of the virtual object at that point; the illumination vector at a position point is a vector in the direction in which light irradiates that point.
For example, referring to the schematic diagram of the position point shown in fig. 3, the position point a is a position point on the surface of the sphere, the vector N is a normal vector at the position point a, and the vector L is an illumination vector at the position point a.
202. The terminal acquires diffuse reflection parameters and reflection parameters of the virtual object according to the normal vectors and illumination vectors of the plurality of position points.
In the embodiment of the present disclosure, because both diffuse reflection and reflection occur on an object's surface under the irradiation of a light source, the diffuse reflection parameters and reflection parameters of the virtual object's surface are acquired to make its display effect more realistic. The diffuse reflection parameters describe the diffuse reflection conditions that the virtual object should satisfy under the irradiation of the virtual light source, and the reflection parameters describe the reflection conditions it should satisfy, so that the virtual object is displayed according to both.
In the virtual scene, the virtual object is irradiated not only by the virtual light source but also by light reflected from other virtual objects. That reflected light mainly affects the darker parts of the virtual object's surface: when a darker part is irradiated by the reflected light of other virtual objects, its brightness is raised. In other words, the smaller the diffuse reflection parameter produced under the virtual light source, the darker the surface, and the larger the reflection parameter produced by the reflected light of other virtual objects, i.e., the greater the influence of reflected light on the surface brightness. The diffuse reflection parameters and the reflection parameters of the virtual object are therefore negatively correlated.
That is, both sets of parameters are affected by the illumination intensity: the stronger the illumination, the larger the diffuse reflection parameter, the smaller the influence of reflected light, and the smaller the reflection parameter; the weaker the illumination, the darker the virtual object's surface, the smaller the diffuse reflection parameter, the larger the influence of reflected light, and the larger the reflection parameter.
The diffuse reflection parameters of the virtual object include the diffuse reflection sub-parameters of a plurality of position points on its surface, and the reflection parameters include the reflection sub-parameters of those position points. Because the surface includes many position points, acquiring the diffuse reflection parameters and reflection parameters of the virtual object requires acquiring the diffuse reflection sub-parameter and reflection sub-parameter of each position point, then combining the diffuse reflection sub-parameters of the plurality of position points into the diffuse reflection parameters and the reflection sub-parameters into the reflection parameters. For example, if the diffuse reflection parameters of the virtual object form a matrix, the diffuse reflection sub-parameter of each position point is an element of that matrix.
The embodiment of the present disclosure is illustrated by taking the diffuse reflection sub-parameter and reflection sub-parameter of an arbitrary position point as an example.
In one possible implementation, the diffuse reflection phenomenon is the phenomenon in which light strikes a rough object surface whose position points have inconsistent normal directions, so the light is reflected in different directions. The terminal therefore obtains the diffuse reflection sub-parameter of a position point from the normal vector and the illumination vector of that point.
Optionally, the terminal uses values in [0, 1] to represent the brightness of a position point on the virtual object's surface, with 1 the brightest and 0 the darkest. Because the dot product of two vectors can be less than 0, the terminal takes the dot product of the normal vector and the illumination vector as a first value and determines the larger of the first value and 0 as the diffuse reflection sub-parameter of the position point. That is, if the dot product is less than 0, 0 is taken as the diffuse reflection sub-parameter; otherwise the first value is taken as the diffuse reflection sub-parameter.
For example, the diffuse reflection sub-parameter of a position point is obtained using the following formula:
D=max(0,dot(N,L));
where D represents the diffuse reflection sub-parameter of the position point, N its normal vector, L its illumination vector, and dot(N, L) the dot product of N and L.
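As a sketch only (not part of the patent text), the same formula in Python, assuming unit-length N and L:

```python
import numpy as np

def diffuse_sub_parameter(normal: np.ndarray, light: np.ndarray) -> float:
    """D = max(0, dot(N, L)): a point facing away from the light (negative
    dot product) is clamped to 0, i.e. fully dark."""
    return max(0.0, float(np.dot(normal, light)))

# Example: surface normal straight up, light arriving at 45 degrees.
n = np.array([0.0, 1.0, 0.0])
l = np.array([0.0, np.sqrt(0.5), np.sqrt(0.5)])
print(diffuse_sub_parameter(n, l))  # ~0.707
```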
Since the brightness of the surface of the virtual object is affected not only by the virtual light source but also by the reflected light of other virtual objects, the terminal also needs to acquire the reflection sub-parameters of each position point.
In one possible implementation, the terminal performs inverse processing on the diffuse reflection sub-parameter to obtain a first reflection sub-parameter of the position point, and adjusts the first reflection sub-parameter according to a preset first adjustment parameter to obtain a second reflection sub-parameter, which represents the brightness of the position point when it is irradiated by the reflected light of other virtual objects.
The diffuse reflection sub-parameter is inverted to determine the influence of the reflected light of other virtual objects on the brightness of the position point. Optionally, with 1 the brightest and 0 the darkest, the inversion subtracts the diffuse reflection sub-parameter from 1. For example, if the diffuse reflection sub-parameter of a position point is 0, the point is at its darkest; inverting 0 gives a first reflection sub-parameter of 1, indicating that the brightness of the point is raised substantially when it is irradiated by the reflected light of other virtual objects.
If the first reflection sub-parameter obtained by the inversion were used directly as the reflection sub-parameter of the position point, the sum of the diffuse reflection sub-parameter and the reflection sub-parameter would be a constant 1 at every position point, and the reflected light would produce no variation in brightness. The first reflection sub-parameter is therefore adjusted by the first adjustment parameter to obtain the second reflection sub-parameter, so that the sum of the diffuse reflection sub-parameter and the adjusted second reflection sub-parameter accurately reflects the brightness of the position point. The first adjustment parameter indicates the degree of influence of the reflected light on the virtual object and is set according to the intensity of the reflected light of other virtual objects: if that reflected light is strong, i.e., its influence on the virtual object is large, the first adjustment parameter is large; if it is weak, the first adjustment parameter is small. The first adjustment parameter is greater than 0 and less than 1.
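A minimal sketch of the inversion and adjustment just described; the value 0.3 for the first adjustment parameter is an arbitrary example within the required range (0, 1):

```python
def reflection_sub_parameter(diffuse_sub: float,
                             first_adjustment: float = 0.3) -> float:
    """Second reflection sub-parameter of one position point.

    Inversion: 1 - D, so fully dark points (D = 0) receive the strongest
    contribution from light reflected off other virtual objects; the first
    adjustment parameter then scales that contribution.
    """
    first_reflection = 1.0 - diffuse_sub        # inverse processing
    return first_adjustment * first_reflection  # adjusted second sub-parameter
```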
For example, referring to the virtual object shown in fig. 4, with the virtual light source located at the upper right, the virtual object in the left diagram of fig. 4 is illuminated by the virtual light source, so its right side is brighter and its lower left corner is darker. The illumination from the reflected light of other virtual objects, however, produces brightness opposite to that produced by the virtual light source: in the virtual object in the right diagram of fig. 4, the right side is darker and the lower left corner is brighter.
203. And the terminal determines a target map matched with the diffuse reflection parameter and the reflection parameter.
In the embodiment of the disclosure, since the terminal needs to add a map to the surface of the virtual object when displaying it, the virtual object seen by the user is the virtual object after the map is added. The selection of the map has a great influence on the display effect: for example, if a certain position point of the virtual object is not irradiated by light, that position point is darker and the required map should be a darker one; if a brighter map is used, the display effect of the position point is inconsistent with the actual situation, and the display effect is poor. Because the diffuse reflection parameter and the reflection parameter represent the brightness of the surface of the virtual object, the matched target map is determined according to both parameters, so that adding the target map to the surface of the virtual object produces a good display effect.
In one possible implementation manner, before determining the target map matched with the diffuse reflection parameter and the reflection parameter, the terminal needs to acquire a plurality of alternative maps. Since different parts of the virtual object have different brightness under the irradiation of the virtual light source, the brightness of the plurality of alternative maps also differs, so that the alternative maps can meet the different brightness requirements of the displayed virtual object.
After the plurality of alternative maps are ordered from brightest to darkest, the serial number of each alternative map is determined. The serial number of an alternative map represents its brightness, and the serial numbers of different alternative maps are different, that is, the brightness of different alternative maps is different: the smaller the serial number, the brighter the alternative map; the larger the serial number, the darker the alternative map. The target map is any one of the multiple alternative maps or a map obtained by fusing multiple alternative maps.
The purpose of the embodiment of the disclosure is to obtain, according to the brightness of each position point on the surface of the virtual object, a map capable of reflecting that brightness, so that the display effect of the virtual object after the map is added is consistent with the actual display effect.
In one possible implementation, since a sketch image is formed by lines, if the sketch effect is to be displayed in the image of the virtual object, the acquired alternative maps are line drawings: the sparser the distribution of lines in an alternative map, the brighter it is; the denser the distribution of lines, the darker it is. For example, referring to the alternative maps shown in fig. 5, the brightness of the 6 alternative maps decreases sequentially from left to right, with the first alternative map numbered 0, the second numbered 1, and the last numbered 5.
In addition, the greater the number of alternative maps, the finer the division of brightness, the more alternative maps the terminal can select from, and the better the display effect.
In another possible implementation, if the virtual object is to display an effect other than the sketch effect, a map of the corresponding type is acquired according to that effect. For example, to simulate a lattice effect, the alternative maps are composed of a plurality of points.
In one possible implementation, the determining, by the terminal, a target map that matches the diffuse reflection parameter and the reflection parameter includes: the terminal obtains the brightness subparameter of each position point according to the diffuse reflection subparameter and the reflection subparameter of each position point; respectively acquiring weights of a plurality of alternative maps according to the brightness subparameter of each position point and the serial numbers of the plurality of alternative maps; and determining the target map according to the weights of the multiple alternative maps corresponding to each position point.
When determining the target map of the virtual object, since the virtual object includes a plurality of position points and each position point receives different illumination, that is, each position point has different brightness, the target map corresponding to each position point needs to be determined separately in order to make the target map of the virtual object conform to the actual illumination; the target maps corresponding to the position points are then combined to obtain the target map of the virtual object.
The diffuse reflection subparameter and the reflection subparameter both reflect the brightness of the position point. If the brightness subparameter were obtained only from the diffuse reflection subparameter, the reflection effect could not appear in the displayed virtual object, so the displayed virtual object would be inconsistent with the actual situation and the display effect would be poor. Obtaining the brightness subparameter from both the diffuse reflection subparameter and the reflection subparameter therefore allows the brightness of the position point to be reflected more accurately.
In one possible implementation manner, the terminal obtains the luminance subparameter of each location point according to the diffuse reflection subparameter and the reflection subparameter of each location point, respectively, including: for each position point, the terminal adds the diffuse reflection subparameter and the reflection subparameter of the position point to obtain a first brightness subparameter of the position point; multiplying the first brightness subparameter by the number of the alternative maps to obtain a second brightness subparameter of the position point, and taking the second brightness subparameter as the brightness subparameter of the position point.
In one possible implementation manner, the terminal obtains the weights of the multiple alternative maps according to the brightness subparameter of each position point and the serial numbers of the multiple alternative maps, respectively, including: for each position point, the terminal determines the matching parameter of each alternative map according to the difference between the brightness subparameter of the position point and the serial number of that alternative map; the difference between the matching parameters of any two adjacent alternative maps is taken as the weight of the first of those two maps, and the matching parameter of the last alternative map is taken as the weight of the last alternative map. The matching parameter represents the degree of matching between the brightness of the alternative map and the brightness of the position point, and any two adjacent alternative maps refers to two adjacent maps among the plurality of alternative maps arranged in order of brightness.
The smaller the serial number of the alternative map, the larger the difference between the brightness subparameter and the serial number; the larger the serial number, the smaller the difference. Since the value range of the matching parameter is fixed, in response to the difference being greater than a first reference value, the first reference value is taken as the matching parameter of the alternative map; in response to the difference being smaller than a second reference value, the second reference value is taken as the matching parameter; and in response to the difference being neither greater than the first reference value nor smaller than the second reference value, the difference itself is taken as the matching parameter. The first reference value is greater than the second reference value.
For example, there are 6 alternative maps whose serial numbers from bright to dark are 0, 1, 2, 3, 4 and 5. The sum of the diffuse reflection subparameter and the reflection subparameter is 0.75, so the brightness subparameter is 4.5. The differences between the brightness subparameter and the serial numbers of the six alternative maps are 4.5, 3.5, 2.5, 1.5, 0.5 and -0.5 in turn. With a first reference value of 1 and a second reference value of 0, the matching parameters of the six alternative maps are 1, 1, 1, 1, 0.5 and 0 in turn, so their weights are 0, 0, 0, 0.5, 0.5 and 0 in turn.
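A sketch of this weight computation, under the assumption that the matching parameters are clamped to [0, 1] as in the example above (function and variable names are illustrative):
import numpy as np

def map_weights(intensity, m):
    # intensity is the brightness subparameter, already scaled by m.
    serials = np.arange(m)                          # 0, 1, ..., m-1
    match = np.clip(intensity - serials, 0.0, 1.0)  # matching parameters
    weights = np.empty(m)
    weights[:-1] = match[:-1] - match[1:]  # weight of the first of each pair
    weights[-1] = match[-1]                # last map keeps its own value
    return match, weights

match, weights = map_weights(0.75 * 6, 6)  # D + R = 0.75, six maps
print(match)    # [1.  1.  1.  1.  0.5 0. ]
print(weights)  # [0.  0.  0.  0.5 0.5 0. ]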
In another possible implementation manner, if each alternative map is stored separately, each occupies one storage space, and in total a large storage space is required; the multiple alternative maps are therefore fused to reduce the number of maps to be stored and save storage space. To this end, the terminal fuses alternative maps with adjacent serial numbers according to the serial numbers of the plurality of alternative maps to obtain at least one fusion map. At least one of the red, green and blue channels of a fusion map corresponds to one alternative map, and alternative maps with adjacent serial numbers refers to two or three alternative maps whose serial numbers are adjacent.
For example, referring to fig. 5, there are six alternative maps with serial numbers 0, 1, 2, 3, 4 and 5, and every three alternative maps with adjacent serial numbers are fused. Referring to fig. 6, the alternative maps with serial numbers 0, 1 and 2 are fused to obtain the fusion map on the left of fig. 6, whose red channel corresponds to the alternative map with serial number 0, green channel to serial number 1, and blue channel to serial number 2; the alternative maps with serial numbers 3, 4 and 5 are fused to obtain the fusion map on the right of fig. 6, whose red channel corresponds to serial number 3, green channel to serial number 4, and blue channel to serial number 5.
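A minimal sketch of such a fusion, assuming the alternative maps are single-channel arrays with values in [0, 1] (the shapes and values below are placeholders):
import numpy as np

def fuse_three_maps(map_r, map_g, map_b):
    # Pack three grayscale alternative maps into the R, G and B
    # channels of one fusion map (H x W x 3).
    return np.stack([map_r, map_g, map_b], axis=-1)

m0 = np.full((2, 2), 0.9)  # brightest map, serial number 0
m1 = np.full((2, 2), 0.5)  # serial number 1
m2 = np.full((2, 2), 0.1)  # darkest of the three, serial number 2
print(fuse_three_maps(m0, m1, m2).shape)  # (2, 2, 3)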
Then, the terminal determines a matching parameter of each alternative map according to the difference between the luminance subparameter of the position point and the serial number of each alternative map, including: the terminal obtains a sequence number matrix of each fusion map; acquiring a brightness sub-parameter matrix; and obtaining the difference between the brightness sub-parameter matrix and the sequence number matrix of each fusion map, and determining a matching parameter matrix. The sequence number matrix of the fusion map comprises a sequence number of each alternative map in the fusion map.
For example, if a fusion map includes three alternative maps whose serial numbers are 0, 1 and 2 in turn, the serial number matrix is (0, 1, 2). The brightness subparameter matrix comprises a plurality of brightness subparameters, the number of which equals the number of alternative maps contained in the fusion map; for example, if the brightness subparameter is 4.5 and the fusion map includes three alternative maps, the brightness subparameter matrix is (4.5, 4.5, 4.5). The matching parameter matrix includes the matching parameter of each alternative map; for example, the difference between the brightness subparameter matrix (4.5, 4.5, 4.5) and the serial number matrix (0, 1, 2) is (4.5, 3.5, 2.5), and the matching parameter matrix is (1, 1, 1).
After obtaining the matching parameter matrix corresponding to each fusion map, the terminal uses the difference value between the matching parameters of any two adjacent alternative maps as the weight of the first alternative map in the two alternative maps, and uses the matching parameter of the last alternative map as the weight of the last alternative map.
For example, the weight of each alternative map is obtained using the following formula:
weights=saturate((intensity*m)-(a1,a2,a3));
Wherein weights denotes the matching parameter matrix of the fusion map, intensity denotes the first brightness subparameter of the position point (the sum of the diffuse reflection subparameter and the reflection subparameter), m denotes the number of alternative maps, (a1, a2, a3) denotes the serial number matrix of the fusion map, and saturate(·) limits each obtained matching parameter to between 0 and 1, i.e. if intensity*m-a1 is larger than 1, the result is taken as 1, and if intensity*m-a1 is smaller than 0, the result is taken as 0.
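A sketch of this vectorized form, reproducing the example of fig. 6 under the same assumptions as above (here intensity is D + R, i.e. 0.75, before scaling by m):
import numpy as np

def fusion_map_matching(intensity, m, serials):
    # saturate((intensity * m) - (a1, a2, a3)) for one fusion map.
    return np.clip(intensity * m - np.asarray(serials), 0.0, 1.0)

print(fusion_map_matching(0.75, 6, [0, 1, 2]))  # [1. 1. 1.]
print(fusion_map_matching(0.75, 6, [3, 4, 5]))  # [1.  0.5 0. ]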
In the process of determining the matching parameter matrix by adopting the brightness sub-parameter matrix and the sequence number matrix, the matching parameters of two or three alternative maps can be obtained by one calculation, and compared with the matching parameters of each alternative map respectively calculated, the calculation amount is reduced, and the weight acquisition speed is improved.
In addition, when fusing a plurality of alternative maps, if the number of the alternative maps is a multiple of 3, fusing the adjacent three alternative maps into one fusion map; if the number of the alternative maps is not a multiple of 3 and one alternative map is left, directly taking the remaining one alternative map as a fusion map; if the number of alternative maps is not a multiple of 3 and two alternative maps remain, the remaining two alternative maps are fused.
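A short sketch of this grouping rule (the list contents are placeholders standing in for the maps themselves):
def group_for_fusion(maps):
    # Split the brightness-ordered maps into groups of three; a trailing
    # group of one or two maps is kept as its own fusion map.
    return [maps[i:i + 3] for i in range(0, len(maps), 3)]

print(group_for_fusion(list(range(6))))  # [[0, 1, 2], [3, 4, 5]]
print(group_for_fusion(list(range(7))))  # [[0, 1, 2], [3, 4, 5], [6]]
print(group_for_fusion(list(range(8))))  # [[0, 1, 2], [3, 4, 5], [6, 7]]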
In one possible implementation manner, the terminal determining the target map according to the weight corresponding to each alternative map includes: the terminal takes the alternative map corresponding to the maximum weight as the target map; or the terminal performs weighted fusion on the multiple alternative maps according to the weight of each alternative map and takes the fused map as the target map.
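Both options could be sketched as follows, assuming each alternative map is an H x W array and the weights come from the computation above (the blend flag is an illustrative convenience, not part of the claimed method):
import numpy as np

def target_map(maps, weights, blend=True):
    if not blend:
        # Option 1: pick the single map with the largest weight.
        return maps[int(np.argmax(weights))]
    # Option 2: weighted fusion of all alternative maps.
    stacked = np.stack(maps)                 # m x H x W
    w = np.asarray(weights).reshape(-1, 1, 1)
    return (w * stacked).sum(axis=0)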
204. And the terminal acquires highlight parameters of the virtual object according to the normal vectors, the illumination vectors and the view angle vectors of the plurality of position points.
Optionally, the object parameters further comprise view angle vectors, the view angle vector of a position point being the vector in the direction of the virtual camera relative to that position point. A virtual camera is set in the virtual scene, and the picture obtained by the virtual camera shooting the scene where the virtual object is located is the picture, including the virtual object, displayed by the terminal. After the virtual light source irradiates a position point on the surface of the virtual object, if the position point reflects the light directly to the position of the virtual camera, the virtual camera directly captures that point, and the position point is then the brightest position point.
For example, referring to the schematic view of the position point shown in fig. 3, the vector H is the view angle vector at position point A.
The embodiment of the present disclosure is described by taking the acquisition of the highlight subparameter of one position point as an example. The highlight parameter characterizes the brightest position points on the surface of the virtual object, and is therefore obtained according to the normal vector, the illumination vector and the view angle vector of each position point.
In one possible implementation, the terminal takes the dot product of the sum vector of the view angle vector and the illumination vector of the position point with the normal vector of the position point as a second value; takes the product of the larger of the second value and 0 with the illumination vector as a first highlight subparameter; and adjusts the first highlight subparameter according to a preset second adjustment parameter to obtain a second highlight subparameter, which is taken as the highlight subparameter of the position point. The sum vector is the half-angle vector obtained by adding the view angle vector and the illumination vector.
When the highlight subparameter is greater than a certain reference value, the position point corresponding to it is one of the brightest position points and is displayed brightest when the virtual object is displayed later; when the highlight subparameter is smaller than that reference value, the position point is not among the brightest position points, and the virtual object is displayed according to the target map of the position point. For example, referring to the virtual object shown in fig. 7, the upper left corner of the virtual object is a white area, the brightest area in the virtual object, which includes the plurality of position points indicated by the highlight parameter.
The second adjustment parameter is any value greater than 0 and less than 1, for example 0.1 or 0.2. The smaller the second adjustment parameter, the smaller the obtained highlight subparameter and, correspondingly, the smaller the highlight range displayed in the virtual object; the larger the second adjustment parameter, the larger the obtained highlight subparameter and the larger the highlight range displayed in the virtual object.
For example, the highlight subparameter of a position point is obtained with reference to the following formula:
G=I*max(0,dot(N,H));
Wherein G represents the highlight subparameter, I represents the illumination vector, N represents the normal vector, H represents the sum vector of the view angle vector and the illumination vector, and dot (N, H) represents the dot product of N and H.
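An illustrative sketch of this computation, with the second adjustment parameter assumed to be 0.2 and, as in the formula above, no normalization applied to the sum vector:
import numpy as np

def highlight_subparameter(normal, light, view, k2=0.2):
    h = view + light                        # sum vector of view and illumination
    second_value = float(np.dot(normal, h))
    first = light * max(0.0, second_value)  # first highlight subparameter
    return k2 * first                       # adjusted second subparameter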
205. And adding a target map on the surface of the virtual object by the terminal, and displaying the virtual object with the target map added according to the highlight parameters.
In the embodiment of the disclosure, after determining the target map of each position point, the terminal adds a corresponding target map to each position point on the surface of the virtual object, and displays the virtual object with the target map added, where the target map reflects the brightness of each position point on the surface of the virtual object, and the virtual object seen by the user is more in line with the real object. Optionally, the target maps of any two location points are the same or different.
After the terminal acquires the highlight parameters of the virtual object, a highlight effect is added in the displayed virtual object, so that the position indicated by the highlight parameters of the virtual object is brightest, and a user can see the highlight part in the virtual object.
Taking the sketch image as an example, referring to the hand-drawn sketch image shown in fig. 8, the objects drawn in a sketch image have both reflected light and highlights, and the embodiment of the disclosure aims to make the virtual object simulate the display effect of a sketch image. In the virtual object shown in the left diagram of fig. 9, the map added to the surface of the virtual object is determined only according to the diffuse reflection parameter; although the virtual object simulates the display effect of a sketch image to a certain extent, it can be clearly seen that there is neither reflected light nor highlight. In the virtual object shown in the right diagram of fig. 9, the map added to the surface is determined according to both the diffuse reflection parameter and the reflection parameter; compared with the virtual object in the left diagram, the lower left corner of the virtual object in the right diagram clearly shows reflected light, its upper right corner has an obvious highlight, and the virtual object on the right thus simulates the display effect of a sketch image.
In addition, in one possible implementation manner, the virtual object is outlined: each position point is extended along its corresponding normal direction by a preset distance to form an extension area outside the surface of the virtual object, this extension area being the edge displayed outside the surface; a reference color is filled into the extension area, and the filled reference color is the color shown outside the surface when the virtual object is displayed. When the terminal displays the virtual object, it adds the target map to the surface of the virtual object and displays both the virtual object with the target map added and the extension area outside its surface, thereby realizing the display effect of the virtual object extending outwards.
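A minimal sketch of the extension step, assuming the surface is given as vertex positions with per-vertex normals and an assumed preset distance of 0.02:
import numpy as np

def outline_vertices(vertices, normals, distance=0.02):
    # Move every position point along its normal by the preset distance;
    # the region between the two surfaces is the extension area to be
    # filled with the reference color.
    return vertices + distance * normals

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
norms = np.array([[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
print(outline_vertices(verts, norms))  # each point shifted 0.02 along +y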
In one possible implementation manner, after acquiring the highlight parameters and the extension areas of the virtual object, the terminal adds a target map to the surface of the virtual object, displays the extension areas outside the surface of the virtual object, and displays the virtual object and the extension areas outside the surface of the virtual object after adding the target map according to the highlight parameters.
In addition, for a virtual object presenting the sketch effect, in order to make a three-dimensional virtual object in the virtual scene display a two-dimensional sketch effect, maps of different brightness are added to the surface of the three-dimensional virtual object, so that the three-dimensional virtual object with the maps added displays the two-dimensional sketch effect. The user can then rotate the three-dimensional virtual object and view it from multiple angles, avoiding the need to draw the two-dimensional virtual object from multiple angles.
It should be noted that the embodiment of the present disclosure is described only by taking as an example the acquisition of the target map corresponding to each position point according to the object parameters of that position point, with the target maps of the plurality of position points then added to the virtual object surface respectively.
According to the method provided by the embodiment of the disclosure, the diffuse reflection parameter and the reflection parameter of the virtual object are acquired, and both parameters reflect the brightness of the virtual object. Compared with determining the matched map only according to the diffuse reflection parameter, determining the matched target map according to both the diffuse reflection parameter and the reflection parameter takes into account the influence of the reflection phenomenon on the display effect of the virtual object and increases the amount of information, so the determined target map reflects the brightness of the virtual object more accurately. Therefore, when the virtual object with the target map added is displayed, the virtual object is more vivid, and the display effect of the virtual object is improved.
In addition, in the embodiment of the disclosure, the view angle vectors of a plurality of position points on the surface of the virtual object are also acquired, and the highlight parameter of the virtual object is then obtained according to the normal vectors, illumination vectors and view angle vectors of the plurality of position points, so that a highlight effect consistent with the illumination can be displayed in the virtual object.
In addition, in the embodiment of the disclosure, a first adjustment parameter is set to adjust the reflection subparameter of each position point, and a second adjustment parameter is set to adjust the highlight subparameter of each position point. Because the first and second adjustment parameters are set by a technician, their sizes can be tuned, so the reflection subparameter and the highlight subparameter can be adjusted flexibly. This improves flexibility and makes it more convenient to adjust the brightness of the virtual object and thus the display effect.
In addition, in the embodiment of the disclosure, alternative maps with adjacent serial numbers are fused to obtain fusion maps, so each alternative map does not need to be stored separately; the fusion maps are stored directly, which reduces the number of maps to be stored, saves storage space, and improves the operating efficiency of the device.
In addition, in the process of determining the matching parameter matrix by adopting the brightness sub-parameter matrix and the sequence number matrix, the matching parameters of two or three alternative maps can be obtained by one calculation, and compared with the matching parameters of each alternative map respectively calculated, the calculation amount is reduced, and the weight acquisition speed is improved.
Fig. 10 is a block diagram illustrating a virtual object display apparatus according to an exemplary embodiment. Referring to fig. 10, the apparatus includes:
A first parameter acquiring unit 1001 configured to perform acquisition of object parameters of a virtual object, the object parameters including at least normal vectors and illumination vectors of a plurality of position points on a surface of the virtual object, the illumination vectors of the position points being vectors in a light irradiation direction of the position points;
A second parameter obtaining unit 1002 configured to obtain a diffuse reflection parameter and a reflection parameter of the virtual object according to normal vectors and illumination vectors of the plurality of position points, where the diffuse reflection parameter and the reflection parameter have a negative correlation;
A target map determining unit 1003 configured to perform determination of a target map matching the diffuse reflection parameter and the reflection parameter;
the virtual object display unit 1004 is configured to perform adding a target map to the virtual object surface, and display the virtual object after adding the target map.
According to the device provided by the embodiment of the disclosure, the diffuse reflection parameters and the reflection parameters of the virtual object can be obtained, and the diffuse reflection parameters and the reflection parameters can reflect the brightness of the virtual object, so that compared with the situation that the matched mapping is determined only according to the diffuse reflection parameters, when the matched target mapping is determined according to the diffuse reflection parameters and the reflection parameters, the influence of the reflection phenomenon on the display effect of the virtual object is considered, the information quantity is increased, the determined target mapping can reflect the brightness of the virtual object more accurately, and therefore, when the virtual object with the target mapping added is displayed, the virtual object is more vivid, and the display effect of the virtual object is improved.
In one possible implementation, the diffuse reflection parameters of the virtual object include diffuse reflection sub-parameters of a plurality of location points, and the reflection parameters of the virtual object include reflection sub-parameters of the plurality of location points; a second parameter acquisition unit 1002 configured to perform:
For each position point, obtaining diffuse reflection subparameters of the position points according to the normal vector and the illumination vector of the position points;
performing inverse processing on the diffuse reflection subparameter to obtain a first reflection subparameter of the position point;
According to a preset first adjustment parameter, adjusting the first reflection sub-parameter to obtain a second reflection sub-parameter of the position point;
Combining the diffuse reflection sub-parameters of the plurality of position points to obtain diffuse reflection parameters;
and combining the second reflection sub-parameters of the plurality of position points to obtain the reflection parameters.
In another possible implementation, the second parameter obtaining unit 1002 is configured to perform:
taking the dot product of the normal vector and the illumination vector as a first numerical value;
The larger of the first value and 0 is determined as the diffuse reflection subparameter of the location point.
In another possible implementation, the diffuse reflection parameter of the virtual object includes diffuse reflection sub-parameters of a plurality of location points, and the reflection parameter of the virtual object includes reflection sub-parameters of the plurality of location points; referring to fig. 11, the target map determining unit 1003 includes:
A luminance sub-parameter obtaining sub-unit 1013 configured to perform obtaining luminance sub-parameters of each position point respectively according to the diffuse reflection sub-parameter and the reflection sub-parameter of each position point;
A weight obtaining subunit 1023 configured to obtain weights of the plurality of alternative maps according to the luminance subparameter of each position point and the sequence numbers of the plurality of alternative maps, respectively, where the sequence numbers of the alternative maps are determined after the plurality of alternative maps are sequentially arranged from large to small according to the luminance, and the sequence numbers of the alternative maps represent the luminance of the alternative maps, and the sequence numbers of the different alternative maps are different;
the map determining subunit 1033 is configured to perform the determination of the target map according to the weights of the plurality of candidate maps corresponding to each position point.
In another possible implementation, referring to fig. 11, the luminance sub-parameter obtaining subunit 1013 is configured to perform:
For each position point, adding the diffuse reflection subparameter and the reflection subparameter of the position point to obtain a first brightness subparameter of the position point;
And multiplying the first brightness subparameter by the number of the alternative maps to obtain a second brightness subparameter of the position point.
In another possible implementation, referring to fig. 11, the weight acquisition subunit 1023 is configured to perform:
for each position point, determining a matching parameter of each alternative mapping according to the difference value between the brightness subparameter of the position point and the serial number of each alternative mapping, wherein the matching parameter characterizes the matching degree between the brightness of the alternative mapping and the brightness of the position point;
Taking the difference value between the matching parameters of any two adjacent alternative maps as the weight of the first alternative map in any two alternative maps, and taking the matching parameter of the last alternative map as the weight of the last alternative map, wherein any two adjacent alternative maps refer to two adjacent alternative maps in a plurality of alternative maps which are sequentially arranged from big to small according to brightness.
In another possible implementation, referring to fig. 11, the apparatus further includes:
A map fusion unit 1005 configured to perform fusion of alternative maps with adjacent sequence numbers according to the sequence numbers of the plurality of alternative maps, to obtain at least one fusion map, where at least one channel of the red channel, the green channel or the blue channel of the fusion map corresponds to one alternative map;
the weight acquisition subunit 1023 is configured to perform:
Acquiring a sequence number matrix of each fusion map, wherein the sequence number matrix of the fusion map comprises a sequence number of each alternative map in the fusion map;
Acquiring a brightness sub-parameter matrix, wherein the brightness sub-parameter matrix comprises a plurality of brightness sub-parameters, and the number of the brightness sub-parameters is equal to the number of the alternative maps contained in the fusion map;
And obtaining a difference value between the brightness sub-parameter matrix and the sequence number matrix of each fusion map, and determining a matching parameter matrix, wherein the matching parameter matrix comprises the matching parameters of each alternative map.
In another possible implementation, the weight acquisition subunit 1023 is configured to perform:
responding to the difference value being larger than the first reference value, and taking the first reference value as a matching parameter of the alternative mapping; or alternatively
Responding to the difference value being smaller than the second reference value, and taking the second reference value as a matching parameter of the alternative mapping; or alternatively
And responding to the difference value not larger than the first reference value and not smaller than the second reference value, and taking the difference value as a matching parameter of the alternative mapping.
In another possible implementation, the map determining subunit 1033 is configured to perform:
Taking the alternative map corresponding to the maximum weight as the target map; or alternatively
And carrying out weighted fusion on the multiple alternative maps according to the weight of each alternative map, and taking the fused map as a target map.
In another possible implementation, the object parameters further include view angle vectors of a plurality of location points, the view angle vector of a location point being the vector in the direction of the virtual camera relative to the location point; referring to fig. 11, the apparatus further includes:
A third parameter obtaining unit 1006 configured to obtain a highlight parameter of the virtual object according to the normal vector, the illumination vector, and the viewing angle vector of the plurality of position points, the highlight parameter representing the brightest position point on the surface of the virtual object;
The virtual object display unit 1004 is configured to perform adding a target map to the surface of the virtual object, and display the virtual object after adding the target map according to the highlight parameter.
In another possible implementation manner, the third parameter obtaining unit 1006 is configured to perform:
For each position point, taking the dot product of the sum vector of the view angle vector and the illumination vector of the position point with the normal vector of the position point as a second value;
taking the product of the larger of the second value and 0 with the illumination vector as a first highlight subparameter;
and adjusting the first highlight subparameter according to a preset second adjustment parameter to obtain a second highlight subparameter.
In another possible implementation, referring to fig. 11, the apparatus further includes:
An extension area acquisition unit 1007 configured to extend each position point along its corresponding normal direction by a preset distance, forming an extension area outside the surface of the virtual object, and to fill a reference color into the extension area;
The virtual object display unit 1004 is configured to perform adding a target map to the surface of the virtual object, and display the virtual object to which the target map is added and an extension area outside the surface of the virtual object.
The specific manner in which the individual units perform the operations in relation to the apparatus of the above embodiments has been described in detail in relation to the embodiments of the method and will not be described in detail here.
Fig. 12 shows a block diagram of a terminal 1200 according to an exemplary embodiment of the present application. The terminal 1200 may be a portable mobile terminal such as a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1200 may also be referred to as a user device, portable terminal, laptop terminal, desktop terminal, etc.
In general, the terminal 1200 includes: a processor 1201 and a memory 1202.
Processor 1201 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 1201 may be implemented in at least one hardware form of DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), or PLA (Programmable Logic Array). Processor 1201 may also include a main processor and a coprocessor. The main processor is a processor for processing data in an awake state, also referred to as a CPU (Central Processing Unit); a coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 1201 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 1201 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1202 may include one or more computer-readable storage media, which may be non-transitory. Memory 1202 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1202 is used to store at least one instruction for execution by processor 1201 to implement the virtual object display method provided by the method embodiments of the present application.
In some embodiments, the terminal 1200 may further optionally include: a peripheral interface 1203, and at least one peripheral. The processor 1201, the memory 1202, and the peripheral interface 1203 may be connected by a bus or signal lines. The individual peripheral devices may be connected to the peripheral device interface 1203 via buses, signal lines, or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1204, a display 1205, a camera assembly 1206, audio circuitry 1207, a positioning assembly 1208, and a power supply 1209.
The peripheral interface 1203 may be used to connect at least one peripheral device associated with an I/O (Input/Output) to the processor 1201 and the memory 1202. In some embodiments, the processor 1201, the memory 1202, and the peripheral interface 1203 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 1201, the memory 1202, and the peripheral interface 1203 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The radio frequency circuit 1204 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1204 communicates with communication networks and other communication devices via electromagnetic signals, converting electrical signals into electromagnetic signals for transmission, or converting received electromagnetic signals into electrical signals. Optionally, the radio frequency circuit 1204 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1204 may communicate with other terminals via at least one wireless communication protocol, including but not limited to: the world wide web, metropolitan area networks, intranets, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1204 may further include NFC (Near Field Communication) related circuits, which is not limited by the present application.
The display 1205 is used to display a UI (User Interface), which may include graphics, text, icons, video, and any combination thereof. When the display 1205 is a touch display, it also has the ability to collect touch signals at or above its surface; such a touch signal may be input to the processor 1201 as a control signal for processing, and the display 1205 may then also provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, there may be one display 1205 disposed on the front panel of the terminal 1200; in other embodiments, there may be at least two displays 1205 disposed on different surfaces of the terminal 1200 or in a folded design; in still other embodiments, the display 1205 may be a flexible display disposed on a curved or folded surface of the terminal 1200. The display 1205 may even be arranged in a non-rectangular irregular pattern, i.e. a shaped screen, and may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 1206 is used to capture images or video. Optionally, camera assembly 1206 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so as to realize background blurring by fusing the main camera and the depth-of-field camera, panoramic and VR (Virtual Reality) shooting by fusing the main camera and the wide-angle camera, or other fusion shooting functions. In some embodiments, camera assembly 1206 may also include a flash, which can be a single-color-temperature or dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash and can be used for light compensation under different color temperatures.
The audio circuitry 1207 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 1201 for processing, or inputting the electric signals to the radio frequency circuit 1204 for voice communication. For purposes of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 1200. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 1201 or the radio frequency circuit 1204 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, the audio circuitry 1207 may also include a headphone jack.
The positioning component 1208 is used to locate the current geographic location of the terminal 1200 to enable navigation or LBS (Location Based Service). The positioning component 1208 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 1209 is used to power the various components in the terminal 1200. The power source 1209 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When the power source 1209 comprises a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 1200 also includes one or more sensors 1210. The one or more sensors 1210 include, but are not limited to: acceleration sensor 1211, gyroscope sensor 1212, pressure sensor 1213, fingerprint sensor 1214, optical sensor 1215, and proximity sensor 1216.
The acceleration sensor 1211 may detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 1200. For example, the acceleration sensor 1211 may be used to detect components of gravitational acceleration in three coordinate axes. The processor 1201 may control the display 1205 to display a user interface in either a landscape view or a portrait view based on the gravitational acceleration signal acquired by the acceleration sensor 1211. The acceleration sensor 1211 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 1212 may detect a body direction and a rotation angle of the terminal 1200, and the gyro sensor 1212 may collect a 3D motion of the user on the terminal 1200 in cooperation with the acceleration sensor 1211. The processor 1201 may implement the following functions based on the data collected by the gyro sensor 1212: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 1213 may be disposed at a side frame of the terminal 1200 and/or at a lower layer of the display 1205. When the pressure sensor 1213 is provided at a side frame of the terminal 1200, a grip signal of the terminal 1200 by a user may be detected, and the processor 1201 performs a left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 1213. When the pressure sensor 1213 is disposed at the lower layer of the display 1205, the processor 1201 controls the operability control on the UI interface according to the pressure operation of the user on the display 1205. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 1214 is used to collect a fingerprint of the user, and the processor 1201 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 1214, or the fingerprint sensor 1214 identifies the identity of the user based on the fingerprint collected. Upon recognizing that the user's identity is a trusted identity, the processor 1201 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 1214 may be provided on the front, back, or side of the terminal 1200. When a physical key or a vendor Logo is provided on the terminal 1200, the fingerprint sensor 1214 may be integrated with the physical key or the vendor Logo.
The optical sensor 1215 is used to collect the ambient light intensity. In one embodiment, processor 1201 may control the display brightness of display 1205 based on the intensity of ambient light collected by optical sensor 1215. Specifically, when the intensity of the ambient light is high, the display brightness of the display screen 1205 is turned up; when the ambient light intensity is low, the display brightness of the display screen 1205 is turned down. In another embodiment, processor 1201 may also dynamically adjust the shooting parameters of camera assembly 1206 based on the intensity of ambient light collected by optical sensor 1215.
A proximity sensor 1216, also referred to as a distance sensor, is typically provided on the front panel of the terminal 1200. The proximity sensor 1216 is used to collect the distance between the user and the front of the terminal 1200. In one embodiment, when the proximity sensor 1216 detects that the distance between the user and the front face of the terminal 1200 gradually decreases, the processor 1201 controls the display 1205 to switch from the bright screen state to the off screen state; when the proximity sensor 1216 detects that the distance between the user and the front surface of the terminal 1200 gradually increases, the processor 1201 controls the display 1205 to switch from the off-screen state to the on-screen state.
It will be appreciated by those skilled in the art that the structure shown in fig. 12 is not limiting and that more or fewer components than shown may be included or certain components may be combined or a different arrangement of components may be employed.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, which when executed by a processor of an electronic device, enables the electronic device to perform the steps performed by the terminal in the virtual object display method described above.
In an exemplary embodiment, a computer program product is also provided, which, when executed by a processor of an electronic device, enables the electronic device to perform the steps performed by the terminal in the virtual object display method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (27)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010833390.6A CN114155336B (en) | 2020-08-18 | 2020-08-18 | Virtual object display method, device, electronic device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114155336A CN114155336A (en) | 2022-03-08 |
CN114155336B true CN114155336B (en) | 2024-11-22 |
Family
ID=80460112
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010833390.6A Active CN114155336B (en) | 2020-08-18 | 2020-08-18 | Virtual object display method, device, electronic device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114155336B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115272556A (en) * | 2022-07-25 | 2022-11-01 | 网易(杭州)网络有限公司 | Method, apparatus, medium, and device for determining reflected light and global light |
CN119027571A (en) * | 2023-05-23 | 2024-11-26 | 华为技术有限公司 | Image drawing method and electronic device |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108109194A (en) * | 2017-12-29 | 2018-06-01 | 广东工业大学 | The realization method and system of laser paper effect in virtual reality scenario |
CN108447112A (en) * | 2018-01-24 | 2018-08-24 | 重庆爱奇艺智能科技有限公司 | Analogy method, device and the VR equipment of role's light environment |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6639594B2 (en) * | 2001-06-03 | 2003-10-28 | Microsoft Corporation | View-dependent image synthesis |
CN111179396B (en) * | 2019-12-12 | 2020-12-11 | 腾讯科技(深圳)有限公司 | Image generation method, image generation device, storage medium, and electronic device |
CN111009026B (en) * | 2019-12-24 | 2020-12-01 | 腾讯科技(深圳)有限公司 | Object rendering method and device, storage medium and electronic device |
Also Published As
Publication number | Publication date |
---|---|
CN114155336A (en) | 2022-03-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||