CN119537630A - Information display method, device, equipment, storage medium and program product
- Publication number
- CN119537630A (application CN202411688723.5A)
- Authority
- CN
- China
- Prior art keywords
- rendering
- dimensional model
- target object
- view angle
- event
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiment of the invention discloses an information display method, apparatus, device, storage medium and program product, relating to the field of computer technology. The method comprises: in response to a view angle conversion trigger event, acquiring event data corresponding to the view angle conversion trigger event, wherein the view angle conversion trigger event comprises a terminal pose change event of the current terminal and/or a body posture change event of the current user, the event data corresponding to the terminal pose change event comprises terminal pose data, and the event data corresponding to the body posture change event comprises body posture data; and rendering and displaying a three-dimensional model of a target object at a second rendering view angle based on the event data, wherein the first rendering view angle used before the event is different from the second rendering view angle. In this way, the rendering view angle of the three-dimensional model changes as the terminal pose and/or the user's body posture changes.
Description
Technical Field
Embodiments of the present invention relate to the field of computer technology, and in particular to an information display method, apparatus, device, storage medium, and program product.
Background
Currently, when a target object such as an article is displayed on an intelligent terminal such as a mobile phone, it is usually displayed in a planar manner, in the form of a picture, a video, or the like. For example, in the item display area of the home page or item detail page of an application (APP), an item image and related descriptive data are statically displayed as pictures, presenting a flat two-dimensional visual effect, and the user can interact with the target object only by finger clicks.
In the process of realizing the invention, it was found that the prior art has at least the following problem:
when the target object is displayed in a planar, two-dimensional manner, the limited two-dimensional information makes it difficult to provide a good visual effect for the user, so the display scheme needs to be enhanced.
Disclosure of Invention
The embodiment of the invention provides an information display method, apparatus, device, storage medium and program product, so as to optimize the object display scheme and improve the visual display effect.
In a first aspect, an embodiment of the present invention provides an information display method, where the method includes:
Rendering and displaying the three-dimensional model of the target object in a preset display area of the current page at a first rendering view angle;
Acquiring, in response to detection of a view angle conversion trigger event, event data corresponding to the view angle conversion trigger event, wherein the view angle conversion trigger event comprises a terminal pose change event of the current terminal and/or a body posture change event of the current user, the event data corresponding to the terminal pose change event comprises terminal pose data, and the event data corresponding to the body posture change event comprises body posture data;
And rendering and displaying the three-dimensional model of the target object at a second rendering view angle based on the event data, wherein the first rendering view angle is different from the second rendering view angle.
In a second aspect, an embodiment of the present invention further provides an information display apparatus, including:
The first rendering view angle display module is used for rendering and displaying the three-dimensional model of the target object in a preset display area of the current page at a first rendering view angle;
The event data acquisition module is used for acquiring, in response to detection of a view angle conversion trigger event, event data corresponding to the view angle conversion trigger event, wherein the view angle conversion trigger event comprises a terminal pose change event of the current terminal and/or a body posture change event of the current user, the event data corresponding to the terminal pose change event comprises terminal pose data, and the event data corresponding to the body posture change event comprises body posture data;
The second rendering view angle display module is used for rendering and displaying the three-dimensional model of the target object at a second rendering view angle based on the event data, wherein the first rendering view angle is different from the second rendering view angle.
In a third aspect, an embodiment of the present invention provides an electronic device, including:
One or more processors;
A memory for storing one or more programs;
The one or more programs, when executed by the one or more processors, cause the one or more processors to implement the information display method provided by any embodiment of the present invention.
In a fourth aspect, embodiments of the present invention further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the information display method provided by any embodiment of the present invention.
In a fifth aspect, embodiments of the present invention further provide a computer program product comprising a computer program which, when executed by a processor, implements the information display method provided by any embodiment of the present invention.
The embodiments of the invention described above have the following advantages or beneficial effects:
After the three-dimensional model of the target object is rendered and displayed at the first rendering view angle, if a terminal pose change event and/or a body posture change event of the current user is detected, the three-dimensional model of the target object is rendered and displayed at a second rendering view angle, different from the first rendering view angle, based on the event data corresponding to the detected event. The rendering view angle of the three-dimensional model displayed on the page thus changes as the terminal pose and/or the user's body posture changes, which effectively improves the visual display effect.
Drawings
Fig. 1 is a flowchart of an information display method according to an embodiment of the present invention;
Fig. 2 is a schematic illustration of a three-dimensional model of a target object at different rendering view angles provided in accordance with an embodiment of the present invention;
Fig. 3 is a schematic illustration of a three-dimensional model of a target object at yet other rendering view angles provided in accordance with an embodiment of the present invention;
Fig. 4A is a schematic diagram showing the effect of a three-dimensional model of a target object breaking out of the window according to an embodiment of the present invention;
Fig. 4B is an example schematic diagram of a three-dimensional model of a target object breaking out of the window according to an embodiment of the present invention;
Fig. 5 is an effect diagram of the three-dimensional model display of yet another target object provided in accordance with an embodiment of the present invention;
Fig. 6 is a schematic diagram of the three-dimensional model display of a target object provided in accordance with an embodiment of the present invention;
Fig. 7 is a schematic diagram showing a three-dimensional model passing through a preset mask layer according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of displaying a three-dimensional model of a target object at a first rendering view angle within the information display window of the current page according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of displaying a three-dimensional model of a target object at a second rendering view angle within the information display window of the current page according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of displaying a three-dimensional model of a target object at a first rendering view angle outside the information display window of the current page according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of displaying a three-dimensional model of a target object at a second rendering view angle outside the information display window of the current page according to an embodiment of the present invention;
Fig. 12 is a flowchart of yet another information display method according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of displaying a target object based on multiple display layers according to an embodiment of the present invention;
Fig. 14 is a flowchart of another information display method according to an embodiment of the present invention;
Fig. 15 is a schematic view of a view cone provided in accordance with an embodiment of the present invention;
Fig. 16 is a view cone tilt schematic provided in accordance with an embodiment of the present invention;
Fig. 17 is a schematic structural diagram of an information display apparatus according to an embodiment of the present invention;
Fig. 18 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Fig. 1 is a flowchart of an information display method provided by an embodiment of the present invention. This embodiment is applicable to scenarios in which a three-dimensional model of an object such as an article or commodity is displayed on a terminal page, for example, when the three-dimensional model of a commodity is displayed on the home page or the commodity detail page of an APP. The method can be executed by an information display apparatus integrated in an intelligent terminal; the apparatus can be realized by software and/or hardware, and the intelligent terminal may be a mobile terminal such as a mobile phone, a tablet computer (PAD), or a wearable device, or a personal computer (PC), etc. As shown in fig. 1, the method specifically includes the following steps:
And S110, rendering and displaying the three-dimensional model of the target object in a preset display area of the current page at a first rendering view angle.
Optionally, the current page may include the home page, an item detail page, a live page, a comment page, or a waterfall page of an application. The preset display area may be the area of the current page used for displaying the target object, and the target object may include a commodity. The rendering view angle refers to the angle and position from which the scene is observed and rendered during rendering. In the rendering process, the position parameters of a virtual camera are used, and images from different view angles can be rendered by adjusting these position parameters. The virtual camera simulates the behavior of a real camera in the terminal: it captures and records image information in the virtual scene, and the target object is displayed on a two-dimensional plane by rendering this image information. The first rendering view angle may be the rendering view angle used when rendering and displaying the three-dimensional model of the target object at any given moment, for example the initial moment at which the three-dimensional model is rendered and displayed on the current page.
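Since the scheme ties each rendering view angle to the virtual camera's position parameters, the following minimal sketch illustrates the idea; it is an illustrative assumption, not code from the patent, and all names (VirtualCamera, withPerspective) are invented for this example.

```typescript
// Minimal sketch: a rendering view angle is determined by the virtual
// camera's position and the point it looks at in the virtual scene.
type Vec3 = [number, number, number];

interface VirtualCamera {
  position: Vec3; // where the camera sits in the virtual scene
  target: Vec3;   // the point it looks at, e.g. the model's center
  up: Vec3;       // world-up reference used to build the view matrix
}

// Moving the camera while keeping the same target yields a new
// rendering view angle of the same three-dimensional model.
function withPerspective(camera: VirtualCamera, newPosition: Vec3): VirtualCamera {
  return { ...camera, position: newPosition };
}

const firstViewAngle: VirtualCamera = {
  position: [0, 0, 5],
  target: [0, 0, 0],
  up: [0, 1, 0],
};
const secondViewAngle = withPerspective(firstViewAngle, [1.2, 0.3, 4.8]);
```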
S120, responding to the detection of the view angle conversion trigger event, and acquiring event data corresponding to the view angle conversion trigger event.
The view angle conversion trigger event comprises a terminal pose change event of the current terminal and/or a body posture change event of the current user; the event data corresponding to the terminal pose change event comprises terminal pose data, and the event data corresponding to the body posture change event comprises body posture data.
For example, the terminal may be an intelligent terminal, including electronic terminal devices such as a mobile phone, a PAD, a wearable device, and a PC. The terminal pose change event may be an event in which the position and/or angle of the terminal changes, resulting in a change in the viewing angle between the user and the terminal. The terminal pose data may include terminal angle data and/or terminal position data from before and/or after the terminal pose changes.
The body posture change event may be an event in which the position and/or angle of a body part of the user changes, resulting in a change in the viewing angle between the user and the terminal. The body posture data may include body angle data and/or body position data from before and/or after the body posture changes.
In practical applications, the view angle conversion trigger event may be an event in which the viewing angle between the user and the terminal changes due to a pose change of the terminal, due to a posture change of the user, or due to pose and posture changes of both the terminal and the user.
And S130, rendering and displaying the three-dimensional model of the target object at a second rendering view angle based on the event data.
Wherein the first rendering view angle is different from the second rendering view angle. Because the terminal pose or the user's body posture has changed, the viewing angle between the user and the terminal has also changed. In order to present an image of the three-dimensional model matching the changed viewing angle, the rendering view angle of the three-dimensional model of the target object needs to be adjusted adaptively, i.e. the first rendering view angle is adjusted to the second rendering view angle, thereby improving the visual display effect.
Fig. 2 is a schematic illustration of the three-dimensional model of a target object at different rendering view angles according to an embodiment of the present invention. In the presentation shown in fig. 2, the user's body posture remains unchanged while the terminal pose changes. As shown in fig. 2, the middle rendering presentation shows the three-dimensional model of a commodity rendered at rendering view angle 1; the left presentation shows the model rendered at rendering view angle 2 when the terminal tilts (or rotates) to the left; and the right presentation shows the model rendered at rendering view angle 3 when the terminal tilts (or rotates) to the right.
Fig. 3 is a schematic illustration of the three-dimensional model of a target object at yet other rendering view angles provided in accordance with an embodiment of the present invention. In the presentation shown in fig. 3, the user's body posture changes while the terminal pose remains unchanged. As shown in fig. 3, the middle rendering presentation shows the three-dimensional model of a commodity rendered at rendering view angle 4; the left presentation shows the model rendered at rendering view angle 5 when the user's body, head, or eyes move to the left; and the right presentation shows the model rendered at rendering view angle 6 when the user's body, head, or eyes move to the right.
By adjusting the rendering view angle of the three-dimensional model of the commodity as the pose of the terminal and/or the posture of the user changes, details of the commodity model can be viewed from different angles, the accuracy and immersiveness of the user's observation of the commodity are improved, the commodity display angle remains associated with the user's viewing angle, a real commodity display mode can be simulated, and the visual experience is improved.
According to the technical scheme of this embodiment, a three-dimensional model of a target object is rendered and displayed at a first rendering view angle in a preset display area of the current page; in response to detection of a view angle conversion trigger event, event data corresponding to the trigger event is acquired, wherein the view angle conversion trigger event comprises a terminal pose change event of the current terminal and/or a body posture change event of the current user, the event data corresponding to the terminal pose change event comprises terminal pose data, and the event data corresponding to the body posture change event comprises body posture data; and the three-dimensional model of the target object is rendered and displayed at a second rendering view angle based on the event data, the first rendering view angle being different from the second rendering view angle. This solves the problem in the prior art that the display angle of the target object is fixed and cannot be dynamically adjusted as the user's viewing angle changes. Since the rendering view angle of the target object is adjusted as the pose of the terminal and/or the posture of the user changes, details of the object can be viewed from different angles, the accuracy and immersiveness of the user's observation of the object are improved, the object display angle remains associated with the user's viewing angle, a real object display mode can be simulated, and the visual experience is improved.
In an alternative implementation of the embodiment of the present invention, S110 may be implemented in several ways. Rendering and displaying the three-dimensional model of the target object at the first rendering view angle in the preset display area of the current page may comprise: rendering and displaying the three-dimensional model at the first rendering view angle inside the information display window of the current page; or rendering and displaying the three-dimensional model at the first rendering view angle outside the information display window of the current page; or rendering and displaying the three-dimensional model at the first rendering view angle such that one part of the model is located inside the information display window of the current page and the other part is located outside it.
The information display window may be a display window in the current page for displaying information of the target object. Rendering and displaying the three-dimensional model of the target object at the first rendering view angle inside the information display window of the current page can be understood as rendering and displaying the model within a fixed display window; that is, the display of the target object does not exceed the information display window of the current page.
Rendering and displaying the three-dimensional model of the target object at the first rendering view angle outside the information display window of the current page can be understood as the model jumping out of the information display window. When the three-dimensional model is displayed outside the information display window, it is closer to the current virtual camera than the terminal screen is; that is, the distance from the three-dimensional model to the current virtual camera is smaller than the distance from the terminal screen to the current virtual camera. Specifically, the distance from any point on the three-dimensional model to the current virtual camera may be smaller than the distance from the terminal screen to the current virtual camera, or the distance from the center point of the three-dimensional model to the current virtual camera may be smaller than that distance.
Alternatively, when the three-dimensional model is rendered and displayed, one part of it may be located inside the information display window of the current page and the other part outside it. Displaying the three-dimensional model partly inside and partly outside the information display window lets the user experience varied display effects; for example, the features of the part of the commodity displayed outside the information display window can be highlighted.
In the embodiment of the invention, any one of the three modes can be adopted to render and display the three-dimensional model of the target object. In addition, in the alternative implementation manner of the embodiment of the invention, a plurality of display modes can be combined for use.
Optionally, after the three-dimensional model of the target object is rendered and displayed at the first rendering view angle inside the information display window of the current page, the method further comprises: controlling the three-dimensional model to move from inside the information display window to outside the information display window; and during the movement of the three-dimensional model, selecting the pixels of the model outside the preset mask layer for rendering and display, while skipping the rendering and display of the pixels of the model inside the preset mask layer.
By first rendering and displaying the three-dimensional model inside the information display window and then moving it outside the window, the target object is displayed dynamically, giving the user the impression that the model breaks out of the window while observing the commodity.
In addition, in practical applications, the three-dimensional model may also be displayed by moving from outside the information display window to inside it. Specifically, when the three-dimensional model of the target object is rendered and displayed at the first rendering view angle outside the information display window of the current page, the method further comprises controlling the three-dimensional model to move from outside the information display window to inside it. Through such dynamic display from inside to outside, or from outside to inside, of the information display window, free display of the target object and dynamic adjustment following the user can be achieved. For example, when the terminal approaches the user, the three-dimensional model may be controlled to move from inside the information display window to outside it; when the terminal moves away from the user, the model may be controlled to move from outside the information display window to inside it.
Fig. 4A is a schematic diagram showing the effect of the three-dimensional model of a target object dynamically breaking out of the window according to an embodiment of the present invention. The left part of fig. 4A shows the three-dimensional model of the target object displayed inside the information display window; the right part shows the model displayed outside the information display window of the current page. Controlling the three-dimensional model to move from inside the information display window to outside it corresponds to the change from the left effect diagram to the right effect diagram in fig. 4A, and controlling it to move from outside to inside corresponds to the change from the right effect diagram to the left one. Fig. 4B is an example schematic diagram of the three-dimensional model of a target object such as a commodity breaking out of the window according to an embodiment of the present invention.
Fig. 5 is an effect diagram of the three-dimensional model display of yet another target object provided according to an embodiment of the present invention. As shown in fig. 5, the three-dimensional model of the target object is partly inside and partly outside the information display window. Fig. 6 is a schematic diagram of the three-dimensional model display of a target object provided according to an embodiment of the present invention. As shown in fig. 6, the three-dimensional model of a target object such as a commodity is partly inside and partly outside the information display window.
In the dynamic break-out display process, in order to further improve the effect of the model passing through the picture and prominently display the model part outside the preset mask layer, during the movement of the three-dimensional model the pixels of the model outside the preset mask layer are selected for rendering and display, while the rendering and display of the pixels inside the preset mask layer is skipped.
The preset mask layer is used to occlude the part of the three-dimensional model inside it so that this part is not displayed. The depth value of the preset mask layer may be preset so that it lies above the user interface (UI) layer. The preset mask layer may be transparent or non-transparent and carries no color information. In specific rendering, the rendering pipeline compares the depth value of each pixel of the three-dimensional model to be rendered with the depth value of the preset mask layer: pixels whose depth value is greater than that of the preset mask layer, i.e. pixels outside the mask layer, are rendered and displayed, while pixels whose depth value is smaller, i.e. pixels inside the mask layer, are skipped.
Fig. 7 is a schematic diagram showing a three-dimensional model passing through a preset mask layer according to an embodiment of the present invention. As shown in fig. 7, the outside of the preset mask layer can be understood as the side of the mask layer close to the virtual camera, and the inside as the side away from the virtual camera. The part of the three-dimensional model outside the preset mask layer is rendered and displayed, while the part inside is not, realizing the effect of the three-dimensional model passing through the picture.
The preset mask layer may be transparent or non-transparent. When it is transparent, it occludes the part of the three-dimensional model inside it, but the content of the UI layer behind it can still be rendered and displayed. When it is non-transparent, it occludes both the part of the three-dimensional model inside it and the entire content of the UI layer, i.e. the UI layer content is not rendered and displayed. By rendering and displaying the three-dimensional model inside the information display window, moving it out of the window, or displaying it partly inside and partly outside, varied display behaviors are obtained; the dynamic movement between the inside and the outside of the window enhances the sense of the model breaking out of the window, and when part of the model is occluded by the preset mask layer, this sense is further strengthened, so that the user perceives the model as passing through the picture.
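As a hedged sketch of this depth-comparison masking, expressed as two standard WebGL passes (drawMaskLayer and drawModel are assumed helpers; the two-pass approach is one way to realize the described pixel selection, not necessarily the patent's exact pipeline):

```typescript
// Sketch: write the invisible mask layer's depth first, then let the
// depth test discard model fragments that lie behind the mask surface.
function renderWithMask(gl: WebGLRenderingContext,
                        drawMaskLayer: () => void,  // assumed helper
                        drawModel: () => void): void {  // assumed helper
  gl.enable(gl.DEPTH_TEST);
  gl.depthFunc(gl.LESS);

  // Pass 1: write only the mask layer's depth. Color writes are disabled,
  // so the mask stays invisible; it carries no color information.
  gl.colorMask(false, false, false, false);
  gl.depthMask(true);
  drawMaskLayer();

  // Pass 2: draw the model normally. Fragments behind the mask surface
  // (farther from the virtual camera) fail the depth test and are skipped;
  // fragments on the near side of the mask are rendered and displayed.
  gl.colorMask(true, true, true, true);
  drawModel();
}
```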
It should be noted that any of the above rendering and display modes may also be selected when the three-dimensional model of the target object is rendered and displayed at the second rendering view angle. That is, at the second rendering view angle, the three-dimensional model may be displayed inside the information display window of the current page, outside the information display window, or partly inside and partly outside the information display window.
To better illustrate the specific application effect of the information display method provided by the embodiment of the present invention: fig. 8 is a schematic diagram of displaying the three-dimensional model of a target object such as a commodity at the first rendering view angle inside the information display window of the current page, and fig. 9 is a schematic diagram of displaying the same three-dimensional model at the second rendering view angle inside the information display window. As shown in figs. 8 and 9, inside the information display window the three-dimensional model can dynamically adjust its display view angle following the change of the terminal pose or the user's posture.
Fig. 10 is a schematic diagram of displaying the three-dimensional model of a target object such as a commodity at the first rendering view angle outside the information display window of the current page, and fig. 11 is a schematic diagram of displaying the same three-dimensional model at the second rendering view angle outside the information display window. As shown in figs. 10 and 11, outside the information display window the three-dimensional model can likewise dynamically adjust its display view angle following the change of the terminal pose or the user's posture.
On this basis, the three-dimensional model of the article may also be controlled to move from inside the information display window (as shown in figs. 8 and 9) to outside the window (as shown in figs. 10 and 11), or from outside the window to inside it.
Fig. 12 is a flowchart of yet another information display method according to an embodiment of the present invention. As shown in fig. 12, the method includes:
And S510, rendering and displaying, on an information display layer in the information display window of the current page, the two-dimensional material data associated with the target object, and rendering and displaying, on a model display layer in the information display window, the three-dimensional model of the target object at a first rendering view angle.
The two-dimensional material data may be any two-dimensional information data related to the target object. For example, when the target object is a commodity, the two-dimensional material data includes, but is not limited to, one or more of the name, brand, price, characteristics, and functional parameters of the commodity.
The information display layer may include at least one display layer, and the model display layer may also include at least one display layer. Different display layers have different depths within the information display window, i.e. different depth values. When the information display layer comprises a plurality of display layers, the two-dimensional material data associated with the target object can be displayed across multiple layers, so that the target object is displayed in a more stereoscopic manner.
Optionally, the model display layer may be below or above the information display layer; that is, the depth of the model display layer may be greater or smaller than that of the information display layer. By displaying the three-dimensional model and the two-dimensional material data in separate layers, the stereoscopic quality of the commodity display can be improved, and the content behind the screen can be fully expressed.
In a specific implementation, placement background information of the target object may also be displayed on a background display layer in the information display window.
The background display layer may comprise one or more display layers, and different display layers have different depths within the information display window. The placement background information comprises a dynamic image, a static picture, or a background three-dimensional model. By adding a background display layer on the basis of the information display layer and the model display layer, the scene adaptability of the commodity display can be improved, so that the user can fully understand the usage scene of the commodity.
Fig. 13 is a schematic diagram of displaying a target object based on multiple display layers according to an embodiment of the present invention. As shown in fig. 13, the target object may be fully expressed by four display layers, which are, in order of increasing depth value from front to back, information display layer A, information display layer B, model display layer C, and background display layer D. Rendering and displaying the target object through display layers of multiple depths improves the stereoscopic quality of the display.
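An illustrative sketch of the layered composition of Fig. 13 follows (layer names follow the figure; the depth values and the back-to-front compositing order are assumptions for illustration):

```typescript
// Sketch: each display layer carries a depth value; compositing runs
// back-to-front so nearer layers are drawn over farther ones.
interface DisplayLayer {
  name: string;
  depth: number; // larger depth = farther from the viewer (assumption)
  render: () => void;
}

const layers: DisplayLayer[] = [
  { name: 'information display layer A', depth: 1, render: () => { /* 2D material data */ } },
  { name: 'information display layer B', depth: 2, render: () => { /* 2D material data */ } },
  { name: 'model display layer C',       depth: 3, render: () => { /* 3D model */ } },
  { name: 'background display layer D',  depth: 4, render: () => { /* placement background */ } },
];

// Composite from the deepest layer to the shallowest.
[...layers].sort((a, b) => b.depth - a.depth).forEach(layer => layer.render());
```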
The multi-layer composite display mode provided by the embodiment of the present invention is applicable not only when the three-dimensional model of the target object is rendered and displayed at the first rendering view angle inside the information display window of the current page, but also when the model is rendered and displayed on the current page with one part inside the information display window and the other part outside it. That is, the part of the three-dimensional model displayed inside the information display window may adopt the multi-layer composite display mode described above.
S520, responding to the detection of the view angle conversion trigger event, and acquiring event data corresponding to the view angle conversion trigger event.
The view angle conversion trigger event comprises a terminal pose change event of the current terminal and/or a body posture change event of the current user; the event data corresponding to the terminal pose change event comprises terminal pose data, and the event data corresponding to the body posture change event comprises body posture data.
And S530, rendering and displaying the three-dimensional model of the target object at a second rendering view angle based on the event data, wherein the first rendering view angle is different from the second rendering view angle.
Optionally, rendering and displaying the three-dimensional model of the target object at the second rendering view angle based on the event data comprises rendering and displaying the three-dimensional model again at the second rendering view angle based on the event data, wherein the display view angle of the three-dimensional model after being rendered at the second rendering view angle has a positive or negative correlation with the terminal pose change event of the current terminal and/or the body posture change event of the current user; specifically, the display view angle of the three-dimensional model may have a positive or negative correlation with the direction of the terminal pose change and/or the direction of the body posture change. Positive correlation may mean that the direction of change of the display view angle of the three-dimensional model is the same as the direction of change of the terminal pose and/or the body posture; the direction of change may be any one or a combination of up, down, left, and right. For example, when the terminal and/or the user's body rotates to the left, the display view angle of the three-dimensional model also rotates to the left; when the terminal and/or the user's body rotates to the right, the display view angle also rotates to the right. Negative correlation may mean that the direction of change of the display view angle differs from, for example is opposite to, the direction of change of the terminal pose and/or the body posture. For example, when the terminal and/or the user's body rotates to the left, the display view angle of the three-dimensional model rotates to the right, and vice versa.
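As a minimal sketch of this sign convention (an illustration, not the patent's implementation; the names and the two-component offset shape are assumptions):

```typescript
// Positive correlation: the model's display view angle follows the pose
// change; negative correlation: it moves in the opposite direction.
const POSITIVE_CORRELATION = 1;
const NEGATIVE_CORRELATION = -1;

function displayViewAngleDelta(poseDelta: { x: number; y: number },
                               correlation: 1 | -1): { x: number; y: number } {
  return { x: correlation * poseDelta.x, y: correlation * poseDelta.y };
}
```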
It should be noted that, when the three-dimensional model of the target object is rendered and displayed at the second rendering view angle, the multi-layer composite display mode described above may also be adopted. When the model display layer comprises a plurality of display layers, each display layer may display one three-dimensional model of the target object, which means that there are multiple three-dimensional models of the target object; for each of these models, the rendering view angle needs to be converted from the first rendering view angle to the second rendering view angle in order to enhance the overall interactivity.
Fig. 14 is a flowchart of another information display method according to an embodiment of the present invention. As shown in fig. 14, the method includes:
And S710, rendering and displaying the three-dimensional model of the target object in a preset display area of the current page at a first rendering view angle.
Optionally, rendering and displaying the three-dimensional model of the target object at the first rendering view angle in the preset display area of the current page comprises: rendering and displaying the three-dimensional model at the first rendering view angle inside the information display window of the current page; or rendering and displaying it at the first rendering view angle outside the information display window; or rendering and displaying it at the first rendering view angle with one part of the model inside the information display window and the other part outside it.
Optionally, rendering and displaying the three-dimensional model of the target object at the first rendering view angle inside the information display window of the current page comprises: rendering and displaying, on an information display layer in the information display window, the two-dimensional material data related to the target object, and rendering and displaying, on a model display layer in the information display window, the three-dimensional model at the first rendering view angle, wherein the information display layer and the model display layer each comprise at least one display layer, and different display layers have different depths within the information display window.
Optionally, rendering and displaying the three-dimensional model of the target object at the first rendering view angle inside the information display window of the current page further comprises displaying the placement background information of the target object on a background display layer of the information display window, wherein the placement background information comprises a dynamic image, a static picture, or a background three-dimensional model.
S720, responding to the detection of the view angle conversion trigger event, and acquiring event data corresponding to the view angle conversion trigger event.
The view angle conversion trigger event comprises a terminal pose change event of the current terminal and/or a body posture change event of the current user; the event data corresponding to the terminal pose change event comprises terminal pose data, and the event data corresponding to the body posture change event comprises body posture data. Optionally, the body posture change event comprises an eye posture change event and/or a head posture change event, and the event data corresponding to the eye posture change event or the head posture change event includes the current eye center point data of the current user.
And S730, determining the view angle offset based on the event data.
The view angle offset characterizes the positional offset of the second position point of the virtual camera, corresponding to the second rendering view angle, relative to the first position point of the virtual camera, corresponding to the first rendering view angle. Since rendering images from different view angles can be achieved by adjusting the position of the virtual camera, this step first determines the positional offset of the virtual camera's position point after the change relative to its position point before the change.
Specifically, there are a variety of implementations of S730.
Optionally, in a first implementation of S730, when the view angle conversion trigger event includes a terminal pose change event, the event data includes current gyroscope angle data of the current terminal. Determining the view angle offset based on the event data comprises: calculating terminal deflection direction data based on the current gyroscope angle data and preset correction angle data, and determining the view angle offset based on the terminal deflection direction data.
The gyroscope angle data may be data obtained by an angular velocity sensor (i.e. a gyroscope) based on the law of conservation of angular momentum. Specifically, a gyroscope contains one or more tiny rotors that keep their original rotation direction and speed when the terminal rotates, so the rotation state of the terminal can be perceived by measuring the change in the angular velocity of the rotors. The current gyroscope angle data may be the orientation data of the terminal.
The preset correction angle data may be a preset fixed value used to offset the current gyroscope angle data, so that the corrected view angle is more convenient for the user to observe. The preset correction angle data may be an angle determined according to the user's hand-held grip of the terminal, for example 15 degrees.
The terminal deflection direction data may be the difference between the current gyroscope angle data and the preset correction angle data. The deflection direction of the terminal can be accurately identified from these two quantities, so that the display angle of the three-dimensional model of the target object can be adjusted reasonably. The terminal deflection direction data is specifically three-dimensional vector data, which may be expressed as (x, y, z), and the view angle offset may include the lateral and longitudinal components of the terminal deflection direction data, i.e. its x and y data.
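A hedged sketch of this first implementation (function and constant names are illustrative; only the 15-degree example value comes from the text):

```typescript
// Sketch: subtract the preset correction angle from the current gyroscope
// reading and keep the lateral/longitudinal components as the view offset.
type Vec3 = [number, number, number];

// Preset hand-held correction, in degrees; 15 follows the text's example.
const CORRECTION_ANGLE: Vec3 = [15, 0, 0];

function terminalViewOffset(gyroAngles: Vec3): { x: number; y: number } {
  const deflection: Vec3 = [
    gyroAngles[0] - CORRECTION_ANGLE[0],
    gyroAngles[1] - CORRECTION_ANGLE[1],
    gyroAngles[2] - CORRECTION_ANGLE[2],
  ];
  // Only the lateral (x) and longitudinal (y) components feed the offset.
  return { x: deflection[0], y: deflection[1] };
}
```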
Optionally, in a second implementation of S730, when the view angle conversion trigger event comprises a body posture change event, and the body posture change event comprises an eye posture change event and/or a head posture change event, the event data comprises the current eye center point data of the current user. Determining the view angle offset based on the event data comprises: calculating the user's view angle orientation data based on the current eye center point data and the screen center point data, and determining the view angle offset based on the user's view angle orientation data.
The current eye center point data may be obtained by performing face recognition on the user and acquiring the normal direction between the eyes and the screen, so as to determine the direction and position of the user's line of sight; for example, it may be obtained through augmented reality recognition technology. Illustratively, the current eye center point data may be calculated by the formula eye_center = MV_matrix × (left_eye_position + right_eye_position) / 2, where eye_center is the current eye center point data, left_eye_position is the left eye position, right_eye_position is the right eye position, and MV_matrix is a 4×4 model-view matrix. The eye center may be a three-dimensional vector.
The screen center point data may be a preset value, for example set to (0, 0). The user's view angle orientation data may be the difference between the current eye center point data and the screen center point data, from which the view angle offset can be determined, so that the display angle of the three-dimensional model is dynamically adjusted as the user's eye or head posture changes. For example, the view angle offset may include the lateral and longitudinal components of the user's view angle orientation data, i.e. its x and y data.
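A sketch of this second implementation under stated assumptions: transformPoint is an assumed matrix helper, the column-major layout is an assumption, and the screen center point defaults to the origin as in the example above:

```typescript
// Sketch: eye_center = MV_matrix × (left_eye + right_eye) / 2, then the
// user's view orientation is eye_center minus the screen center point.
type Vec3 = [number, number, number];
type Mat4 = number[]; // 16 entries, column-major (assumption)

declare function transformPoint(m: Mat4, p: Vec3): Vec3; // assumed helper

function userViewOffset(mvMatrix: Mat4, leftEye: Vec3, rightEye: Vec3,
                        screenCenter: Vec3 = [0, 0, 0]): { x: number; y: number } {
  const midpoint: Vec3 = [
    (leftEye[0] + rightEye[0]) / 2,
    (leftEye[1] + rightEye[1]) / 2,
    (leftEye[2] + rightEye[2]) / 2,
  ];
  const eyeCenter = transformPoint(mvMatrix, midpoint);
  // Keep the lateral (x) and longitudinal (y) components as the offset.
  return { x: eyeCenter[0] - screenCenter[0], y: eyeCenter[1] - screenCenter[1] };
}
```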
Optionally, in a third implementation of S730, when the view angle conversion trigger event includes a body posture change event, and the body posture change event includes a hand posture change event, the event data includes the current hand position point data of the current user. Determining the view angle offset based on the event data comprises: calculating model deflection direction data based on the current hand position point data and the previous hand position point data from before the hand posture change, and determining the view angle offset based on the model deflection direction data. For example, the difference between the current hand position point data and the previous hand position point data is used as the model deflection direction data, and the view angle offset may include the x and y data of the model deflection direction data.
The current hand position point data of the current user may be position point data of the user's hand on the terminal screen. The model deflection direction data can be determined from data such as the movement position and movement distance of the user's finger on the terminal screen, and the view angle offset is determined accordingly, so that the display angle of the three-dimensional model is dynamically adjusted as the user's hand moves.
Optionally, a fourth implementation of S730 combines at least two of the first, second, and third implementations. When combining, the view angle offsets obtained by the combined implementations can be added as vectors to obtain the final view angle offset, so that an accurate view angle offset is determined, for example, when both the terminal pose and the user's body posture change.
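A sketch of the fourth implementation's vector addition (assumption: the offsets are combined without weighting, which the text does not specify):

```typescript
// Sketch: when both the terminal pose and the body posture change, the
// per-event view offsets are summed component-wise as vectors.
interface ViewOffset { x: number; y: number; }

function combineOffsets(...offsets: ViewOffset[]): ViewOffset {
  return offsets.reduce(
    (sum, o) => ({ x: sum.x + o.x, y: sum.y + o.y }),
    { x: 0, y: 0 },
  );
}

// e.g. combineOffsets(terminalViewOffset(gyro), userViewOffset(mv, left, right))
```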
On the basis of the above embodiments, the range of the view angle offset may also be limited to prevent rendering distortion. Specifically, after the view angle offset is determined based on the event data and before the three-dimensional model of the target object is rendered and displayed at the second rendering view angle based on the view angle offset, the method further comprises: if the view angle offset exceeds a preset value range, adjusting the view angle offset so that the adjusted view angle offset falls within the preset value range.
The view angle offset may include multidimensional data, and a corresponding preset value range may be set for each dimension. Illustratively, the preset value range is the interval [-2, 2]. In a specific application example, if the view angle offset exceeds the range [-2, 2], it can be adjusted to fall within [-2, 2], so as to avoid distortion of the rendering effect. To further ensure the rendering effect, when the view angle offset is multidimensional, the dimensions can be adjusted in the same proportion, so that the deflection amplitude remains the same in each dimension while the data of each dimension falls within [-2, 2].
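A minimal sketch of this range limiting, assuming a two-component offset and the example interval [-2, 2]; the same-proportion scaling follows the paragraph above:

```typescript
// Sketch: if either component leaves [-2, 2], scale both components by the
// same factor so the deflection direction is preserved while each falls
// back inside the preset range.
const OFFSET_RANGE = 2; // from the example interval [-2, 2]

function clampViewOffset(offset: { x: number; y: number }): { x: number; y: number } {
  const maxComponent = Math.max(Math.abs(offset.x), Math.abs(offset.y));
  if (maxComponent <= OFFSET_RANGE) return offset;
  const scale = OFFSET_RANGE / maxComponent; // same proportion per dimension
  return { x: offset.x * scale, y: offset.y * scale };
}
```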
And S740, rendering and displaying the three-dimensional model of the target object at a second rendering view based on the view angle offset.
The first rendering view angle can be adjusted according to the view angle offset to obtain the second rendering view angle, so that the three-dimensional model shown at the second rendering view angle better fits the user's viewing angle. The first rendering view angle can be adjusted according to the view angle offset in various ways. For example, a mapping relationship between the view angle offset and the adjustment of the first rendering view angle may be established, and the second rendering view angle obtained according to this mapping. Alternatively, the relationship between the view angle offset in space and the projection coordinates on the screen can be determined according to computer graphics theory, and the first rendering view angle adjusted accordingly to obtain the second rendering view angle. By adjusting the rendering view angle of the three-dimensional model according to the view angle offset, the object display dynamically follows the user's viewing angle, more details can be shown, and an object display effect similar to real space is achieved.
In order to simulate the picture observed by a real eyeball, optionally, rendering and displaying the three-dimensional model of the target object at the second rendering view angle based on the view angle offset comprises: constructing an off-axis projection matrix on the basis of the perspective projection matrix from the lateral view angle offset and the longitudinal view angle offset among the view angle offsets; determining the projection coordinates of the three-dimensional model of the target object based on the off-axis projection matrix; and rendering and displaying the three-dimensional model based on the projection coordinates.
The perspective projection matrix is a concept from computer graphics. It converts object coordinates in the three-dimensional world into projection coordinates on the two-dimensional screen, realizing a perspective (or orthographic) projection effect. The perspective projection matrix defines a view frustum (view volume), i.e. the region of view space within which objects are visible.
Fig. 15 is a schematic view of a view frustum provided according to an embodiment of the present invention. As shown in fig. 15, in the perspective projection matrix the clipping planes are defined by the six-tuple left (l), right (r), top (t), bottom (b), near (n), far (f). In addition, parameters such as the field of view and the aspect ratio can be encoded in the perspective projection matrix. The parameters of the perspective projection matrix determine the shape and size of the view frustum and thereby the image rendered on the terminal screen. In the common OpenGL convention, the basic perspective projection matrix is

$$P=\begin{pmatrix}\frac{2n}{r-l}&0&\frac{r+l}{r-l}&0\\0&\frac{2n}{t-b}&\frac{t+b}{t-b}&0\\0&0&-\frac{f+n}{f-n}&-\frac{2fn}{f-n}\\0&0&-1&0\end{pmatrix}$$
In the embodiment of the invention, in order to keep the screen picture consistent with the user's observation angle, an oblique view frustum is adopted on the basis of the perspective projection matrix, so that the picture observed by a real eye can be simulated. Fig. 16 is a schematic diagram of view frustum tilting provided according to an embodiment of the invention. As shown in fig. 16, the left side is a normal view frustum and the right side is an off-axis view frustum. When a view angle conversion trigger event occurs, the change of the rendering view angle can be realized based on the off-axis view frustum shown on the right side of fig. 16, thereby simulating the picture observed by a real eye.
To realize the oblique view frustum, an off-axis projection matrix is constructed on the basis of the perspective projection matrix, according to the lateral view angle offset and the longitudinal view angle offset among the view angle offsets. For example, one standard construction adds the offsets to the skew terms of the perspective projection matrix, giving an off-axis projection matrix of the form

$$P'=\begin{pmatrix}\frac{2n}{r-l}&0&\frac{r+l}{r-l}+x&0\\0&\frac{2n}{t-b}&\frac{t+b}{t-b}+y&0\\0&0&-\frac{f+n}{f-n}&-\frac{2fn}{f-n}\\0&0&-1&0\end{pmatrix}$$

where x is the lateral view angle offset and y is the longitudinal view angle offset among the view angle offsets. That is, x is the lateral component of the terminal deflection direction data or of the user viewing angle orientation data, and y is the corresponding longitudinal component.
After the off-axis projection matrix is obtained, the projection coordinates of the three-dimensional model of the target object can be determined according to the standard screen-projection computations of computer graphics, and the three-dimensional model of the target object is rendered and displayed based on those projection coordinates. Rendering the three-dimensional model with the off-axis projection matrix simulates the picture observed by a real eye.
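A minimal sketch of this step, assuming the off-axis form shown above (the function names and frustum values are illustrative):

```python
import numpy as np

def perspective(l, r, b, t, n, f):
    """Basic perspective projection matrix for the frustum (l, r, b, t, n, f)."""
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0],
    ])

def off_axis(l, r, b, t, n, f, x, y):
    """Off-axis variant: add the lateral/longitudinal view angle offsets
    (x, y) to the skew terms, which tilts the view frustum."""
    m = perspective(l, r, b, t, n, f)
    m[0, 2] += x
    m[1, 2] += y
    return m

# Project a model-space point (homogeneous coordinates) to clip space,
# then divide by w to obtain normalized device coordinates on the screen.
p = off_axis(-1.0, 1.0, -1.0, 1.0, 1.0, 100.0, x=0.3, y=-0.1)
point = np.array([0.5, 0.5, -10.0, 1.0])
clip = p @ point
ndc = clip[:3] / clip[3]
```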
It should be noted that, when determining the projection coordinates of the three-dimensional model of the target object based on the off-axis projection matrix and rendering and displaying the three-dimensional model of the target object based on the projection coordinates, the method may be combined with the foregoing multi-display layer display mode and/or window breaking display mode, which is not described herein again.
On the basis of any one of the above embodiments, optionally, the information display method further comprises: detecting the distance between the eyes or head of the current user and the terminal screen in real time, and adjusting the size of the currently displayed three-dimensional model of the target object according to that distance. Specifically, the size of the currently displayed three-dimensional model may be reduced when the distance between the user's eyes or head and the terminal screen increases, and/or enlarged when that distance decreases. In this way the three-dimensional model of the target object appears larger when near and smaller when far, so that commodity detail information is displayed better at close range while the overall contour of the commodity is displayed better at a distance.
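A sketch of such distance-driven scaling (the inverse-proportional rule is an illustrative assumption; the embodiment only requires the size to shrink as the distance grows and vice versa):

```python
def adjust_model_scale(base_scale: float, base_distance: float,
                       current_distance: float) -> float:
    """Scale the displayed model inversely with the user-to-screen distance:
    larger when the user is near, smaller when the user is far."""
    return base_scale * (base_distance / current_distance)

# E.g. calibrated at 40 cm; the user leans back to 60 cm -> the model shrinks.
print(adjust_model_scale(1.0, 40.0, 60.0))  # ~0.667
```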
It should be noted that, in the technical solution of the present disclosure, the collection, updating, analysis, processing, use, transmission, storage and other handling of users' personal information (such as terminal gesture data and/or body posture data) all comply with relevant laws and regulations and serve lawful purposes, without harming the public interest. Necessary measures are taken to protect users' personal information, prevent illegal access to users' personal information data, and safeguard users' personal information security, network security and national security.
The following is an embodiment of an information display device provided by an embodiment of the present invention, which belongs to the same inventive concept as the information display method of the above embodiments. For details not described in the device embodiment, reference may be made to the above embodiments of the information display method.
Fig. 17 is a schematic structural diagram of an information display device according to an embodiment of the present invention. As shown in fig. 17, the apparatus includes a first rendering perspective display module 1010, an event data acquisition module 1020, and a second rendering perspective display module 1030. Wherein:
the first rendering view angle display module 1010 is configured to render and display the three-dimensional model of the target object in a preset display area of the current page at a first rendering view angle;
The event data obtaining module 1020 is configured to, in response to detecting a view angle conversion trigger event, obtain event data corresponding to the view angle conversion trigger event, where the view angle conversion trigger event includes a terminal gesture change event of the current terminal and/or a body posture change event of the current user, the event data corresponding to the terminal gesture change event includes terminal gesture data, and the event data corresponding to the body posture change event includes body posture data;
the second rendering perspective display module 1030 is configured to render and display the three-dimensional model of the target object at a second rendering perspective based on the event data, where the first rendering perspective is different from the second rendering perspective.
Optionally, the body posture change event comprises an eye posture change event and/or a head posture change event.
Optionally, the first rendering perspective display module 1010 includes:
the display window internal display unit is used for rendering and displaying the three-dimensional model of the target object at a first rendering view angle within the information display window of the current page; or
the display window external display unit is used for rendering and displaying the three-dimensional model of the target object at a first rendering view angle outside the information display window of the current page; or
the display window inside-and-outside display unit is used for rendering and displaying the three-dimensional model of the target object on the current page at a first rendering view angle, wherein one part of the three-dimensional model is located inside the information display window of the current page and the other part is located outside the information display window of the current page.
Optionally, the display window internal display unit includes:
The information display layer display subunit is used for rendering and displaying the two-dimensional material data associated with the target object on the information display layer in the information display window of the current page;
The model display layer display subunit is used for rendering and displaying the three-dimensional model of the target object at a first rendering view angle on the model display layer in the information display window;
The information display layer and the model display layer respectively comprise at least one display layer, and the depth of different display layers in the information display window is different.
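As an illustrative sketch of this layered structure (the class and the depth convention are our assumptions, not part of the embodiment):

```python
from dataclasses import dataclass

@dataclass
class DisplayLayer:
    name: str
    depth: float  # larger depth = deeper inside the information display window

# The information display layer (2D material data) and the model display
# layer (3D model) occupy different depths within the window.
layers = [
    DisplayLayer("information display layer", depth=1.0),
    DisplayLayer("model display layer", depth=0.5),
]
layers.sort(key=lambda layer: layer.depth, reverse=True)  # draw back-to-front
```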
Optionally, the display window internal display unit further includes:
The background display layer display subunit is used for displaying the placement background information of the target object on the background display layer in the information display window;
Wherein, the placement background information comprises a dynamic image or a static picture or a background three-dimensional model.
Optionally, the second rendering perspective display module 1030 includes:
The display view angle determining unit is used for rendering and displaying the three-dimensional model of the target object at a second rendering view angle based on the event data, wherein the display view angle of the three-dimensional model after rendering at the second rendering view angle has a positive correlation or a negative correlation with the terminal gesture change event of the current terminal and/or the body posture change event of the current user.
Optionally, the second rendering perspective display module 1030 includes:
The view angle offset determining unit is used for determining a view angle offset based on the event data, wherein the view angle offset represents the position offset of a second position point of the virtual camera corresponding to the second rendering view angle relative to a first position point of the virtual camera corresponding to the first rendering view angle;
The second rendering view angle display unit is used for rendering and displaying the three-dimensional model of the target object at the second rendering view angle based on the view angle offset.
Optionally, the second rendering perspective display unit includes:
The off-axis projection matrix construction subunit is used for constructing an off-axis projection matrix on the basis of a perspective projection matrix, according to the lateral view angle offset and the longitudinal view angle offset among the view angle offsets;
And the second rendering view angle display subunit is used for determining the projection coordinates of the three-dimensional model of the target object based on the off-axis projection matrix and rendering and displaying the three-dimensional model of the target object based on the projection coordinates.
Optionally, when the view angle conversion trigger event includes a terminal gesture change event, the event data includes current gyroscope angle data of the current terminal;
a view angle offset determining unit, comprising:
the terminal deflection direction data calculating subunit, used for calculating terminal deflection direction data based on the current gyroscope angle data and preset correction angle data; and
the view angle offset determining subunit, used for determining the view angle offset based on the terminal deflection direction data.
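A hypothetical sketch of this subunit's calculation (the subtraction rule and all names are illustrative assumptions; the embodiment only specifies that the deflection direction is computed from the gyroscope angles and the preset correction angles):

```python
import numpy as np

# Current gyroscope angles of the terminal and the preset correction angles
# (e.g. the typical handheld tilt calibrated as "facing straight on").
current_gyro_angles = np.array([42.0, -5.0])      # (pitch, yaw) in degrees
preset_correction_angles = np.array([30.0, 0.0])

# Terminal deflection direction data: deviation from the calibrated posture.
deflection = current_gyro_angles - preset_correction_angles  # [12.0, -5.0]

# Map the deflection to a view angle offset, e.g. proportionally.
view_angle_offset = deflection / 45.0
```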
Optionally, when the view angle conversion trigger event includes a body posture change event and the body posture change event includes an eye posture change event and/or a head posture change event, the event data includes current eye center point data of the current user;
a view angle offset determining unit, comprising:
the user viewing angle orientation data calculating subunit, used for calculating user viewing angle orientation data based on the current eye center point data and the screen center point data; and
the view angle offset determining subunit, used for determining the view angle offset based on the user viewing angle orientation data.
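A hypothetical sketch of this subunit's calculation (the coordinate system and names are illustrative assumptions):

```python
import numpy as np

# Eye centre point from e.g. face detection, and the screen centre point,
# both in a screen-anchored coordinate system (centimetres).
eye_center = np.array([3.0, -2.0, 35.0])
screen_center = np.array([0.0, 0.0, 0.0])

# User viewing angle orientation data: the unit direction from the screen
# centre to the eye centre; its lateral/longitudinal components can then
# serve as the basis of the view angle offset.
direction = eye_center - screen_center
orientation = direction / np.linalg.norm(direction)
```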
Optionally, the device further includes:
The view angle offset adjustment module is used for, after the view angle offset is determined based on the event data and before the three-dimensional model of the target object is rendered and displayed at the second rendering view angle based on the view angle offset, adjusting the view angle offset if it exceeds the preset value range, so that the adjusted view angle offset falls within the preset value range.
Optionally, the device further includes:
a model display movement control module for controlling the three-dimensional model of the target object to move from the information display window to the outside of the information display window after rendering and displaying the three-dimensional model of the target object in the information display window of the current page at a first rendering view angle, and
And the rendering module based on the mask layer is used for selecting pixels outside a preset mask layer in the three-dimensional model to perform rendering display in the moving process of the three-dimensional model of the target object, and skipping the rendering display of the pixels inside the preset mask layer in the three-dimensional model.
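A minimal sketch of the mask-based pixel selection (array shapes and names are illustrative assumptions):

```python
import numpy as np

# Boolean mask marking the pixels covered by the preset mask layer.
mask = np.zeros((4, 6), dtype=bool)
mask[:, :3] = True                    # left half lies inside the mask layer
model_pixels = np.random.rand(4, 6)   # shaded pixels of the 3D model

# Render only the pixels outside the mask layer; pixels inside it are
# skipped, producing the window-breaking effect while the model moves.
frame = np.zeros((4, 6))
outside = ~mask
frame[outside] = model_pixels[outside]
```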
Optionally, the device further includes:
The size adjustment module is used for detecting, in real time, the distance between the current user's eyes or head and the terminal screen, and adjusting the size of the currently displayed three-dimensional model of the target object according to the distance.
Optionally, the current page includes a home page, an item detail page, a live page, a comment page or a waterfall page of the application program, and the target object includes an item.
The information display device provided by the embodiment of the invention can execute the information display method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of executing the information display method.
Fig. 18 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 10 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 18, the electronic device 10 includes at least one processor 11, and a memory such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc. communicatively connected to the at least one processor 11, wherein the memory stores a computer program executable by the at least one processor, and the processor 11 can perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including an input unit 16, such as a keyboard, mouse, etc., an output unit 17, such as various types of displays, speakers, etc., a storage unit 18, such as a magnetic disk, optical disk, etc., and a communication unit 19, such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the information presentation method.
In some embodiments, the information presentation method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as the storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the information presentation method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the information presentation method in any other suitable way (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuit systems, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor capable of receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user, for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), a blockchain network, and the Internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, a host product in a cloud computing service system that overcomes the heavy management burden and weak service scalability of traditional physical hosts and VPS services.
The computer storage media of embodiments of the invention may take the form of any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations of the present invention may be written in one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
It will be appreciated by those of ordinary skill in the art that the modules or steps of the invention described above may be implemented in a general purpose computing device, they may be centralized on a single computing device, or distributed over a network of computing devices, or they may alternatively be implemented in program code executable by a computer device, such that they are stored in a memory device and executed by the computing device, or they may be separately fabricated as individual integrated circuit modules, or multiple modules or steps within them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.
Claims (18)
1. An information display method, characterized in that the method comprises:
Rendering and displaying the three-dimensional model of the target object in a preset display area of the current page at a first rendering view angle;
in response to detecting a view angle conversion trigger event, acquiring event data corresponding to the view angle conversion trigger event, wherein the view angle conversion trigger event comprises a terminal gesture change event of a current terminal and/or a body posture change event of a current user, the event data corresponding to the terminal gesture change event comprises terminal gesture data, and the event data corresponding to the body posture change event comprises body posture data;
And rendering and displaying the three-dimensional model of the target object at a second rendering view angle based on the event data, wherein the first rendering view angle is different from the second rendering view angle.
2. The method of claim 1, wherein the body posture change event comprises an eye posture change event and/or a head posture change event.
3. The method according to claim 1, wherein rendering and displaying the three-dimensional model of the target object at the first rendering view angle in the preset display area of the current page comprises:
in the information display window of the current page, rendering and displaying the three-dimensional model of the target object at a first rendering view angle, or
Rendering and displaying the three-dimensional model of the target object at a first rendering view angle outside an information display window of the current page, or
Rendering and displaying the three-dimensional model of the target object at a first rendering view angle on the current page, wherein one part of the three-dimensional model is positioned in an information display window of the current page, and the other part of the three-dimensional model is positioned outside the information display window of the current page.
4. The method according to claim 3, wherein rendering and displaying the three-dimensional model of the target object at the first rendering view angle within the information display window of the current page comprises:
rendering and displaying, on an information display layer in the information display window of the current page, two-dimensional material data associated with the target object; and
rendering and displaying, on a model display layer in the information display window, the three-dimensional model of the target object at the first rendering view angle;
Wherein the information display layer and the model display layer respectively comprise at least one display layer, and the depth of different display layers in the information display window is different.
5. The method of claim 4, wherein rendering and displaying the three-dimensional model of the target object at the first rendering view angle within the information display window of the current page further comprises:
displaying, on a background display layer in the information display window, placement background information of the target object;
Wherein, the placement background information comprises a dynamic image or a static picture or a background three-dimensional model.
6. The method of claim 1, wherein rendering and displaying the three-dimensional model of the target object at a second rendering view angle based on the event data comprises:
rendering and displaying the three-dimensional model of the target object at the second rendering view angle based on the event data, wherein the display view angle of the three-dimensional model after rendering at the second rendering view angle has a positive correlation or a negative correlation with the terminal gesture change event of the current terminal and/or the body posture change event of the current user.
7. The method of claim 1, wherein rendering and displaying the three-dimensional model of the target object at a second rendering view angle based on the event data comprises:
Determining a view angle offset based on the event data, wherein the view angle offset represents a position offset of a second position point of the virtual camera corresponding to the second rendering view angle relative to a first position point of the virtual camera corresponding to the first rendering view angle;
and rendering and displaying the three-dimensional model of the target object at the second rendering view angle based on the view angle offset.
8. The method of claim 7, wherein rendering and displaying the three-dimensional model of the target object at the second rendering view angle based on the view angle offset comprises:
constructing an off-axis projection matrix on the basis of a perspective projection matrix, according to the lateral view angle offset and the longitudinal view angle offset among the view angle offsets;
And determining projection coordinates of the three-dimensional model of the target object based on the off-axis projection matrix, and rendering and displaying the three-dimensional model of the target object based on the projection coordinates.
9. The method of claim 7, wherein when the view angle conversion trigger event comprises a terminal gesture change event, the event data comprises current gyroscope angle data of the current terminal;
the determining a view angle offset based on the event data comprises:
calculating terminal deflection direction data based on the current gyroscope angle data and preset correction angle data; and
determining the view angle offset based on the terminal deflection direction data.
10. The method of claim 7, wherein when the view angle conversion trigger event comprises a body posture change event and the body posture change event comprises an eye posture change event and/or a head posture change event, the event data comprises current eye center point data of the current user;
the determining a view angle offset based on the event data comprises:
calculating user viewing angle orientation data based on the current eye center point data and screen center point data; and
determining the view angle offset based on the user viewing angle orientation data.
11. The method of claim 7, wherein after determining the view angle offset based on the event data and before rendering and displaying the three-dimensional model of the target object at the second rendering view angle based on the view angle offset, the method further comprises:
if the view angle offset exceeds a preset value range, adjusting the view angle offset so that the adjusted view angle offset falls within the preset value range.
12. The method according to claim 3, wherein after rendering and displaying the three-dimensional model of the target object at the first rendering view angle within the information display window of the current page, the method further comprises:
controlling the three-dimensional model of the target object to move from inside to outside of the information display window, and
And in the process of moving the three-dimensional model of the target object, selecting pixels outside a preset mask layer in the three-dimensional model for rendering display, and skipping rendering display of pixels inside the preset mask layer in the three-dimensional model.
13. The method according to claim 1, wherein the method further comprises:
Detecting the distance between the current user's eyes or head and the terminal screen in real time, and adjusting the size of the currently displayed three-dimensional model of the target object according to the distance.
14. The method of any of claims 1-13, wherein the current page comprises a home page, an item detail page, a live page, a comment page, or a waterfall page of an application, and the target object comprises an item.
15. An information presentation apparatus, the apparatus comprising:
The first rendering view angle display module is used for rendering and displaying the three-dimensional model of the target object in a preset display area of the current page at a first rendering view angle;
The event data acquisition module is used for, in response to detecting a view angle conversion trigger event, acquiring event data corresponding to the view angle conversion trigger event, wherein the view angle conversion trigger event comprises a terminal gesture change event of a current terminal and/or a body posture change event of a current user, the event data corresponding to the terminal gesture change event comprises terminal gesture data, and the event data corresponding to the body posture change event comprises body posture data;
the second rendering view angle display module is used for rendering and displaying the three-dimensional model of the target object at a second rendering view angle based on the event data, wherein the first rendering view angle is different from the second rendering view angle.
16. An electronic device, the electronic device comprising:
One or more processors;
A memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the information presentation method of any of claims 1-14.
17. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the information presentation method according to any one of claims 1-14.
18. A computer program product comprising a computer program which, when executed by a processor, implements the information presentation method according to any one of claims 1-14.