
CN119277123A - How to display cloud terminal products - Google Patents

How to display cloud terminal products

Info

Publication number
CN119277123A
CN119277123A (application number CN202411279801.6A)
Authority
CN
China
Prior art keywords
cloud terminal
terminal product
rendering
data
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411279801.6A
Other languages
Chinese (zh)
Inventor
邱阳
高伟杰
吴卫民
戴丹升
陈超
王俊楷
钟丰丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Internet Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Internet Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, China Mobile Internet Co Ltd filed Critical China Mobile Communications Group Co Ltd
Priority to CN202411279801.6A priority Critical patent/CN119277123A/en
Publication of CN119277123A publication Critical patent/CN119277123A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23412Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application discloses a display method for a cloud terminal product, belonging to the technical field of video. The method comprises: determining the form of a cloud terminal product and the rendering attributes corresponding to the cloud terminal product according to pre-acquired scene information of a virtual scene; generating rendering data for displaying the cloud terminal product in the virtual scene according to the form of the cloud terminal product and its corresponding rendering attributes; and sending the rendering data to a client.

Description

Cloud terminal product display method
Technical Field
The application belongs to the technical field of video, and particularly relates to a display method for a cloud terminal product.
Background
In the related art, a user who needs to use a cloud terminal product on an augmented reality (AR)/virtual reality (VR) device has two main options: (1) accessing the cloud terminal product through a browser in the operating system of the AR/VR device, or (2) installing an APP on the AR/VR device and accessing the cloud terminal product through that APP. In either case, the user's experience of using the cloud terminal product in a virtual scene is poor.
Disclosure of Invention
The embodiments of the application aim to provide a display method for cloud terminal products that can solve the problem of poor user experience when using cloud terminal products in a virtual scene.
In a first aspect, an embodiment of the application provides a display method for a cloud terminal product, applied to a server, comprising: determining the form of a cloud terminal product and the rendering attributes corresponding to the cloud terminal product according to pre-acquired scene information of a virtual scene; generating rendering data for displaying the cloud terminal product in the virtual scene according to the form of the cloud terminal product and its corresponding rendering attributes; and sending the rendering data to a client.
In a second aspect, an embodiment of the application provides another display method for a cloud terminal product, applied to a client, comprising: sending scene identification information of a virtual scene to a server, where the scene identification information is used to determine the form of a cloud terminal product; receiving, from the server, rendering data corresponding to the form of the cloud terminal product; and displaying the rendering data in the virtual scene.
In a third aspect, an embodiment of the application provides a display device for a cloud terminal product, applied to a server, comprising a determining module, a generating module, and a transmission module. The determining module is configured to determine the form of a cloud terminal product and the rendering attributes corresponding to the cloud terminal product according to pre-acquired scene information of a virtual scene; the generating module is configured to generate rendering data for displaying the cloud terminal product in the virtual scene according to that form and those rendering attributes; and the transmission module is configured to send the rendering data to a client.
In a fourth aspect, an embodiment of the application provides another display device for a cloud terminal product, applied to a client, comprising a sending module, a receiving module, and a display module. The sending module is configured to send scene identification information of a virtual scene to a server, where the scene identification information is used to determine the form of a cloud terminal product; the receiving module is configured to receive the rendering data corresponding to the form of the cloud terminal product sent by the server; and the display module is configured to display the rendering data in the virtual scene.
In a fifth aspect, an embodiment of the present application provides an electronic device comprising a processor and a memory storing a program or instructions executable on the processor, which when executed by the processor, implement the steps of the method according to the first or second aspect.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a program or instructions which when executed by a processor implement the steps of the method according to the first or second aspect.
In a seventh aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect or the second aspect.
In an eighth aspect, embodiments of the present application provide a computer program product stored in a storage medium, the program product being executable by at least one processor to implement the method according to the first or second aspect.
In the embodiment of the application, the server can determine the form of the cloud terminal product and its corresponding rendering attributes according to the pre-acquired scene information of the virtual scene, generate the rendering data displayed by the cloud terminal product in the virtual scene according to that form and those rendering attributes, and send the rendering data to the client. In this way, the user's experience of using the cloud terminal product in a virtual scene becomes adaptive and convenient, and the user's sense of immersion can be improved.
Drawings
Fig. 1 is a flow diagram of a method for displaying a cloud terminal product according to an embodiment of the present application;
Fig. 2 is a schematic diagram of scene recognition according to an embodiment of the present application;
Fig. 3 is a schematic diagram of processing a video stream according to an embodiment of the present application;
Fig. 4 is a schematic diagram of dynamic rendering according to an embodiment of the present application;
Fig. 5 is a schematic diagram of interaction processing and interface rendering according to an embodiment of the present application;
Fig. 6 is a flow diagram of another method for displaying a cloud terminal product according to an embodiment of the present application;
Fig. 7 is a schematic structural diagram of a display device for a cloud terminal product according to an embodiment of the present application;
Fig. 8 is a schematic structural diagram of another display device for a cloud terminal product according to an embodiment of the present application;
Fig. 9 is a structural block diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
The technical solutions of the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application fall within the scope of protection of the present application.
The terms "first", "second", and the like in the description and claims are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It is to be understood that data so used may be interchanged where appropriate, so that embodiments of the present application may be implemented in sequences other than those illustrated or described herein. Objects identified by "first", "second", etc. are generally of one type, and the number of objects is not limited; for example, the first object may be one or more than one. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
In use, a cloud terminal migrates most of the computing workload to the cloud: computation-heavy tasks such as computing, storage, and control are processed in the cloud, the cloud determines the display data, and the terminal itself only handles connection and display. After the cloud terminal connects to the cloud server through a protocol, it can achieve the same usage effect as an ordinary terminal and meet the user's various needs for office work, daily life, entertainment, and so on.
At present, cloud terminal products are mainly provided to users as applications (APPs), applets, HTML5 web pages, and the like (HTML5 is the latest version of HyperText Markup Language, the main language of web documents on the Internet, often abbreviated H5). If a user needs to use a cloud phone/cloud terminal product on an AR/VR device, there are two main options:
(1) Accessing the cloud terminal product through a browser in the AR/VR operating system.
(2) Installing an APP on the AR/VR device and accessing the cloud terminal product through that APP.
The related technical solutions have the following drawbacks:
First, accessing a cloud terminal product through a browser in the AR/VR operating system is not convenient enough: the user must manually open the browser and enter a URL inside the virtual environment, which increases operational complexity, and the browser access mode offers a low degree of immersion.
Second, whether a user accesses the cloud terminal product through a browser or an APP, the resolution of the cloud terminal product and the layout of its user interface (UI) are fixed, so the user's experience in a virtual scene is poor.
The display method for cloud terminal products provided by the embodiments of the application is described in detail below through specific embodiments and their application scenarios, with reference to the accompanying drawings.
Fig. 1 illustrates a display method for a cloud terminal product according to an exemplary embodiment of the present application, applied to a server. Method 100 may be performed by an electronic device, which may be a cloud terminal or another device. As shown in fig. 1, the method mainly comprises the following steps:
S101: determining the form of a cloud terminal product and the rendering attributes corresponding to the cloud terminal product according to pre-acquired scene information of a virtual scene.
In the embodiment of the application, the form of the cloud terminal product and its corresponding rendering attributes can be determined according to the pre-acquired scene information of the virtual scene, so that the form matches the actual conditions of the virtual scene and a more immersive experience can be provided. The cloud terminal product may take the form of a phone, a tablet, a computer, or another form; the embodiment of the application is not particularly limited in this respect. The rendering attributes corresponding to a form include the screen ratio, the resolution, typical rendering sizes, and the like, again without particular limitation. Taking the phone, tablet, and computer as examples, the forms of the cloud terminal product and their corresponding rendering attributes are shown in Table 1: three major classes (P①②③ / T①② / C①②③) and 8 minor classes in total. In a specific implementation, the determination may be made according to the actual situation.
TABLE 1
In an optional implementation, determining the form of the cloud terminal product and its corresponding rendering attributes according to the pre-acquired scene information of the virtual scene includes:
identifying the scene information based on a pre-trained scene classification model, and determining the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product.
In the embodiment of the application, the pre-trained scene classification model can be used to identify the scene information and thereby determine the form of the cloud terminal product and its corresponding rendering attributes; the model speeds up scene recognition and deepens the user's immersion. During training of the scene classification model, as shown in fig. 2, a machine learning algorithm can be trained to recognize photos of key scenes such as "holding a phone in one hand", "holding a tablet in two hands", and "desktop office". The scene recognition process may include the following steps:
Step one: data acquisition. Environmental data is collected with the environmental sensors of the AR/VR hardware, including the camera, gyroscope, and accelerometer. These data form a rich scene dataset for training the scene classification model.
Step two: training the scene classification model. A machine learning algorithm is trained on the collected data to produce an intelligent scene classification model. The model learns to recognize different usage scenarios, such as whether the user is holding a phone in one hand, holding a tablet in two hands, or working at a desk. During training, images and environmental data from devices of various forms are incorporated to give the model broad scene recognition capability.
Step three: deploying the model. The trained scene classification model is deployed on the cloud terminal device (server side) for real-time scene recognition.
Step four: real-time scene recognition. When a user enables the scene recognition module on the cloud phone/cloud terminal device, the cloud terminal can rapidly recognize the current scene. The scene information is passed to the rendering system, which makes the corresponding rendering adjustments based on the recognition result.
Users who choose not to enable the scene recognition module may manually select the desired scene mode, which activates the corresponding rendering settings and likewise passes the scene information to the rendering system.
In the embodiment of the application, suppose the scene classification model identifies the virtual scene as holding a phone in one hand. The required form of the cloud terminal product can then be determined to be a phone, and the phone's attributes, such as screen ratio and size, follow from the rendering attributes corresponding to that form, providing the user with an immersive experience; a minimal sketch of this lookup is given below.
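As a minimal illustration of this lookup, the sketch below maps a recognized scene label to a product form and rendering attributes. The classifier stub, label names, and resolution values are assumptions for illustration; only the aspect ratios (9:16 / 4:3 / 16:9) come from the description itself.

```python
# Illustrative scene-label -> form/attributes table; values are assumptions.
RENDER_ATTRIBUTES = {
    "one_hand_phone":  {"form": "phone",    "aspect": "9:16", "resolution": (1080, 1920)},
    "two_hand_tablet": {"form": "tablet",   "aspect": "4:3",  "resolution": (2048, 1536)},
    "desktop_office":  {"form": "computer", "aspect": "16:9", "resolution": (1920, 1080)},
}

def classify_scene(sensor_frame) -> str:
    """Stand-in for the pre-trained scene classification model."""
    return "one_hand_phone"  # placeholder; the real model infers from sensor data

def select_product_form(sensor_frame) -> dict:
    """Map the recognized scene label to a product form plus rendering attributes."""
    return RENDER_ATTRIBUTES[classify_scene(sensor_frame)]
```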
In an optional implementation, after determining the form of the cloud terminal product and its corresponding rendering attributes according to the pre-acquired scene information of the virtual scene, the method further includes:
determining, from the existing computing power resources, the computing power resource corresponding to the cloud terminal product according to the form of the cloud terminal product and its corresponding rendering attributes.
In the embodiment of the application, after the form of the cloud terminal product and its corresponding rendering attributes are determined, the computing power resource matching that form can be selected from the existing computing power resources in order to generate the corresponding rendering data, and the cloud terminal's resource configuration is adjusted dynamically according to the scene recognition result to provide optimal performance and user experience.
In practical applications, this may involve two steps: first allocation, then actual determination. Step one: the server dynamically allocates computing power resources according to the recognition result of the current scene to meet the user's needs. This covers resource specifications such as the number of central processing unit (CPU) cores, the memory capacity, and the number of virtual graphics processing units (vGPUs). The user can also select the desired configuration when ordering, ensuring optimal performance and user experience in different scenarios. For example, see Table 2 below:
TABLE 2

| Product form | CPU (cores) | Memory (GB) | vGPUs |
| ------------ | ----------- | ----------- | ----- |
| Phone        | 4           | 8           | 2     |
| Tablet       | 4           | 8           | 2     |
| Computer     | 8           | 16          | 4     |
Step two: dynamic allocation of resources. The starting form and resource specification are determined according to the scene conditions passed in by scene recognition. Compared with the static resource allocation of the related art, the embodiment of the application can dynamically allocate computing power resources according to scene requirements, improving performance and efficiency as far as possible; a sketch of this allocation follows.
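A hedged sketch of the two-step allocation, assuming the Table 2 specifications; the `provision` hook is a hypothetical stand-in for whatever container or VM provisioning call the server actually uses.

```python
# form -> (CPU cores, memory in GB, vGPUs), per Table 2.
SPEC_TABLE = {
    "phone":    (4, 8, 2),
    "tablet":   (4, 8, 2),
    "computer": (8, 16, 4),
}

def provision(cpu: int, memory_gb: int, vgpu: int) -> dict:
    """Hypothetical provisioning hook; here it just returns the chosen spec."""
    return {"cpu": cpu, "memory_gb": memory_gb, "vgpu": vgpu}

def allocate_resources(form: str) -> dict:
    """Pick the resource spec for the recognized product form and provision it."""
    cores, mem_gb, vgpus = SPEC_TABLE[form]
    return provision(cpu=cores, memory_gb=mem_gb, vgpu=vgpus)
```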
In the embodiment of the application, the resolution in the rendering attributes can also be determined according to the network state. In the related art, the server can dynamically adjust the resolution according to the user's network state, for example from 720p@30fps to 1080p (FPS is short for frames per second, the number of frames transmitted per second; 1080P is a progressive-scan display format with a resolution of 1920×1080, where P stands for progressive scanning). On top of this traditional dynamic resolution adjustment, the embodiment of the application can further switch dynamically according to the form and layout required by a scene, while still adjusting resolution to network conditions, thereby achieving polymorphic compatibility on a single set of computing power.
S102: generating rendering data for displaying the cloud terminal product in the virtual scene according to the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product.
According to the embodiment of the application, the rendering data displayed by the cloud terminal product in the virtual scene is generated from the product's form and its corresponding rendering attributes. Compared with the fixed rendering attributes (resolution, interface layout, and the like) of the related art, the embodiment of the application can generate rendering data for each product form and its rendering attributes, so the cloud terminal product can be displayed flexibly in different forms within a virtual scene, giving the user a more adaptive and convenient experience.
In an optional implementation, generating the rendering data displayed by the cloud terminal product in the virtual scene according to the form of the cloud terminal product and its corresponding rendering attributes may include the following steps:
Step 1: calling the corresponding pre-configured resource file according to the form of the cloud terminal product and the rendering attributes corresponding to that form. In the embodiment of the application, to support different forms, resource files, including images, icons, and text resources, can be optimized in advance to suit different resolutions. These optimized resource files provide clearer and more attractive interface elements at each resolution, and can be called directly when generating the rendering data, improving efficiency.
Step 2: generating the rendering data corresponding to the form of the cloud terminal product according to the rendering attributes corresponding to that form and the resource file, where the rendering data includes interface element data. To better support different cloud terminal product forms, code logic that automatically adapts the layout of user interface elements can be executed: according to the rendering attributes and the loaded layout file, the size, position, and number of interface elements are adjusted automatically to fit different forms. In the embodiment of the application, the corresponding interface element data can thus be generated from the rendering attributes and the related resource files, providing the user with an immersive experience; a sketch of such layout adaptation follows.
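The following sketch illustrates one plausible shape of that adaptation logic, assuming interface elements are stored in normalized coordinates in the pre-configured layout file; all field names are assumptions.

```python
# Scale normalized [0, 1] element geometry to the target resolution,
# so one layout file can serve every product form.
def adapt_layout(elements, resolution):
    width, height = resolution
    placed = []
    for el in elements:  # el: {"name", "x", "y", "w", "h"}, all in [0, 1]
        placed.append({
            "name": el["name"],
            "x": round(el["x"] * width),
            "y": round(el["y"] * height),
            "w": round(el["w"] * width),
            "h": round(el["h"] * height),
        })
    return placed

# The same button lands at different pixel positions and sizes per form.
button = [{"name": "home", "x": 0.4, "y": 0.9, "w": 0.2, "h": 0.05}]
phone_ui = adapt_layout(button, (1080, 1920))     # phone, 9:16
computer_ui = adapt_layout(button, (1920, 1080))  # computer, 16:9
```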
In an optional implementation, the rendering data includes video frame data, and generating the rendering data displayed by the cloud terminal product in the virtual scene according to the form of the cloud terminal product and its corresponding rendering attributes may include the following steps:
Step 1: acquiring a two-dimensional (2D) video stream related to the cloud terminal product and compressing it. Image acquisition and preprocessing come first: the 2D video stream output by the cloud terminal operating system is acquired, for example by connecting the acquisition end to the cloud terminal operating system through Virtual Network Computing (VNC) to access its graphical interface. The display resolution and display ratio of the cloud terminal are set dynamically through the VNC connection, and the 2D video stream is captured. The video stream is then preprocessed (compressed); the video data may be compressed into a format suitable for transmission, such as H.264, using the FFmpeg multimedia encoding tool. In the embodiment of the present application, other methods may be used to compress the video stream, and other transmission formats may be adopted; a sketch of one such encoding pipeline follows.
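A minimal sketch of the compression step, assuming raw frames have already been grabbed over the VNC connection (that part is omitted). The ffmpeg command-line flags shown are standard; the surrounding framing is an illustrative assumption rather than the patent's exact pipeline.

```python
import subprocess

def start_h264_encoder(width: int, height: int, fps: int = 30) -> subprocess.Popen:
    """Spawn ffmpeg reading raw BGR frames on stdin and writing an
    H.264 transport stream suitable for network transmission."""
    return subprocess.Popen(
        ["ffmpeg",
         "-f", "rawvideo", "-pix_fmt", "bgr24",
         "-s", f"{width}x{height}", "-r", str(fps),
         "-i", "-",                                  # raw frames arrive on stdin
         "-c:v", "libx264",
         "-preset", "ultrafast", "-tune", "zerolatency",  # low-latency streaming
         "-f", "mpegts", "cloud_terminal.ts"],
        stdin=subprocess.PIPE,
    )

# encoder = start_h264_encoder(1080, 1920)   # phone form, 9:16
# encoder.stdin.write(frame_bytes)           # one BGR24 frame per write
```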
Step 2: receiving the surface parameters sent by the client, where the surface parameters are used to determine a projection matrix and a view matrix. In an embodiment of the present application, the surface parameters may include the parameters of the projection matrix and the view matrix, such as the field of view, the distances to the near and far clipping planes, and the position of the human eye.
Step 3: performing perspective transformation on each frame of the 2D video stream using a perspective matrix, according to the form of the cloud terminal product, its corresponding rendering attributes, and the surface parameters, to generate the curved video frame data, where the perspective matrix is computed from the projection matrix and the view matrix. In the embodiment of the application, applying the perspective matrix to each frame realizes the curved-surface transformation of the 2D image. Keeping the two matrices separate allows the rendering engine to modify the projection matrix and the view matrix independently to achieve various effects, such as perspective projection, orthographic projection, and camera movement or rotation, without recomputing the entire transformation, which improves the maintainability and flexibility of the rendering pipeline.
In a specific implementation, the projection matrix perspective_matrix may be created first, using perspective projection to simulate the field of view of the human eye so that a realistic image is presented on the curved surface. It takes three parameters:
(1) fov: the field of view, which indicates the size of the visible region.
(2) aspect_ratio: the aspect ratio of the screen (16.0/9.0 in the case of 16:9). Note that different cloud terminal product forms have different screen aspect ratios in their rendering attributes; as shown in Table 1 above, the aspect ratio is 9:16 for the phone, 4:3 for the tablet, and 16:9 for the computer.
(3) near and far: the distances to the near and far clipping planes.
A view matrix is then created that defines the position and orientation of the human eye, determining the viewing angle from which the scene is observed. The curved screen bends only left and right, not up and down. The view matrix likewise takes three parameters:
(1) eye: the position of the human eye.
(2) center: the target point the human eye observes.
(3) up: the upward direction of the human eye.
Computing the projection matrix and the view matrix as two separate steps makes the camera and the rendering effects easier to control and manage. Finally, the scene must be mapped from the world coordinate system to the coordinate system of the AR/VR screen; by multiplying the projection matrix and the view matrix, the three-dimensional (3D) scene can be mapped correctly onto the screen in a single transformation step. The projection matrix is multiplied by the view matrix to obtain the perspective matrix perspective_view_matrix:
perspective_view_matrix = perspective_matrix * view_matrix
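The two matrices and their product can be written out numerically as below. This numpy sketch follows the conventional perspective and look-at constructions for column vectors; the fov/eye/center values are illustrative assumptions, not values given by the patent.

```python
import numpy as np

def perspective(fov_y_deg, aspect_ratio, near, far):
    """Projection matrix (perspective_matrix) from fov, aspect, near, far."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    return np.array([
        [f / aspect_ratio, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ])

def look_at(eye, center, up):
    """View matrix from eye position, observed target point, and up direction."""
    fwd = center - eye
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = right, true_up, -fwd
    m[:3, 3] = -m[:3, :3] @ eye  # translate the world to the eye
    return m

# Illustrative values: a 16:9 computer form viewed at a 60-degree fov.
perspective_matrix = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
view_matrix = look_at(np.array([0.0, 1.6, 2.0]),   # eye
                      np.array([0.0, 1.0, 0.0]),   # center
                      np.array([0.0, 1.0, 0.0]))   # up
perspective_view_matrix = perspective_matrix @ view_matrix
```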
in the embodiment of the present application, as shown in fig. 3, the server is responsible for running the actual cloud terminal operating system, and performs management in a containerized manner. At the transport level, it abstracts the operating system into an instruction stream and a video stream. The server can receive the dynamic request from the client and present the user interface of the operating system in the form of video stream, so as to adapt to different morphological requirements.
In an alternative implementation, after the rendering data is sent to the client, the method further includes: receiving head tracking data, eye tracking data, and hand tracking data sent by the client;
dynamically rendering the cloud terminal product based on the head tracking data, the eye tracking data, and the hand tracking data to obtain dynamic rendering data;
and sending the dynamic rendering data to the client.
In the above optional implementation, dynamically rendering the cloud terminal product based on the head tracking data, the eye tracking data, and the hand tracking data to obtain dynamic rendering data may include the following steps:
Step 1: obtaining the user's head position and viewpoint according to the head tracking data and the eye tracking data. The server can use the head tracking data to capture the user's head movement, including direction, rotation, and tilt, and continuously update the position and orientation of the cloud terminal product so that the user's viewing angle in the virtual environment stays consistent with the head movement. The eye tracking data can then be used to determine the user's gaze point, enabling gaze interaction and gaze-triggered functions.
Step 2: obtaining the user's hand position and hand motion according to the hand tracking data; the hand motion includes the user's gestures.
Step 3: adjusting the rendering of the cloud terminal product based on a pre-acquired simulation model, according to the user's head position and viewpoint and the user's hand position and hand motion, to obtain dynamic rendering data. In the embodiment of the present application, as shown in fig. 4, a simulation model matched to the real environment may be rendered: the physical position of the user's palm or desk is measured or tracked using the sensors of the AR/VR device (depth sensor, color data). The 3D model and interface of the cloud terminal are then rendered to the viewpoint and position of the virtual camera according to the measured real-world data so as to match the real environment, and the simulation model is rendered at the actual size of the physical-world object. To enable interaction between the user and the cloud terminal product, the product is updated according to the user's head position and viewpoint and the user's hand position and motion, and the rendering of the virtual scene is adjusted in real time. The dynamic rendering data may include parameters such as the position, size, angle, or transparency of the rendered object (the cloud terminal product) to reflect the user's interactions and actions, creating a more immersive and realistic virtual experience in which the user perceives a simulated cloud terminal product at the visual level. As shown in fig. 5, interaction between the user and the cloud terminal product is achieved through dynamic rendering.
For example, suppose that the head, eye, and hand tracking data capture an action by which the user changes the position of the cloud terminal product (for example, a cloud computer), and the simulation model determines that the cloud computer has moved from the user's hand onto the desk. The position of the cloud computer is then updated, and the resulting dynamic rendering data is sent to the client so that the user sees the cloud computer on the desk in the virtual scene; a sketch of such an update follows.
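A hedged sketch of one dynamic-rendering tick under these rules; the yaw-only orientation rule and the "place" gesture test are illustrative stand-ins for the simulation-model logic, and all dictionary field names are assumptions.

```python
import math

def face_towards(obj_pos, head_pos) -> dict:
    """Yaw-only orientation so the product's screen faces the user's head."""
    dx = head_pos[0] - obj_pos[0]
    dz = head_pos[2] - obj_pos[2]
    return {"yaw": math.atan2(dx, dz), "pitch": 0.0, "roll": 0.0}

def update_render_state(state: dict, head: dict, eyes: dict, hand: dict) -> dict:
    # Keep the cloud terminal product facing the user's viewpoint.
    state["orientation"] = face_towards(state["position"], head["position"])
    # The gaze point drives gaze interaction and gaze-triggered functions.
    state["gaze_target"] = eyes["gaze_point"]
    # A 'place' gesture near a tracked real surface re-anchors the product
    # there, e.g. moving the cloud computer from the hand onto the desk.
    if hand.get("gesture") == "place" and hand.get("near_surface"):
        state["position"] = hand["near_surface"]["anchor_point"]
    return state
```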
S103: sending the rendering data to a client.
In the embodiment of the application, the server sends the generated rendering data to the client so that the cloud terminal product is displayed in the virtual scene. This improves the applicability of the cloud terminal product and makes the user experience more convenient, bringing a better overall experience.
In summary, the server can determine the form of the cloud terminal product and its corresponding rendering attributes according to the pre-acquired scene information of the virtual scene, generate the rendering data displayed by the cloud terminal product in the virtual scene according to that form and those rendering attributes, and send the rendering data to the client. In this way, the user's experience of using the cloud terminal product in a virtual scene becomes adaptive and convenient, and the user's sense of immersion is improved.
Fig. 6 illustrates another display method for a cloud terminal product according to an exemplary embodiment of the present application, applied to a client. Method 600 may be performed by an electronic device, such as an XR handle or an AR headset. As shown in fig. 6, the method mainly comprises the following steps:
S601: sending scene identification information of the virtual scene to a server.
The scene identification information is used to determine the form of the cloud terminal product.
In the embodiment of the application, the client sends the scene identification information of the virtual scene to the server so that the server can determine the form of the cloud terminal product, improving the user's immersion. In practical applications, the scene identification information sent by the client may simply be the specific scene of the virtual scene, with the server performing scene recognition to determine the product form. Alternatively, the client, running inside the AR/VR computing device, can perform scene recognition directly with its local computing power, determine the form of the cloud terminal product to present (a phone, tablet, computer, and the like), and then send the resulting scene identification information to the server.
S602: receiving the rendering data corresponding to the form of the cloud terminal product sent by the server.
In the embodiment of the application, the client receives the rendering data corresponding to the form of the cloud terminal product from the server, which saves the client's memory and improves efficiency.
In an optional implementation, after sending the scene identification information of the virtual scene to the server, the method further includes:
sending pre-acquired surface parameters to the server, where the surface parameters are used to determine a projection matrix and a view matrix.
In the embodiment of the application, the client may send the surface parameters used to generate the video frame data to the server. The surface parameters include the parameters of the projection matrix and the view matrix, such as the field of view, the distances to the near and far clipping planes, and the position of the human eye, enabling the server to curve the video stream; an illustrative payload follows.
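A hedged example of what such a surface-parameter payload might look like: the fov/aspect/near-far values feed the server's projection matrix and eye/center/up its view matrix. The field names and values are assumptions for illustration, not defined by the patent.

```python
import json

surface_params = {
    "fov": 60.0,                 # field of view, in degrees
    "aspect_ratio": 16.0 / 9.0,  # follows the product form (9:16, 4:3, or 16:9)
    "near": 0.1,                 # near clipping plane distance
    "far": 100.0,                # far clipping plane distance
    "eye": [0.0, 1.6, 2.0],      # position of the human eye
    "center": [0.0, 1.0, 0.0],   # target point the eye observes
    "up": [0.0, 1.0, 0.0],       # upward direction of the eye
}
payload = json.dumps(surface_params)  # sent to the server over the session channel
```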
S603: displaying the rendering data in the virtual scene.
In the embodiment of the application, the client can render the 2D/3D model of the cloud terminal product in real time, so that the product is displayed in the virtual scene. In general, the rendering data corresponding to the cloud terminal can be combined, through the graphics engine, with the original display data of the virtual scene, or displayed directly through the graphics engine; in practical applications, other display approaches may also be adopted.
In practical applications, when the rendering data is video frame data, the client can reserve a sufficient pixel area in the cloud terminal product model and embed the video frame data into the model. In the embodiment of the application, the video frame data is obtained through the projection matrix and view matrix formed from the surface parameters and the rendering attributes of the cloud terminal product; the curved-surface conversion of the 2D video stream images realized by these matrices makes better use of the large field of view of AR/VR products and brings the user a better experience.
In an alternative implementation, after the rendering data is displayed in the virtual scene, the method further comprises the following steps:
Step 1: acquiring and sending tracking data. In the embodiment of the application, the client can invoke the built-in sensors of the AR/VR device to monitor head movement, including direction, rotation, and tilt, to obtain head tracking data. Eye tracking technology is used at the same time to follow the movement and gaze point of the user's eyes, producing eye tracking data; the client transmits the head and eye tracking information to the server for real-time monitoring of the head and eyes. The client can also monitor hand position, motion, and gestures with a hand tracking device or camera to obtain hand tracking data, which is likewise transmitted to the server to reflect the user's interactions and actions.
Step 2: receiving the dynamic rendering data sent by the server. The dynamic rendering data is updated by the server according to the head, eye, and hand tracking data and the position, size, angle, or transparency of the cloud terminal product, and reflects the user's interactions and actions.
Step 3: displaying the dynamic rendering data in the virtual scene. Based on the dynamic rendering data, the client displays it in the virtual scene through the graphics engine, creating a more immersive and realistic virtual experience in which the user perceives a simulated cloud terminal device at the visual level.
In the embodiment of the application, the client sends the scene identification information used to determine the form of the cloud terminal product to the server, then receives the rendering data corresponding to that form from the server, and finally displays the rendering data in the virtual scene. The cloud terminal product can thus be displayed in virtual devices such as AR/VR in various forms, including phone, tablet, and computer, improving its applicability and providing a better user experience.
In the display method for a cloud terminal product provided by the embodiments of the application, the executing body may be a display device for a cloud terminal product. In the embodiments of the application, the display device is described by taking, as an example, the case in which the display device for a cloud terminal product performs the display method.
Fig. 7 is a schematic structural diagram of a display device for a cloud terminal product according to an exemplary embodiment of the present application. The display device may be applied to a server and can implement all or part of the content of the embodiment shown in fig. 1. The display device comprises a determining module 701, a generating module 702, and a transmitting module 703.
In the embodiment of the application, a determining module 701 is configured to determine a form of a cloud terminal product and a rendering attribute corresponding to the cloud terminal product according to pre-acquired scene information of a virtual scene, a generating module 702 is configured to generate rendering data displayed in the virtual scene by the cloud terminal product according to the form of the cloud terminal product and the rendering attribute corresponding to the cloud terminal product, and a transmitting module 703 is configured to send the rendering data to a client.
In an optional implementation manner, the determining module 701 is specifically configured to, when determining the form of the cloud terminal product and the rendering attribute corresponding to the cloud terminal product according to the scene information of the pre-acquired virtual scene:
and identifying the scene information based on a pre-trained scene classification model, and determining the form of the cloud terminal product and the rendering attribute corresponding to the cloud terminal product.
In an optional implementation manner, the determining module 701 is further configured to determine, from the existing computing power resources, the computing power resource corresponding to the cloud terminal product according to the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product.
In an optional implementation manner, the generating module 702 is specifically configured to, when being configured to generate the rendering data of the cloud terminal product displayed in the virtual scene according to the form of the cloud terminal product and the rendering attribute corresponding to the cloud terminal product:
Calling a corresponding pre-configured resource file according to the form of the cloud terminal product and the rendering attribute corresponding to the form of the cloud terminal product;
And generating the rendering data corresponding to the form of the cloud terminal product according to the rendering attribute corresponding to the form of the cloud terminal product and the resource file, wherein the rendering data comprises interface element data.
In an optional implementation manner, the rendering data includes video frame data, and the generating module 702 is specifically configured to, when generating the rendering data of the cloud terminal product displayed in the virtual scene according to the form of the cloud terminal product and the rendering attribute corresponding to the cloud terminal product:
Acquiring a two-dimensional (2D) video stream related to the cloud terminal product and compressing the 2D video stream;
receiving curved surface parameters sent by the client, wherein the curved surface parameters are used for determining a projection matrix and a view matrix;
And performing perspective transformation on each frame of image in the 2D video stream by using a perspective matrix according to the form of the cloud terminal product, the rendering attribute corresponding to the cloud terminal product and the curved surface parameter, and generating the curved video frame data, wherein the perspective matrix is obtained by calculating the projection matrix and the view matrix.
In an optional implementation manner, the generating module 702 is further configured to receive head tracking data, eye tracking data, and hand tracking data sent by the client, dynamically render the cloud terminal product based on the head tracking data, the eye tracking data, and the hand tracking data to obtain dynamic rendering data, and send the dynamic rendering data to the client.
In the above optional implementation manner, when the generating module 702 is configured to dynamically render the cloud terminal product based on the head tracking data, the eye tracking data, and the hand tracking data, it is specifically configured to:
acquiring the head position and the viewpoint of a user according to the head tracking data and the eye tracking data, and acquiring the hand position and the hand action of the user according to the hand tracking data;
And adjusting the rendering of the cloud terminal product based on the pre-acquired simulation model according to the head position and the viewpoint of the user and the hand position and the hand action of the user, and obtaining dynamic rendering data.
The display device of the cloud terminal product in the embodiment of the application can be electronic equipment, and can also be a component in the electronic equipment, such as an integrated circuit or a chip. The electronic device may be a cloud terminal.
The display device in the embodiment of the application may be a device with an operating system. The operating system may be the Android operating system, the iOS operating system, or another possible operating system, and the embodiment of the present application is not specifically limited.
The display device for cloud terminal products provided by the embodiment of the present application can implement each process implemented by the method embodiment shown in fig. 1, and in order to avoid repetition, details are not repeated here.
In the other display method for a cloud terminal product provided by the embodiments of the application, the executing body may likewise be a display device for a cloud terminal product. In the embodiments of the application, this other display device is described by taking, as an example, the case in which it performs the corresponding display method.
Fig. 8 is a schematic structural diagram of another display device for a cloud terminal product according to an exemplary embodiment of the present application, where the display device for a cloud terminal product is applied to a client, and may implement all or part of the content in the embodiment shown in fig. 6, and the display device for a cloud terminal product includes a sending module 801, a receiving module 802, and a display module 803.
In the embodiment of the application, a sending module 801 is configured to send scene identification information of a virtual scene to a server, where the scene identification information is used to determine a form of a cloud terminal product, a receiving module 802 is configured to receive rendering data corresponding to the form of the cloud terminal product sent by the server, and a display module 803 is configured to display the rendering data in the virtual scene.
In an optional implementation manner, the sending module 801 is further configured to send pre-acquired surface parameters to the server, where the surface parameters are used to determine a projection matrix and a view matrix.
In an alternative implementation manner, the sending module 801 is further configured to send the acquired head tracking data, eye tracking data, and hand tracking data to the server, the receiving module 802 is further configured to receive dynamic rendering data sent by the server, and the displaying module 803 is further configured to display the dynamic rendering data in the virtual scene.
The display device of the cloud terminal product in the embodiment of the application may be an electronic device, or a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal or a device other than a terminal. By way of example, the electronic device may be an augmented reality (AR) or virtual reality (VR) device, and the embodiment of the application is not particularly limited.
The display device for cloud terminal products provided by the embodiment of the present application can implement each process implemented by the method embodiment shown in fig. 6, and in order to avoid repetition, details are not repeated here.
Optionally, as shown in fig. 9, the embodiment of the present application further provides an electronic device 900, including a processor 901 and a memory 902, where the memory 902 stores a program or instructions executable on the processor 901. When executed by the processor 901, the program or instructions implement the steps of the display method for a cloud terminal product shown in fig. 1 or the display method for a cloud terminal product shown in fig. 6, achieving the same technical effects; to avoid repetition, details are not repeated here.
The electronic device in the embodiment of the application includes the mobile electronic device and the non-mobile electronic device.
The embodiment of the application also provides a computer readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements the above-mentioned method for displaying a cloud terminal product shown in fig. 1 or each process of the method for displaying a cloud terminal product shown in fig. 6, and can achieve the same technical effect, so that repetition is avoided, and no further description is given here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium such as read-only memory (ROM), random access memory (RAM), a magnetic disk, or an optical disc.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running a program or instructions to realize the above-mentioned various processes of the cloud terminal product display method shown in fig. 1 or the cloud terminal product display method shown in fig. 6, and the same technical effects can be achieved, so that repetition is avoided, and no further description is provided here.
It should be understood that the chips referred to in the embodiments of the present application may also be called system-on-chip chips, chip systems, system-on-a-chip, and the like.
An embodiment of the present application provides a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement each process of the above-mentioned method for displaying a cloud terminal product shown in fig. 1 or the method for displaying a cloud terminal product shown in fig. 6, and the same technical effect can be achieved, so that repetition is avoided, and details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed, but may also include performing the functions in a substantially simultaneous manner or in the opposite order depending on the functions involved; for example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. Additionally, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a computer software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.

Claims (10)

1. A method for displaying a cloud terminal product, applied to a server, comprising:
determining the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product according to pre-acquired scene information of a virtual scene;
generating rendering data for displaying the cloud terminal product in the virtual scene according to the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product; and
sending the rendering data to a client.

2. The method according to claim 1, wherein determining the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product according to the pre-acquired scene information of the virtual scene comprises:
identifying the scene information based on a pre-trained scene classification model to determine the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product.

3. The method according to claim 1, wherein after determining the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product according to the pre-acquired scene information of the virtual scene, the method further comprises:
determining, from existing computing power resources, the computing power resources corresponding to the cloud terminal product according to the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product.

4. The method according to claim 1, wherein generating the rendering data for displaying the cloud terminal product in the virtual scene according to the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product comprises:
calling a corresponding pre-configured resource file according to the form of the cloud terminal product and the rendering attributes corresponding to that form; and
generating the rendering data corresponding to the form of the cloud terminal product according to the rendering attributes corresponding to that form and the resource file, wherein the rendering data comprises interface element data.

5. The method according to claim 1, wherein the rendering data comprises video frame data, and generating the rendering data for displaying the cloud terminal product in the virtual scene according to the form of the cloud terminal product and the rendering attributes corresponding to the cloud terminal product comprises:
acquiring a two-dimensional (2D) video stream related to the cloud terminal product and compressing the 2D video stream;
receiving surface parameters sent by the client, wherein the surface parameters are used to determine a projection matrix and a view matrix; and
performing a perspective transformation on each frame of the 2D video stream using a perspective matrix according to the form of the cloud terminal product, the rendering attributes corresponding to the cloud terminal product, and the surface parameters, to generate curved video frame data, wherein the perspective matrix is computed from the projection matrix and the view matrix.

6. The method according to claim 1, wherein after sending the rendering data to the client, the method further comprises:
receiving head tracking data, eye tracking data, and hand tracking data sent by the client;
dynamically rendering the cloud terminal product based on the head tracking data, the eye tracking data, and the hand tracking data to obtain dynamic rendering data; and
sending the dynamic rendering data to the client.

7. The method according to claim 6, wherein dynamically rendering the cloud terminal product based on the head tracking data, the eye tracking data, and the hand tracking data to obtain the dynamic rendering data comprises:
obtaining the user's head position and viewpoint according to the head tracking data and the eye tracking data;
obtaining the user's hand position and hand movement according to the hand tracking data; and
adjusting the rendering of the cloud terminal product, based on a pre-acquired simulation model, according to the user's head position and viewpoint and the user's hand position and hand movement, to obtain the dynamic rendering data.

8. A method for displaying a cloud terminal product, applied to a client, comprising:
sending scene identification information of a virtual scene to a server, wherein the scene identification information is used to determine the form of the cloud terminal product;
receiving rendering data corresponding to the form of the cloud terminal product sent by the server; and
displaying the rendering data in the virtual scene.

9. The method according to claim 8, wherein after sending the scene identification information of the virtual scene to the server, the method further comprises:
sending pre-acquired surface parameters to the server, wherein the surface parameters are used to determine a projection matrix and a view matrix.

10. The method according to claim 8, wherein after displaying the rendering data in the virtual scene, the method further comprises:
sending acquired head tracking data, eye tracking data, and hand tracking data to the server;
receiving dynamic rendering data sent by the server; and
displaying the dynamic rendering data in the virtual scene.
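The scene-driven selection of claim 2 can be illustrated with a minimal Python sketch. This is a hypothetical illustration only: the classifier interface, the label set, and the lookup table are assumptions, not part of the patent.

```python
# Hypothetical sketch for claim 2: a pre-trained scene classifier drives
# the choice of product form and rendering attributes.
from typing import Callable, Dict, Tuple

# Assumed mapping from scene label to (product form, rendering attributes).
FORM_TABLE: Dict[str, Tuple[str, dict]] = {
    "living_room": ("wall_tv",          {"resolution": "4K", "curvature": 0.0}),
    "office":      ("desktop_monitor",  {"resolution": "2K", "curvature": 0.1}),
    "showroom":    ("curved_display",   {"resolution": "4K", "curvature": 0.3}),
}

def determine_form(scene_info: bytes,
                   classify: Callable[[bytes], str]) -> Tuple[str, dict]:
    """Identify the scene with a pre-trained classifier, then look up the
    corresponding cloud terminal product form and rendering attributes."""
    label = classify(scene_info)                    # e.g. "office"
    # Fall back to a default form when the label is unrecognized.
    return FORM_TABLE.get(label, FORM_TABLE["living_room"])
```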
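The perspective transformation of claim 5 can be sketched as follows. The snippet assumes NumPy and the common graphics convention that the perspective matrix is the product of the projection and view matrices; all function names are illustrative and not taken from the patent.

```python
import numpy as np

def perspective_matrix(projection: np.ndarray, view: np.ndarray) -> np.ndarray:
    """Perspective matrix computed from the projection and view matrices,
    as in claim 5 (perspective = projection @ view)."""
    return projection @ view

def warp_frame_corners(m: np.ndarray, width: int, height: int) -> np.ndarray:
    """Project the four corners of a 2D video frame (modelled on the z=0
    plane) through the perspective matrix; a texture-mapping step would
    then fill the resulting quad to produce curved video frame data."""
    corners = np.array([[0.0,   0.0,    0.0, 1.0],
                        [width, 0.0,    0.0, 1.0],
                        [width, height, 0.0, 1.0],
                        [0.0,   height, 0.0, 1.0]])
    clip = corners @ m.T                  # transform to clip space
    return clip[:, :2] / clip[:, 3:4]     # perspective divide -> 2D positions
```

In a full pipeline, the surface parameters of claims 5 and 9 would supply the projection and view matrices, and each decoded frame of the compressed 2D stream would be mapped onto the warped quad.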
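The tracking-driven dynamic rendering of claims 6, 7, and 10 amounts to a request/response loop between client and server. The sketch below is an assumed server-side handler; the data class and the simulation-model methods are hypothetical, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class TrackingData:
    head_position: tuple   # from head tracking data
    viewpoint: tuple       # derived from eye tracking data
    hand_position: tuple   # from hand tracking data
    hand_gesture: str      # e.g. "grab", "swipe"

def dynamic_render(tracking: TrackingData, simulation_model, product) -> bytes:
    """Adjust the rendering of the cloud terminal product from the user's
    head/eye/hand state (claim 7) and return dynamic rendering data."""
    # Hypothetical simulation-model calls: derive the camera pose from the
    # head position and viewpoint, and the product's reaction from the hand.
    pose = simulation_model.pose_for(tracking.head_position, tracking.viewpoint)
    reaction = simulation_model.react_to(tracking.hand_position,
                                         tracking.hand_gesture)
    return product.render(pose, reaction)  # dynamic rendering data for the client
```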
CN202411279801.6A 2024-09-12 2024-09-12 How to display cloud terminal products Pending CN119277123A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411279801.6A CN119277123A (en) 2024-09-12 2024-09-12 How to display cloud terminal products

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411279801.6A CN119277123A (en) 2024-09-12 2024-09-12 How to display cloud terminal products

Publications (1)

Publication Number Publication Date
CN119277123A true CN119277123A (en) 2025-01-07

Family

ID=94105041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411279801.6A Pending CN119277123A (en) 2024-09-12 2024-09-12 How to display cloud terminal products

Country Status (1)

Country Link
CN (1) CN119277123A (en)

Similar Documents

Publication Publication Date Title
JP7604669B2 (en) Special effects display method, device, equipment and medium
CN108282648B (en) A VR rendering method, device, wearable device and readable storage medium
CN111414225B (en) Three-dimensional model remote display method, first terminal, electronic device and storage medium
EP4248405A1 (en) Personalized avatar real-time motion capture
WO2022093939A1 (en) Side-by-side character animation from realtime 3d body motion capture
WO2019114185A1 (en) App remote control method and related devices
WO2018014766A1 (en) Generation method and apparatus and generation system for augmented reality module, and storage medium
CN110568923A (en) unity 3D-based virtual reality interaction method, device, equipment and storage medium
CN111294665A (en) Video generation method and device, electronic equipment and readable storage medium
WO2022237116A1 (en) Image processing method and apparatus
CN109582122A (en) Augmented reality information providing method, device and electronic equipment
CN105554430A (en) Video call method, system and device
CN116485983A (en) Texture generation method of virtual object, electronic device and storage medium
CN107861711B (en) Page adaptation method and device
CN108513090B (en) Method and device for group video session
CN113206993A (en) Method for adjusting display screen and display device
CN112565883A (en) Video rendering processing system and computer equipment for virtual reality scene
CN110691010B (en) Cross-platform and cross-terminal VR/AR product information display system
CN108665510A (en) Rendering method and device of continuous shooting image, storage medium and terminal
CN116258738A (en) Image processing method, device, electronic device and storage medium
CN109934929A (en) The method, apparatus of image enhancement reality, augmented reality show equipment and terminal
EP4485357A2 (en) Image processing method and apparatus, electronic device, and storage medium
CN111524240A (en) Scene switching method, device and augmented reality device
CN119277123A (en) How to display cloud terminal products
CN103870971B (en) The method and its system of a kind of three-dimensional website of structure based on mobile platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination