
CN118118717A - Screen sharing method, device, equipment and medium - Google Patents

Screen sharing method, device, equipment and medium

Info

Publication number
CN118118717A
Authority
CN
China
Prior art keywords
sensor data
screen sharing
target
sharing mode
screen
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211477797.5A
Other languages
Chinese (zh)
Inventor
李蕾
崔新宇
段孝超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202211477797.5A priority Critical patent/CN118118717A/en
Publication of CN118118717A publication Critical patent/CN118118717A/en
Pending legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302 Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • H04N21/43076 Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of the same content streams on multiple devices, e.g. when family members are watching the same movie on different devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/144 Processing image signals for flicker reduction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235 Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436 Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N21/4363 Adapting the video stream to a specific local network, e.g. a Bluetooth® network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/816 Monomedia components thereof involving special video data, e.g. 3D video

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The application provides a screen sharing method, device, equipment and medium. The method comprises: in response to sensor data sent by an anchor end, determining a screen sharing mode selected by the anchor end; if the screen sharing mode selected by the anchor end is a target sharing mode, performing de-jitter processing on the sensor data to obtain optimized sensor data; rendering a display picture according to the optimized sensor data to obtain a target image picture; and sending the target image picture to the anchor end and an audience end, so that the anchor end and the audience end display the target image picture. The application can reduce or even avoid discomfort, such as dizziness, caused to the human eye by image jitter.

Description

Screen sharing method, device, equipment and medium

Technical Field

The embodiments of the present application relate to the field of data processing technology, and in particular to a screen sharing method, apparatus, device and medium.

Background Art

With the development and popularization of Extended Reality (XR) devices, a user can share the screen of an XR device with other users through screen sharing, thereby improving the experience of the XR device in multi-user interaction. Screen sharing, as the name suggests, means sharing the display content of a screen with others exactly as it is.

At present, when the screen of an XR device is shared with other users, the user typically wears the XR device and uses a terminal device serving as a screen sharing client to upload data related to the screen content of the XR device to a cloud server. The cloud server then calls graphics processing resources based on the received data to perform rendering, audio/video encoding and other operations. After that, the cloud server delivers the processed data to the user and the other users, so that the display device on each user end decodes and renders the data delivered by the cloud server and displays the corresponding image picture, audio data and so on. The specific flow is shown in FIG. 1.

However, when the image picture on the screen of the user's XR device shakes in three-dimensional space, for example with high-frequency pitch, roll and yaw motion, the shaking produces unnatural jitter in the image picture. Moreover, this jittering picture is rendered by the cloud server and delivered to the user and the other users in real time without any distinction. As a result, after watching the jittering picture, the user and the other users are affected by motion sickness caused by a vestibular-visual mismatch and clearly feel dizziness and other discomfort.

Summary of the Invention

The present application provides a screen sharing method, apparatus, device and medium, which can reduce or even avoid discomfort such as dizziness caused to the human eye by image jitter.

In a first aspect, an embodiment of the present application provides a screen sharing method, including:

in response to sensor data sent by an anchor end, determining a screen sharing mode selected by the anchor end;

if the screen sharing mode selected by the anchor end is a target sharing mode, performing de-jitter processing on the sensor data to obtain optimized sensor data;

rendering a display picture according to the optimized sensor data to obtain a target image picture; and

sending the target image picture to the anchor end and an audience end, so that the anchor end and the audience end display the target image picture.

In a second aspect, an embodiment of the present application provides a screen sharing apparatus, including:

a mode determination module, configured to determine, in response to sensor data sent by an anchor end, a screen sharing mode selected by the anchor end;

a de-jitter processing module, configured to perform de-jitter processing on the sensor data to obtain optimized sensor data if the screen sharing mode selected by the anchor end is a target sharing mode;

an image rendering module, configured to render a display picture according to the optimized sensor data to obtain a target image picture; and

an image distribution module, configured to send the target image picture to the anchor end and an audience end, so that the anchor end and the audience end display the target image picture.

In a third aspect, an embodiment of the present application provides an electronic device, including:

a processor and a memory, wherein the memory is configured to store a computer program, and the processor is configured to call and run the computer program stored in the memory to execute the screen sharing method described in the embodiment of the first aspect or in any of its implementations.

In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium for storing a computer program, the computer program causing a computer to execute the screen sharing method described in the embodiment of the first aspect or in any of its implementations.

In a fifth aspect, an embodiment of the present application provides a computer program product comprising program instructions which, when run on an electronic device, cause the electronic device to execute the screen sharing method described in the embodiment of the first aspect or in any of its implementations.

The technical solutions disclosed in the embodiments of the present application have at least the following beneficial effects:

Upon receiving the sensor data sent by the anchor end, the screen sharing mode selected by the anchor end is determined; if the screen sharing mode is the target sharing mode, de-jitter processing is performed on the sensor data to obtain optimized sensor data, the display picture is rendered based on the optimized sensor data to obtain a target image picture, and the target image picture is then sent to the anchor end and the audience end, so that the anchor end and the audience end display the target image picture. By determining the screen sharing mode selected by the anchor end during screen sharing and de-jittering the sensor data when that mode is the target sharing mode, the subtle, high-frequency, unnatural jitter in the unstable sensor data is filtered out to obtain stable sensor data. When the display picture is then rendered based on the stable sensor data, a stable image picture is obtained, and on the basis of this stable image picture, discomfort such as dizziness caused to the human eye by image jitter can be reduced or even avoided.

Brief Description of the Drawings

In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.

FIG. 1 is a schematic diagram of a screen sharing method provided in the related art;

FIG. 2 is a schematic flowchart of a screen sharing method provided in an embodiment of the present application;

FIG. 3 is a schematic diagram of a sharing mode selection interface provided in an embodiment of the present application;

FIG. 4 is a schematic diagram of a user looking outward within a spherical model, provided in an embodiment of the present application;

FIG. 5 is a schematic flowchart of another screen sharing method provided in an embodiment of the present application;

FIG. 6 is a schematic diagram of de-jitter processing performed on 6DoF data, provided in an embodiment of the present application;

FIG. 7 is a schematic block diagram of a screen sharing apparatus provided in an embodiment of the present application;

FIG. 8 is a schematic block diagram of an electronic device provided in an embodiment of the present application.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.

It should be noted that the terms "first", "second" and the like in the specification, the claims and the above drawings of the present application are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product or server that comprises a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product or device.

The present application is applicable to XR device screen sharing scenarios. At present, when the image picture of an XR device screen is shared, the picture shakes in three-dimensional space as the user moves, for example with high-frequency pitch, roll and yaw motion. Such shaking produces unnatural jitter in the image picture, and the jittering picture is rendered by the cloud server and delivered to the users without any distinction, so that a user watching the jittering picture is affected by motion sickness caused by a vestibular-visual mismatch and clearly feels dizziness and other discomfort. The present application therefore designs a screen sharing solution that can reduce or even avoid such discomfort caused to the human eye by image jitter.

To facilitate understanding of the embodiments of the present application, before the individual embodiments are described, some concepts involved in all the embodiments of the present application are first explained, as follows:

1) Virtual Reality (VR): a technology for creating and experiencing a virtual world. It computationally generates a virtual environment and is a simulation based on multi-source information (the virtual reality referred to herein includes at least visual perception, and may also include auditory perception, tactile perception, motion perception, and even taste and olfactory perception), realizing a fused, interactive three-dimensional dynamic view of the virtual environment and a simulation of entity behavior, so that the user is immersed in the simulated virtual reality environment. It enables applications in a variety of virtual environments such as maps, games, videos, education, medical care, simulation, collaborative training, sales, assisted manufacturing, maintenance and repair.

2) Virtual reality device (VR device): a terminal for realizing virtual reality effects, usually provided in the form of glasses, a head-mounted display (HMD) or contact lenses for realizing visual perception and other forms of perception. Of course, the form of a virtual reality device is not limited to these, and it can be further miniaturized or enlarged according to actual needs.

Optionally, the virtual reality devices described in the embodiments of the present application may include, but are not limited to, the following types:

2.1) PC-based virtual reality (PCVR) device, which uses a PC to perform the computation related to virtual reality functions and to output data; the external PC-based virtual reality device uses the data output by the PC to achieve the virtual reality effect.

2.2) Mobile virtual reality device, which supports mounting a mobile terminal (such as a smartphone) in various ways (for example, a head-mounted display provided with a dedicated slot). Through a wired or wireless connection with the mobile terminal, the mobile terminal performs the computation related to virtual reality functions and outputs data to the mobile virtual reality device, for example watching virtual reality videos through an APP on the mobile terminal.

2.3) All-in-one virtual reality device, which has a processor for performing the computation related to virtual functions, and thus has independent virtual reality input and output capabilities. It does not need to be connected to a PC or a mobile terminal, and offers a high degree of freedom in use.

3) Augmented Reality (AR): a technology that, while a camera captures images, computes the camera's pose parameters in the real world (also called the three-dimensional world or physical world) in real time, and adds virtual elements to the images captured by the camera based on those pose parameters. The virtual elements include, but are not limited to, images, videos and three-dimensional models. The goal of AR technology is to overlay the virtual world onto the real world on the screen for interaction.

4) Mixed Reality (MR): a simulated setting that integrates computer-created sensory input (for example, virtual objects) with sensory input from a physical setting or a representation thereof. In some MR settings, the computer-created sensory input can adapt to changes in the sensory input from the physical setting. In addition, some electronic systems used to present MR settings can monitor orientation and/or position relative to the physical setting, so that virtual objects can interact with real objects (that is, physical elements from the physical setting or representations thereof). For example, the system can monitor movement so that a virtual plant appears stationary relative to a physical building.

5) Extended Reality (XR): all combined real and virtual environments and human-machine interactions generated by computer technology and wearable devices, encompassing virtual reality (VR), augmented reality (AR), mixed reality (MR) and other forms.

Having introduced some of the concepts involved in the embodiments of the present application, a screen sharing method provided in the embodiments of the present application is described in detail below with reference to the drawings.

FIG. 2 is a schematic flowchart of a screen sharing method provided in an embodiment of the present application. The embodiment is applicable to scenarios in which an XR device is used for screen sharing, and the screen sharing method can be executed by a screen sharing apparatus. The screen sharing apparatus may be composed of hardware and/or software and may be integrated into an electronic device. In the embodiment of the present application, the electronic device may be a server, such as a cloud server, which is not limited here.

As shown in FIG. 2, the method may include the following steps.

S101: in response to sensor data sent by an anchor end, determine a screen sharing mode selected by the anchor end.

In the embodiments of the present application, the anchor end refers to an anchor-end device, and the anchor-end device is controlled by an anchor user. The anchor-end device is an XR device, which may be a VR device, an AR device, an MR device or the like; the present application does not specifically limit this.

Since an XR device is usually used together with a handheld device, the user can interact with the XR device via the handheld device. The handheld device may be, for example, a handle or a hand controller. Therefore, in the present application, the anchor-end device used by the anchor user may consist of an XR device and a handheld device. Of course, if the XR device has other interaction capabilities, such as gesture interaction, voice interaction or eye tracking, the anchor-end device used by the anchor user may optionally be the XR device alone; this is not specifically limited here.

It should be understood that in the present application the sensor data is specifically 6-degree-of-freedom data.

A degree of freedom refers to a direction in which the user can move in 3D space, and the number of such directions is the number of degrees of freedom. In the present application, the number of directions characterizing the degrees of freedom is six in total.

In the present application, 6 degrees of freedom (DoF) means that, in addition to the ability to rotate about the X, Y and Z axes, the user also has the ability to move along the X, Y and Z axes. That is, 6DoF includes translational degrees of freedom and rotational degrees of freedom. The translational degrees of freedom are forward/backward, up/down and left/right, and the rotational degrees of freedom are pitch, roll and yaw. In other words, the six degrees of freedom consist of three types of translational freedom and three types of rotational freedom.

Since human body motion can be roughly divided into two categories, rotation and translation, and 6DoF consists of exactly rotational and translational degrees of freedom, an XR device supporting 6DoF can simulate almost all of the user's head movements.

In other words, no matter how complex any possible motion of an object is, it can be represented by a combination of translation and rotation, that is, it can be expressed by 6DoF data.

As mentioned above, when the anchor-end device used by the anchor user consists of an XR device and a handheld device, the sensor data in the present application is the 6DoF data of the XR device together with the 6DoF data of the handheld device. When the anchor-end device used by the anchor user is an XR device alone, the sensor data in the present application is the 6DoF data of the XR device.

Optionally, when the anchor user needs to share the screen of the anchor-end device with other users, a virtual room for screen sharing can first be created, and the other users acting as the audience can then be invited into the virtual room, laying the foundation for the subsequent screen sharing.

The audience users invited here are in fact audience-end devices. In this embodiment, an audience-end device can be any hardware device with a display screen, such as an XR device, a personal computer, a smartphone or a laptop computer, which is not limited here.

In the embodiments of the present application, creating a virtual room includes the following cases.

In the first case, if a screen sharing client is installed on the anchor-end device, the anchor user can control the anchor-end device to use the screen sharing client to send the server a request instruction for creating a virtual room. Upon receiving the request instruction sent by the anchor-end device, the server allocates a virtual room to the anchor-end device and returns the virtual room identifier to the anchor-end device. The anchor-end device then enters the virtual room based on the virtual room identifier returned by the server, completing the creation of the virtual room. The virtual room identifier is information that uniquely identifies the virtual room, such as a virtual room number or a virtual room ID, which is not limited here.

In the second case, if no screen sharing client is installed on the anchor-end device, the anchor user can control the anchor-end device to use a terminal device acting as the screen sharing client to send the server a request instruction for creating a virtual room. Upon receiving the request instruction sent by the anchor-end device, the server allocates a virtual room to the anchor-end device and returns the virtual room identifier to the anchor-end device. The anchor-end device then enters the virtual room based on the virtual room identifier returned by the server, completing the creation of the virtual room.

It should be understood that the above terminal device may be an intelligent terminal such as a personal computer, a smartphone or a laptop computer.

The anchor user can then use the anchor-end device to share the virtual room identifier with audience-end devices controlled by other users, so that the audience-end devices enter the created virtual room based on the virtual room identifier; alternatively, the anchor user can use the anchor-end device to actively invite audience-end devices controlled by other users into the created virtual room.

After the anchor-end device and the audience-end devices have entered the same virtual room, the anchor user can control the anchor-end device to perform screen sharing operations in the virtual room.

Optionally, before screen sharing starts, the anchor-end device first displays a sharing mode selection interface on its screen in response to a detected screen sharing instruction. The sharing mode selection interface includes prompt information, a target sharing mode control and a normal sharing mode control, as shown in FIG. 3. The prompt information may be, for example, "Please select the screen sharing mode you want". When it is detected that the anchor user has triggered the target sharing mode control, the anchor user has completed the screen sharing mode setting operation, and the anchor-end device closes the sharing mode selection interface. Alternatively, when a specified display duration is reached and no trigger instruction from the anchor user has been detected, the target sharing mode is selected for the anchor user by default and the sharing mode selection interface is closed.

It should be understood that in the present application the target sharing mode refers to the de-jitter sharing mode. The de-jitter sharing mode is mainly used to filter out the unnatural jitter present in the 6DoF data to obtain smooth 6DoF data.

The anchor user can then follow the guidance information and use the anchor-end device to upload the shared content, such as presentation slides or live video data, to the server, so that the server can perform screen sharing operations toward the audience-end devices in the same virtual room based on the shared content.

After the upload is complete, the anchor-end device can start sharing the screen. While the screen is being shared, the anchor user adjusts the image picture displayed on the screen by controlling the anchor-end device. At the same time, the anchor-end device acquires its own 6DoF data in real time and sends the acquired 6DoF data to the server in real time, so that the server performs image rendering and image sharing operations based on the 6DoF data to realize screen sharing.

In the embodiments of the present application, when acquiring its own 6DoF data, the anchor-end device can control its built-in sensors to collect the 6DoF data in real time.

It should be understood that the sensor in the anchor-end device may be, for example, a nine-axis sensor or an inertial measurement unit (IMU), which is not specifically limited here.

In addition, when the anchor-end device sends the acquired 6DoF data to the server in real time, if the anchor-end device consists of an XR device and a handheld device, the handheld device first sends its own 6DoF data to the XR device, and the XR device then sends its own 6DoF data together with the 6DoF data of the handheld device to the server according to a pre-defined transmission format. If the anchor-end device is an XR device alone, the XR device sends its own 6DoF data to the server according to the pre-defined transmission format.

The pre-defined transmission format may be as follows:

Here, TransQuaternion rotation represents quaternion data, and TransVector3 position represents three-dimensional coordinate data, as shown below:
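A sketch of what such a transmission structure could look like is given below; the field layout and the packet-level wrapper are illustrative assumptions, and only the TransQuaternion rotation and TransVector3 position members named above are taken from the description.

struct TransQuaternion {   // quaternion data: x, y, z, w
    float x, y, z, w;
};

struct TransVector3 {      // three-dimensional coordinate data
    float x, y, z;
};

struct TransPose {         // one device pose: rotation plus position
    TransQuaternion rotation;
    TransVector3 position;
};

struct ScreenSharePacket { // hypothetical upload from the anchor-end device
    TransPose headset;     // 6DoF data of the XR device
    TransPose controller;  // 6DoF data of the handheld device, if present
};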

Considering that in actual use it is usually necessary to convert a quaternion into Euler angle matrix data, the present application can convert the above quaternion into Euler angle matrix data by the following formula:
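A standard form of this quaternion-to-Euler-angle conversion (yaw-pitch-roll convention, with w as the scalar component) is given below; the exact formula used by the present application may differ in axis convention.

roll  = atan2(2*(w*x + y*z), 1 - 2*(x*x + y*y))
pitch = arcsin(2*(w*y - z*x))
yaw   = atan2(2*(w*z + x*y), 1 - 2*(y*y + z*z))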

where q0, q1, q2 and q3 are the quaternion components, corresponding respectively to x, y, z and w in TransQuaternion rotation.

In actual use, the anchor user's body inevitably moves while controlling the anchor-end device, and this body motion causes the image picture on the screen of the anchor-end device to shake in three-dimensional space, producing unnatural jitter in the image picture. Therefore, in order to reduce or even avoid the simulated motion sickness caused by image jitter, which brings dizziness and other discomfort to the human eye when the user watches the image picture displayed by the device, the anchor user usually selects the de-jitter sharing mode when sharing the screen. The server then performs de-jitter processing based on this mode, renders the display picture based on the de-jittered data to obtain a stable image picture, and shares the image picture with the different user ends.

For the above reasons, after receiving the 6DoF data sent by the anchor-end device, the server can determine whether the screen sharing mode selected by the anchor-end device is the de-jitter sharing mode or the normal sharing mode, and then perform the corresponding image rendering, image sharing and other operations according to the determined screen sharing mode.

When determining the screen sharing mode selected by the anchor-end device, the server can query a data list to determine the screen sharing mode identifier corresponding to the anchor-end device, and then determine the screen sharing mode selected by the anchor-end device according to that identifier. The screen sharing mode identifier is used to characterize the screen sharing mode selected by the anchor user.

For example, assume that the screen sharing mode identifier "true" identifies the de-jitter sharing mode and the identifier "false" identifies the normal sharing mode. When the screen sharing mode identifier corresponding to the anchor-end device is determined to be "true", it can be determined that the anchor-end device has selected the de-jitter sharing mode; conversely, when the identifier is determined to be "false", it can be determined that the anchor-end device has selected the normal sharing mode.
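A minimal sketch of this lookup is shown below; the map from anchor-device identifier to mode identifier and the enum names are assumptions for illustration.

#include <string>
#include <unordered_map>

enum class ShareMode { DeJitter, Normal };

// Hypothetical data list: anchor-device ID -> screen sharing mode identifier.
std::unordered_map<std::string, std::string> g_modeList;

// "true" identifies the de-jitter (target) sharing mode, "false" the normal mode.
ShareMode LookupShareMode(const std::string &anchorDeviceId) {
    auto it = g_modeList.find(anchorDeviceId);
    if (it != g_modeList.end() && it->second == "true") return ShareMode::DeJitter;
    return ShareMode::Normal;
}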

It should be noted that the simulated motion sickness mentioned above is caused by an inconsistency between the state the user observes visually and the real state of the body. For example, when the user sits or stands while controlling character movement through the handle, the visual information the user receives is "I am moving", while the vestibular organ in the middle ear, which senses the body's state, sends an "I am not moving" signal to the brain. These contradictory signals make the brain believe that it is in an abnormal and dangerous state, and the brain then produces a strong sense of dizziness to warn the user to leave the current state as soon as possible.

S102: if the screen sharing mode selected by the anchor end is the target sharing mode, perform de-jitter processing on the sensor data to obtain optimized sensor data.

When it is determined that the screen sharing mode selected by the anchor-end device is the target sharing mode (the de-jitter sharing mode), the server of the present application can perform de-jitter processing on the 6DoF data sent by the anchor-end device, so as to filter out the unnatural jitter in the 6DoF data and obtain stable 6DoF data, that is, the optimized sensor data, which provides the basis for subsequently obtaining a stable image picture.

Optionally, the server in the present application may use a preset de-jitter processing rule to de-jitter the 6DoF data and obtain the optimized 6DoF data. For example, a data smoothing algorithm may be used to smooth the 6DoF data, so as to filter out the subtle, high-frequency, unnatural jitter in the 6DoF data and obtain the optimized 6DoF data.

The data smoothing algorithm may be a moving average method, an exponential moving average method, a convolution smoothing algorithm (Savitzky-Golay filtering, or SG filtering for short) or the like, which is not specifically limited here.
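As an illustrative sketch, a simple sliding-average filter over a 6DoF sample is shown below; the window size and the per-component treatment of the sample are assumptions, and the exponential or Savitzky-Golay variants would replace the averaging step.

#include <array>
#include <cstddef>
#include <deque>

// One 6DoF sample: three translation components followed by three rotation components.
using Dof6 = std::array<double, 6>;

// Simple moving-average filter: each output component is the mean of the last
// `window` input samples, which suppresses subtle high-frequency jitter.
class MovingAverage6Dof {
public:
    explicit MovingAverage6Dof(std::size_t window) : window_(window) {}

    Dof6 Push(const Dof6 &sample) {
        history_.push_back(sample);
        if (history_.size() > window_) history_.pop_front();

        Dof6 smoothed{};
        for (const Dof6 &s : history_)
            for (std::size_t i = 0; i < 6; ++i) smoothed[i] += s[i];
        for (std::size_t i = 0; i < 6; ++i) smoothed[i] /= history_.size();
        return smoothed;
    }

private:
    std::size_t window_;
    std::deque<Dof6> history_;
};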

S103: render the display picture according to the optimized sensor data to obtain a target image picture.

It should be understood that in the present application the target image picture may be a texture image.

Optionally, the server may import the optimized 6DoF data into a rendering resource to render the image of a specified viewport. Alternatively, the server may import the optimized 6DoF data into a cloud rendering server to render the image of the specified viewport, which is not limited here.

Usually, the image displayed in the virtual space is a spherical model, and the user effectively stands at the center of the sphere looking outward. Since the viewing angle of the human eye is limited, at any one time the user can only see a small part of the 360-degree sphere; other parts of the sphere become visible only when the user rotates the viewing angle, as shown in FIG. 4. The part of the 360-degree sphere that the user sees at one time is the viewport, and the image displayed on this viewport is the viewport image.

It should be understood that, since the 6DoF data determines the position and angle of the anchor user's line of sight, the server can derive the line-of-sight position and angle from the 6DoF data and then determine the corresponding viewport, that is, the specified viewport, based on that position and angle. The server can then perform image rendering on the display picture corresponding to the specified viewport to obtain the image picture of the specified viewport.
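As an illustrative sketch, one way to derive a gaze direction from the optimized pose for viewport selection is shown below; the choice of the local forward axis and the use of GLM are assumptions.

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Rotating the local forward axis by the head rotation gives the gaze direction,
// which together with the position identifies the viewport on the sphere model.
glm::vec3 GazeDirection(const glm::quat &headRotation) {
    const glm::vec3 forward(0.0f, 0.0f, -1.0f);   // assumed local forward axis
    return glm::normalize(headRotation * forward);
}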

In some implementations, when the server of the present application renders the optimized 6DoF data to obtain the target image picture, it can call an application interface opened by the SteamVR platform, such as the openvr interface, and import the optimized 6DoF data according to the import rules of that interface. The SteamVR platform then renders the display picture of the corresponding viewport according to the imported optimized 6DoF data to obtain the texture image of the corresponding viewport.

The DriverPose_t data structure provided externally by the above openvr interface includes the following information:

The optimized 6DoF data is imported mainly into the following fields of the above data structure:

double vecPosition[3];
double vecVelocity[3];
double vecAcceleration[3];
vr::HmdQuaternion_t qRotation;
double vecAngularVelocity[3];
double vecAngularAcceleration[3];
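An illustrative sketch of how these fields could be populated is shown below; DriverPose_t, HmdQuaternion_t and TrackingResult_Running_OK come from the OpenVR driver header, while OptimizedPose is a hypothetical container for the de-jittered 6DoF sample and is not part of OpenVR.

#include <openvr_driver.h>

// Hypothetical container for one de-jittered 6DoF sample (not part of OpenVR).
struct OptimizedPose {
    double position[3];   // x, y, z
    double rotation[4];   // quaternion x, y, z, w
};

// Copy an optimized pose into the DriverPose_t fields listed above.
vr::DriverPose_t MakeDriverPose(const OptimizedPose &pose) {
    vr::DriverPose_t out = {};                            // zero-initialize every field
    out.qWorldFromDriverRotation = {1.0, 0.0, 0.0, 0.0};  // identity world/driver transform
    out.qDriverFromHeadRotation = {1.0, 0.0, 0.0, 0.0};   // identity driver/head transform
    for (int i = 0; i < 3; ++i) {
        out.vecPosition[i] = pose.position[i];
        out.vecVelocity[i] = 0.0;                         // velocities and accelerations
        out.vecAcceleration[i] = 0.0;                     // are left at zero in this sketch
        out.vecAngularVelocity[i] = 0.0;
        out.vecAngularAcceleration[i] = 0.0;
    }
    out.qRotation.x = pose.rotation[0];
    out.qRotation.y = pose.rotation[1];
    out.qRotation.z = pose.rotation[2];
    out.qRotation.w = pose.rotation[3];
    out.poseIsValid = true;
    out.deviceIsConnected = true;
    out.result = vr::TrackingResult_Running_OK;
    return out;
}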

In addition, after the optimized 6DoF data has been imported through the openvr interface, the optimized 6DoF data (Rotation and Position) needs to be converted into a rotation matrix in the SubmitLayer interface of openvr.

In the present application, the rotation matrix can be obtained by calling the general-purpose function glm::quat_cast(rotationMatrix);.
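A minimal sketch of building the matrix from the optimized Rotation and Position with GLM is shown below; note that glm::mat4_cast performs the quaternion-to-matrix direction, while glm::quat_cast mentioned above goes from a matrix back to a quaternion, so the function used here is an assumption about the intended conversion.

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// Build a 4x4 model matrix from the optimized Rotation (quaternion) and Position.
glm::mat4 PoseToMatrix(const glm::quat &rotation, const glm::vec3 &position) {
    glm::mat4 rotationMatrix = glm::mat4_cast(rotation);                      // rotation part
    glm::mat4 translationMatrix = glm::translate(glm::mat4(1.0f), position);  // translation part
    return translationMatrix * rotationMatrix;                                // rotate first, then translate
}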

S104: send the target image picture to the anchor end and the audience end, so that the anchor end and the audience end display the target image picture.

Optionally, the server of the present application can encode and package the rendered target image picture and then, using SFU (Selective Forwarding Unit) data forwarding, synchronously forward the packaged data to the anchor-end device and the audience-end devices in the same virtual room. After receiving the packaged data sent by the server, the anchor-end device and the audience-end devices decode and render the packaged data so as to display a stable image picture, and on the basis of this stable image picture, discomfort such as dizziness caused to the human eye by image jitter can be reduced or even avoided.

In the embodiments of the present application, the SFU is a server-side program that routes and forwards the audio and video data streams of Web Real-Time Communication (WebRTC) clients. The core function of an SFU server is to establish a connection with each WebRTC peer client, receive the audio and video data from each of them, and provide one-to-many capability (that is, forward one client's stream to the other WebRTC peer clients).

In another implementation scenario of the present application, if, after the 6DoF data sent by the anchor-end device has been received, it is determined that the screen sharing mode selected by the anchor-end device is the normal sharing mode, the server renders the received 6DoF data according to the existing screen sharing method to obtain the target image picture, and sends the target image picture to the anchor-end device and the audience-end devices.

It can be understood that the present application adds a de-jitter sharing mode on top of the existing screen sharing solution, and uses this mode to smooth the 6DoF data with unnatural jitter uploaded by the anchor-end device, so as to filter out the subtle, high-frequency jitter in the 6DoF data and obtain stable 6DoF data. After the display picture is rendered based on this stable 6DoF data, a stable image picture can be obtained, so that when the anchor user and the audience users watch the stable image picture displayed on their respective devices, the simulated motion sickness caused by image jitter, and the resulting dizziness and other discomfort felt when watching the image picture displayed by the device, can be reduced or even avoided.

In the screen sharing method provided in the embodiments of the present application, upon receiving the sensor data sent by the anchor end, the screen sharing mode selected by the anchor end is determined; if the screen sharing mode selected by the anchor end is the target sharing mode, de-jitter processing is performed on the sensor data to obtain optimized sensor data, the display picture is rendered based on the optimized sensor data to obtain a target image picture, and the target image picture is then sent to the anchor end and the audience end, so that the anchor end and the audience end display the target image picture. By determining the screen sharing mode selected by the anchor end during screen sharing and de-jittering the sensor data when that mode is the target sharing mode, the subtle, high-frequency, unnatural jitter in the unstable sensor data is filtered out to obtain stable sensor data; rendering the display picture based on the stable sensor data then yields a stable image picture, on the basis of which discomfort such as dizziness caused to the human eye by image jitter can be reduced or even avoided. In addition, when the audience end and/or the anchor end is an XR device capable of providing a virtual space, the present application can, in addition to reducing or eliminating the user's sense of dizziness, also extend the time the user remains immersed in the virtual space and increase the user's rate of repeated use.

As described above, the present application performs de-jitter processing on the sensor data according to the de-jitter sharing mode selected by the anchor end, and then renders a stable target image picture based on the optimized sensor data, thereby reducing jitter in the rendered image picture and thus reducing or even avoiding the dizziness caused to the user by image jitter.

On the basis of the above embodiment, the de-jitter processing of the 6DoF data in the present application is further explained below with reference to FIG. 5.

As shown in FIG. 5, the method may include the following steps.

S201: in response to sensor data sent by the anchor end, determine a screen sharing mode selected by the anchor end.

S202: if the screen sharing mode selected by the anchor end is the target sharing mode, obtain a target smoothing algorithm.

S203: determine a target threshold range corresponding to the target smoothing algorithm.

The target smoothing algorithm in the present application may be any conventional data smoothing algorithm, such as a moving average method, an exponential moving average method or a convolution smoothing algorithm (Savitzky-Golay filtering, or SG filtering for short).

That is, the server can randomly select one algorithm from all the configured conventional data smoothing algorithms as the target smoothing algorithm.

Since different smoothing algorithms rely on different threshold ranges when performing smoothing, after selecting the target smoothing algorithm from the conventional data smoothing algorithms, the server also needs to determine the target threshold range corresponding to that algorithm.

Optionally, the present application can determine the target threshold range corresponding to the target smoothing algorithm by querying a correspondence between smoothing algorithms and threshold ranges. In this correspondence, a threshold range may be the default threshold range of the smoothing algorithm, or a threshold range customized by a user, such as the manufacturer, based on the required de-jitter precision, which is not specifically limited here.

S204: perform de-jitter processing on the sensor data according to the target smoothing algorithm and the target threshold range to obtain optimized sensor data.

Optionally, the server of the present application first compares the received 6DoF data with the target threshold range to determine whether the 6DoF data falls within the target threshold range. If the 6DoF data falls within the target threshold range, the 6DoF data contains unnatural jitter and needs to be optimized. If the 6DoF data does not fall within the target threshold range, the 6DoF data is stable, no optimization is needed, and the viewport position remains unchanged.

When de-jittering 6DoF data that contains unnatural jitter, the present application first calculates an average value from the target threshold range and then takes this average value as the smoothing value, so that, based on this smoothing value, the server can use the target smoothing algorithm to de-jitter the 6DoF data containing unnatural jitter and obtain stable 6DoF data.

When the target smoothing algorithm is used to de-jitter the 6DoF data containing unnatural jitter based on the smoothing value, the relationship between the 6DoF data and the smoothing value is determined first: if the 6DoF data is greater than the smoothing value, the 6DoF data is adjusted to a first value; if the 6DoF data is less than or equal to the smoothing value, the 6DoF data is adjusted to a second value.

The first value may be 1, and the second value may be 0.

For example, assume the target threshold range is [a, b] and the 6DoF data is k. When k falls within [a, b], the server calculates the average of the range as (a + b) / 2 and then compares k with (a + b) / 2: when k is greater than (a + b) / 2, the 6DoF data is adjusted to 1, and when k is less than or equal to (a + b) / 2, the 6DoF data is adjusted to 0, as shown in FIG. 6.

That is, when the present application de-jitters the 6DoF data based on the smoothing value calculated from the target threshold range, 6DoF data greater than the smoothing value is adjusted to the first value and 6DoF data less than or equal to the smoothing value is adjusted to the second value, so that the optimized 6DoF data keeps to one and the same direction instead of being adjusted back and forth between opposite directions. This keeps the viewport image within a stable range as far as possible, thereby reducing or even avoiding the dizziness caused to the user's eyes by image jitter resulting from the user's small-range motion.
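A literal sketch of this rule is shown below; the threshold range [a, b], the mid-point smoothing value and the first/second values of 1 and 0 follow the description above, while applying the rule per 6DoF component is an assumption.

// De-jitter rule described above: only values falling inside the target threshold
// range [a, b] are treated as unnatural jitter; they are snapped to 1 when above the
// smoothing value (a + b) / 2 and to 0 otherwise, so successive samples stop
// oscillating back and forth around the smoothing value.
double DeJitterComponent(double k, double a, double b) {
    if (k < a || k > b) return k;          // stable data: keep it, viewport unchanged
    const double smooth = (a + b) / 2.0;   // smoothing value derived from the range
    return (k > smooth) ? 1.0 : 0.0;       // first value / second value from the text
}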

S205, rendering the display picture according to the optimized sensor data to obtain a target image picture.

S206, sending the target image picture to the anchor terminal and the audience terminal, so that the anchor terminal and the audience terminal display the target image picture.

In the screen sharing method provided by the embodiments of the present application, when sensor data sent by the anchor terminal is received, the screen sharing mode selected by the anchor terminal is determined. If the selected screen sharing mode is the target sharing mode, the sensor data is de-jittered to obtain optimized sensor data, the display picture is rendered based on the optimized sensor data to obtain a target image picture, and the target image picture is then sent to the anchor terminal and the audience terminal so that both display it. By determining the screen sharing mode selected by the anchor terminal during screen sharing and de-jittering the sensor data when that mode is the target sharing mode, the subtle, high-frequency, unnatural jitter in the unstable sensor data is filtered out and stable sensor data is obtained. Rendering the display picture from this stable sensor data then yields a stable image picture, which reduces or even avoids the dizziness and other discomfort that image jitter causes to the human eye. In addition, when the audience terminal and/or the anchor terminal is an XR device capable of providing a virtual space, the present application not only reduces or eliminates the user's dizziness but also extends the time the user stays immersed in the virtual space and increases the user's rate of repeat use.
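
As an orienting sketch only, the overall server-side flow recapped above might be wired together as follows; the mode identifier, the session layout and the injected dejitter, render_view and broadcast callables are hypothetical placeholders, not a real API.

```python
TARGET_SHARING_MODE = "vr_panorama"  # hypothetical identifier for the target sharing mode

def handle_sensor_data(session, sensor_data, dejitter, render_view, broadcast):
    """Process one batch of anchor-side sensor data on the server.

    dejitter, render_view and broadcast are injected callables standing in for
    the de-jitter, rendering and delivery stages described in the text.
    """
    if session["mode"] == TARGET_SHARING_MODE:        # mode recorded when sharing started
        sensor_data = [dejitter(k) for k in sensor_data]
    frame = render_view(session["scene"], sensor_data)
    broadcast(frame, [session["anchor"], *session["audience"]])
```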

A screen sharing device provided in an embodiment of the present application is described in detail below with reference to FIG. 7. FIG. 7 is a schematic block diagram of a screen sharing device provided in an embodiment of the present application.

As shown in FIG. 7, the screen sharing device 300 includes: a mode determination module 310, a de-jitter processing module 320, an image rendering module 330 and an image distribution module 340.

The mode determination module 310 is configured to determine, in response to sensor data sent by the anchor terminal, the screen sharing mode selected by the anchor terminal;

the de-jitter processing module 320 is configured to perform de-jitter processing on the sensor data to obtain optimized sensor data if the screen sharing mode selected by the anchor terminal is the target sharing mode;

the image rendering module 330 is configured to render the display picture according to the optimized sensor data to obtain a target image picture;

the image distribution module 340 is configured to send the target image picture to the anchor terminal and the audience terminal, so that the anchor terminal and the audience terminal display the target image picture.
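
To make the module layout of FIG. 7 concrete, a schematic composition in code might look like the sketch below; the class and method names simply mirror the module names above and are assumptions, not an actual implementation.

```python
class ScreenSharingDevice:
    """Schematic counterpart of device 300 with its four modules."""

    def __init__(self, mode_determination, dejitter_processing,
                 image_rendering, image_distribution):
        self.mode_determination = mode_determination    # module 310
        self.dejitter_processing = dejitter_processing  # module 320
        self.image_rendering = image_rendering          # module 330
        self.image_distribution = image_distribution    # module 340

    def on_sensor_data(self, sensor_data, anchor, audience):
        mode = self.mode_determination.determine_mode(anchor)
        if self.mode_determination.is_target_mode(mode):
            sensor_data = self.dejitter_processing.dejitter(sensor_data)
        frame = self.image_rendering.render(sensor_data)
        self.image_distribution.distribute(frame, anchor, audience)
```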

In an optional implementation of the embodiments of the present application, the de-jitter processing module 320 includes:

an algorithm acquisition unit, configured to acquire a target smoothing algorithm;

a range determination unit, configured to determine a target threshold range corresponding to the target smoothing algorithm;

a de-jitter unit, configured to perform de-jitter processing on the sensor data according to the target smoothing algorithm and the target threshold range to obtain optimized sensor data.

In an optional implementation of the embodiments of the present application, the de-jitter unit is specifically configured to:

determine whether the sensor data falls within the target threshold range;

if the sensor data falls within the target threshold range, determine a smoothing value according to the target threshold range;

control the target smoothing algorithm to de-jitter the sensor data using the smoothing value to obtain optimized sensor data.

In an optional implementation of the embodiments of the present application, the de-jitter unit is further configured to:

determine an average value according to the target threshold range, and use the average value as the smoothing value.

In an optional implementation of the embodiments of the present application, the de-jitter unit is further configured to:

if the sensor data is greater than the smoothing value, adjust the sensor data to a first value;

if the sensor data is less than or equal to the smoothing value, adjust the sensor data to a second value.

In an optional implementation of the embodiments of the present application, the mode determination module 310 is specifically configured to:

determine the screen sharing mode selected by the anchor terminal according to the recorded screen sharing mode identifier;

wherein the screen sharing mode identifier is used to represent the screen sharing mode selected by the anchor terminal.
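
A small sketch of the identifier lookup that this mode determination could amount to; the identifier values and the session-store interface are assumptions for illustration.

```python
# Hypothetical mapping from recorded identifiers to screen sharing modes.
SHARING_MODE_BY_ID = {
    0: "normal_screen_share",
    1: "vr_panorama",  # treated here as the target sharing mode
}

def determine_sharing_mode(session_store, anchor_id):
    """Look up the screen sharing mode selected by the anchor terminal.

    The identifier is assumed to have been recorded when the anchor terminal
    started screen sharing; session_store is a placeholder interface.
    """
    mode_id = session_store.get_sharing_mode_id(anchor_id)
    return SHARING_MODE_BY_ID.get(mode_id, "normal_screen_share")
```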

In an optional implementation of the embodiments of the present application, the sensor data is 6-degree-of-freedom (6DoF) data.
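
For orientation, 6DoF data of this kind consists of three translation components and three rotation components; one possible container, purely as an assumption about the layout, is sketched below.

```python
from dataclasses import dataclass

@dataclass
class SixDofSample:
    """One 6-degree-of-freedom sensor sample (field layout is an assumption)."""
    x: float      # translation along X
    y: float      # translation along Y
    z: float      # translation along Z
    pitch: float  # rotation about X
    yaw: float    # rotation about Y
    roll: float   # rotation about Z

    def components(self):
        return (self.x, self.y, self.z, self.pitch, self.yaw, self.roll)
```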

In the screen sharing device provided by the embodiments of the present application, when sensor data sent by the anchor terminal is received, the screen sharing mode selected by the anchor terminal is determined. If the selected screen sharing mode is the target sharing mode, the sensor data is de-jittered to obtain optimized sensor data, the display picture is rendered based on the optimized sensor data to obtain a target image picture, and the target image picture is then sent to the anchor terminal and the audience terminal so that both display it. By determining the screen sharing mode selected by the anchor terminal during screen sharing and de-jittering the sensor data when that mode is the target sharing mode, the subtle, high-frequency, unnatural jitter in the unstable sensor data is filtered out and stable sensor data is obtained. Rendering the display picture from this stable sensor data then yields a stable image picture, which reduces or even avoids the dizziness and other discomfort that image jitter causes to the human eye. In addition, when the audience terminal and/or the anchor terminal is an XR device capable of providing a virtual space, the present application not only reduces or eliminates the user's dizziness but also extends the time the user stays immersed in the virtual space and increases the user's rate of repeat use.

It should be understood that the device embodiments correspond to the foregoing method embodiments, and similar descriptions may refer to the method embodiments; to avoid repetition, they are not repeated here. Specifically, the device 300 shown in FIG. 7 can execute the method embodiment corresponding to FIG. 2, and the foregoing and other operations and/or functions of the modules in the device 300 are intended to implement the corresponding processes of the methods in FIG. 2; for brevity, they are not described again here.

The device 300 of the embodiments of the present application has been described above from the perspective of functional modules with reference to the accompanying drawings. It should be understood that these functional modules may be implemented in hardware, by instructions in software, or by a combination of hardware and software modules. Specifically, the steps of the method embodiments of the first aspect of the embodiments of the present application may be completed by integrated logic circuits of hardware in a processor and/or by instructions in software form; the steps of the method of the first aspect disclosed in the embodiments of the present application may be directly executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. Optionally, the software module may be located in a storage medium that is mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method embodiments of the first aspect in combination with its hardware.

FIG. 8 is a schematic block diagram of an electronic device provided in an embodiment of the present application. As shown in FIG. 8, the electronic device 400 may include:

a memory 410 and a processor 420, where the memory 410 is configured to store a computer program and transmit the program code to the processor 420. In other words, the processor 420 can call and run the computer program from the memory 410 to implement the screen sharing method in the embodiments of the present application.

For example, the processor 420 may be configured to execute the above screen sharing method embodiments according to the instructions in the computer program.

In some embodiments of the present application, the processor 420 may include, but is not limited to:

a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, and so on.

In some embodiments of the present application, the memory 410 includes, but is not limited to:

a volatile memory and/or a non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).

In some embodiments of the present application, the computer program may be divided into one or more modules, which are stored in the memory 410 and executed by the processor 420 to complete the screen sharing method provided by the present application. The one or more modules may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program in the electronic device.

As shown in FIG. 8, the electronic device 400 may further include:

a transceiver 430, which may be connected to the processor 420 or the memory 410.

The processor 420 may control the transceiver 430 to communicate with other devices; specifically, it may send information or data to other devices, or receive information or data sent by other devices. The transceiver 430 may include a transmitter and a receiver, and may further include one or more antennas.

It should be understood that the components of the electronic device are connected by a bus system, which includes not only a data bus but also a power bus, a control bus, and a status signal bus.

The present application also provides a computer storage medium on which a computer program is stored. When the computer program is executed by a computer, the computer is enabled to execute the methods of the above method embodiments.

An embodiment of the present application also provides a computer program product including program instructions. When the program instructions run on an electronic device, the electronic device is caused to execute the methods of the above method embodiments.

When implemented in software, the above may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center in a wired manner (for example, coaxial cable, optical fiber, digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, microwave). The computer-readable storage medium may be any usable medium that a computer can access, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)), and the like.

Those of ordinary skill in the art will appreciate that the modules and algorithm steps of the examples described in conjunction with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be considered to be beyond the scope of the present application.

In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for instance, the division into modules is only a division by logical function, and there may be other division methods in actual implementation, for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or modules, and may be electrical, mechanical or in other forms.

The modules described as separate components may or may not be physically separated, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. For example, the functional modules in the embodiments of the present application may be integrated into one processing module, each module may exist physically on its own, or two or more modules may be integrated into one module.

The above is only a specific implementation of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can easily think of changes or substitutions within the technical scope disclosed in the present application, and these should be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (11)

1. A screen sharing method, comprising:
in response to sensor data sent by an anchor terminal, determining a screen sharing mode selected by the anchor terminal;
if the screen sharing mode selected by the anchor terminal is a target sharing mode, performing de-jitter processing on the sensor data to obtain optimized sensor data;
rendering a display picture according to the optimized sensor data to obtain a target image picture;
and sending the target image picture to the anchor terminal and an audience terminal, so that the anchor terminal and the audience terminal display the target image picture.
2. The method of claim 1, wherein the performing de-jitter processing on the sensor data to obtain optimized sensor data comprises:
acquiring a target smoothing algorithm;
determining a target threshold range corresponding to the target smoothing algorithm;
and performing de-jitter processing on the sensor data according to the target smoothing algorithm and the target threshold range to obtain optimized sensor data.
3. The method according to claim 2, wherein the performing de-jitter processing on the sensor data according to the target smoothing algorithm and the target threshold range to obtain optimized sensor data comprises:
determining whether the sensor data is within the target threshold range;
if the sensor data is within the target threshold range, determining a smoothing value according to the target threshold range;
and controlling the target smoothing algorithm to perform de-jitter processing on the sensor data using the smoothing value to obtain optimized sensor data.
4. The method according to claim 3, wherein the determining a smoothing value according to the target threshold range comprises:
determining an average value according to the target threshold range, and using the average value as the smoothing value.
5. The method according to claim 3, wherein the controlling the target smoothing algorithm to perform de-jitter processing on the sensor data using the smoothing value to obtain optimized sensor data comprises:
if the sensor data is greater than the smoothing value, adjusting the sensor data to a first value;
and if the sensor data is less than or equal to the smoothing value, adjusting the sensor data to a second value.
6. The method of claim 1, wherein the determining the screen sharing mode selected by the anchor terminal comprises:
determining the screen sharing mode selected by the anchor terminal according to a recorded screen sharing mode identifier;
wherein the screen sharing mode identifier is used to represent the screen sharing mode selected by the anchor terminal.
7. The method of any one of claims 1-6, wherein the sensor data is 6-degree-of-freedom data.
8. A screen sharing apparatus, comprising:
a mode determination module, configured to determine, in response to sensor data sent by an anchor terminal, a screen sharing mode selected by the anchor terminal;
a de-jitter processing module, configured to perform de-jitter processing on the sensor data to obtain optimized sensor data if the screen sharing mode selected by the anchor terminal is a target sharing mode;
an image rendering module, configured to render a display picture according to the optimized sensor data to obtain a target image picture;
and an image distribution module, configured to send the target image picture to the anchor terminal and an audience terminal, so that the anchor terminal and the audience terminal display the target image picture.
9. An electronic device, comprising:
a processor and a memory, the memory being configured to store a computer program, and the processor being configured to call and run the computer program stored in the memory to perform the screen sharing method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program that causes a computer to execute the screen sharing method according to any one of claims 1 to 7.
11. A computer program product comprising program instructions which, when run on an electronic device, cause the electronic device to perform the screen sharing method of any of claims 1 to 7.
CN202211477797.5A 2022-11-23 2022-11-23 Screen sharing method, device, equipment and medium Pending CN118118717A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211477797.5A CN118118717A (en) 2022-11-23 2022-11-23 Screen sharing method, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211477797.5A CN118118717A (en) 2022-11-23 2022-11-23 Screen sharing method, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN118118717A true CN118118717A (en) 2024-05-31

Family

ID=91214196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211477797.5A Pending CN118118717A (en) 2022-11-23 2022-11-23 Screen sharing method, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN118118717A (en)

Similar Documents

Publication Publication Date Title
KR102582375B1 (en) Detection and display of mixed 2d/3d content
US10643394B2 (en) Augmented reality
US11620780B2 (en) Multiple device sensor input based avatar
CN109976690B (en) AR glasses remote interaction method and device and computer readable medium
EP3691280B1 (en) Video transmission method, server, vr playback terminal and computer-readable storage medium
US11302063B2 (en) 3D conversations in an artificial reality environment
CN110413108B (en) Processing method, device, system, electronic equipment and storage medium of virtual screen
US11709370B2 (en) Presentation of an enriched view of a physical setting
CN114651448B (en) Information processing system, information processing method and program
EP3671653A1 (en) Generating and signaling transition between panoramic images
CN117319790A (en) Photography methods, devices, equipment and media based on virtual reality space
CN110519247A (en) A kind of one-to-many virtual reality display method and device
CN115150555A (en) Video recording method, device, equipment and medium
CN106445121A (en) Virtual reality device and terminal interaction method and apparatus
WO2024193568A1 (en) Interaction method and apparatus, and device, medium and program
WO2024060959A1 (en) Method and apparatus for adjusting viewing picture in virtual environment, and storage medium and device
CN118118717A (en) Screen sharing method, device, equipment and medium
CN118349152A (en) Method, device, equipment, medium and program for resetting cover image of virtual object
US12101197B1 (en) Temporarily suspending spatial constraints
CN116212361B (en) Virtual object display method and device and head-mounted display device
US20250104371A1 (en) Augmented Call Spawn Configuration for Digital Human Representations in an Artificial Reality Environment
CN117115237A (en) Virtual reality position switching method, device, storage medium and equipment
CN119788795A (en) Video recording method, device, equipment and storage medium
CN118229934A (en) Virtual object display method, device, storage medium, equipment and program product
CN118161858A (en) Task adjustment method, device, storage medium and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination