
CN118433367A - Image processing method, device, medium, head mounted display device and program product - Google Patents

Image processing method, device, medium, head mounted display device and program product

Info

Publication number
CN118433367A
CN118433367A
Authority
CN
China
Prior art keywords
data
image
head
display device
mounted display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310118260.8A
Other languages
Chinese (zh)
Inventor
张秀志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202310118260.8A priority Critical patent/CN118433367A/en
Publication of CN118433367A publication Critical patent/CN118433367A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/122 Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/139 Format conversion, e.g. of frame-rate or size
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N 13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/366 Image reproducers using viewer tracking
    • H04N 13/383 Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application provides an image processing method, an image processing apparatus, a medium, a head-mounted display device and a program product. The method is applied to the head-mounted display device and includes the following steps: collecting camera data and inertial data; transmitting the camera data and the inertial data to a computing device; acquiring rendered-image encoded data, the rendered-image encoded data being obtained by the computing device determining a rendered image for the head-mounted display device according to the camera data and the inertial data and encoding the rendered image; acquiring a preset display frame rate, current inertial data, and display time information corresponding to the screen display image of the previously determined historical rendered image; and displaying a target screen display image corresponding to the rendered image according to the rendered-image encoded data, the preset display frame rate, the current inertial data, and the display time information. This reduces the amount of computation performed by the head-mounted display device, which in turn allows a smaller and lighter heat sink and battery and facilitates the miniaturization and weight reduction of the head-mounted display device.

Description

Image processing method, device, medium, head mounted display device and program product

Technical Field

The present invention relates to the field of computer technology, and more specifically, to an image processing method, apparatus, medium, device and program product.

Background

A head-mounted display (HMD), also called a headset, delivers optical signals to the eyes and can thereby produce virtual reality (VR), augmented reality (AR), mixed reality (MR) and other effects.

At present, miniaturization and weight reduction are key research directions for HMDs. However, because an HMD has to perform a large amount of computation, it places high demands on processor performance, heat dissipation and battery life. As a result, the heat sink and the battery occupy considerable structural space, which hinders making the HMD smaller and lighter.

Summary of the Invention

Embodiments of the present application provide an image processing method, apparatus, medium, head-mounted display device and program product that reduce the amount of computation performed by the head-mounted display device, thereby lowering its power consumption, allowing a smaller and lighter heat sink and battery, and facilitating the miniaturization and weight reduction of the device.

In a first aspect, an embodiment of the present application provides an image processing method applied to a head-mounted display device, the method comprising:

collecting camera data and inertial data;

sending the camera data and the inertial data to a computing device;

acquiring rendered-image encoded data, the rendered-image encoded data being obtained by the computing device determining a rendered image for the head-mounted display device according to the camera data and the inertial data and encoding the rendered image;

acquiring a preset display frame rate, current inertial data, and display time information corresponding to the screen display image of the previously determined historical rendered image; and

displaying a target screen display image corresponding to the rendered image according to the rendered-image encoded data, the preset display frame rate, the current inertial data, and the display time information.

In a second aspect, an embodiment of the present application provides an image processing method applied to a computing device, the method comprising:

acquiring camera data and inertial data of a head-mounted display device;

determining a rendered image for the head-mounted display device according to the camera data and the inertial data; and

encoding the rendered image to obtain rendered-image encoded data, and sending the rendered-image encoded data to the head-mounted display device, so that the head-mounted display device acquires a preset display frame rate, current inertial data, and display time information corresponding to the screen display image of the previously determined historical rendered image, and displays a target screen display image corresponding to the rendered image according to the rendered-image encoded data, the preset display frame rate, the current inertial data, and the display time information.

In a third aspect, an embodiment of the present application provides an image processing apparatus applied to a head-mounted display device, comprising:

an acquisition module, configured to collect camera data and inertial data;

a sending module, configured to send the camera data and the inertial data to a computing device;

a first acquisition module, configured to acquire rendered-image encoded data, the rendered-image encoded data being obtained by the computing device determining a rendered image for the head-mounted display device according to the camera data and the inertial data and encoding the rendered image;

a second acquisition module, configured to acquire a preset display frame rate, current inertial data, and display time information corresponding to the screen display image of the previously determined historical rendered image; and

a display module, configured to display a target screen display image corresponding to the rendered image according to the rendered-image encoded data, the preset display frame rate, the current inertial data, and the display time information.

In a fourth aspect, an embodiment of the present application provides an image processing apparatus applied to a computing device, comprising:

an acquisition module, configured to acquire camera data and inertial data of a head-mounted display device;

a determination module, configured to determine a rendered image for the head-mounted display device according to the camera data and the inertial data; and

an encoding module, configured to encode the rendered image to obtain rendered-image encoded data and send the rendered-image encoded data to the head-mounted display device, so that the head-mounted display device acquires a preset display frame rate, current inertial data, and display time information corresponding to the screen display image of the previously determined historical rendered image, and displays a target screen display image corresponding to the rendered image according to the rendered-image encoded data, the preset display frame rate, the current inertial data, and the display time information.

In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, the computer program being suitable for being loaded by a processor to execute the image processing method described in the first aspect or the image processing method described in the second aspect.

In a sixth aspect, an embodiment of the present application provides a head-mounted display device comprising a processor and a memory, the memory storing a computer program, and the processor being configured to execute the image processing method described in the first aspect by calling the computer program stored in the memory.

In a seventh aspect, an embodiment of the present application provides a computing device comprising a processor and a memory, the memory storing a computer program, and the processor being configured to execute the image processing method described in the second aspect by calling the computer program stored in the memory.

In an eighth aspect, an embodiment of the present application provides a computer program product comprising a computer program that, when executed by a processor, implements the image processing method described in the first aspect or the image processing method described in the second aspect.

Embodiments of the present application provide an image processing method, apparatus, medium, head-mounted display device and program product. The method is applied to a head-mounted display device and includes: collecting camera data and inertial data; sending the camera data and the inertial data to a computing device; acquiring rendered-image encoded data, the rendered-image encoded data being obtained by the computing device determining a rendered image for the head-mounted display device according to the camera data and the inertial data and encoding the rendered image; acquiring a preset display frame rate, current inertial data, and display time information corresponding to the screen display image of the previously determined historical rendered image; and displaying a target screen display image corresponding to the rendered image according to the rendered-image encoded data, the preset display frame rate, the current inertial data, and the display time information. In the present application, after the head-mounted display device has collected the camera data and the inertial data, it sends them to the computing device, which performs the tracking computation and the image rendering. The head-mounted display device only needs to obtain the rendered image, predict the target screen display image, and display it, without performing tracking computation or image rendering itself. This reduces the amount of computation and the power consumption of the head-mounted display device, which in turn allows a smaller and lighter heat sink and battery and facilitates the miniaturization and weight reduction of the device.

Brief Description of the Drawings

FIG. 1 is a schematic flowchart of an image processing method provided in an embodiment of the present application.

FIG. 2 is a schematic diagram of a first application scenario of the image processing method provided in an embodiment of the present application.

FIG. 3 is a schematic diagram of a second application scenario of the image processing method provided in an embodiment of the present application.

FIG. 4 is a schematic diagram of a third application scenario of the image processing method provided in an embodiment of the present application.

FIG. 5 is a schematic diagram of a fourth application scenario of the image processing method provided in an embodiment of the present application.

FIG. 6 is another schematic flowchart of the image processing method provided in an embodiment of the present application.

FIG. 7 is a schematic diagram of a fifth application scenario of the image processing method provided in an embodiment of the present application.

FIG. 8 is a schematic block diagram of an image processing apparatus provided in an embodiment of the present application.

FIG. 9 is a schematic block diagram of an image processing apparatus provided in an embodiment of the present application.

FIG. 10 is a schematic block diagram of a head-mounted display device provided in an embodiment of the present application.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present application are described below with reference to the drawings of those embodiments. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

Unless otherwise defined, the technical or scientific terms used in the present application have the ordinary meaning understood by persons of ordinary skill in the field to which this disclosure belongs. The terms "first", "second" and similar words used in this disclosure are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the present application described here can, for example, be implemented in an order other than the one illustrated or described here. In addition, the terms "comprising" and "having" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that comprises a series of steps or units is not necessarily limited to those steps or units expressly listed, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product or device.

First, some terms used in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.

Virtual reality (VR), as the name implies, combines the virtual with the real. In theory, VR technology is a computer simulation system that can create and let users experience a virtual world: a computer generates a simulated environment into which the user is immersed. VR technology takes data from real life, converts it through computer technology into electronic signals, and combines these with various output devices so that they become phenomena people can perceive. These phenomena may be real objects or substances invisible to the naked eye, represented by three-dimensional models. Because these phenomena are not observed directly but are a world simulated by computer technology, this is called virtual reality.

Augmented reality (AR) is a technology that skillfully merges virtual information with the real world. It makes extensive use of multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, sensing and other technical means to simulate computer-generated text, images, three-dimensional models, music, video and other virtual information and apply it to the real world, where the two kinds of information complement each other, thereby "augmenting" the real world.

Mixed reality (MR) is a further development of virtual reality technology. By presenting virtual scene information within real scenes, it builds an interactive feedback loop between the real world, the virtual world and the user, enhancing the realism of the user experience.

SLAM (simultaneous localization and mapping) is a technology commonly used in autonomous driving and augmented reality. It mainly studies how a device perceives and localizes itself in an unknown environment using various sensors.

An IMU (inertial measurement unit) is a sensor used mainly to detect and measure acceleration and rotational motion. An IMU may combine one or more of accelerometers, gyroscopes and magnetometers. Picture a Cartesian coordinate system with an x-axis, a y-axis and a z-axis: the IMU can measure linear motion along each axis as well as rotational motion around each axis.

6DoF tracking: six-degrees-of-freedom tracking. An object can move in three-dimensional space with six degrees of freedom (6DoF): (1) forward/backward, (2) up/down, (3) left/right, (4) yaw, (5) pitch and (6) roll. With a VR system that supports 6DoF, the user can move freely within a limited space and make full use of all six degrees of freedom: yaw, pitch, roll, forward/backward, up/down and left/right. This makes the visual field more realistic.

Frame rate is the frequency (rate), expressed in frames, at which consecutive bitmap images appear on a display.

With the rapid development of virtual reality technology, head-mounted display technology has also advanced rapidly; for example, tracking technologies such as headset tracking, eye tracking and facial tracking have improved greatly, and optics are also developing quickly. Miniaturization and weight reduction are the key directions for the next stage of development, meaning that headsets are required to be smaller and lighter. However, current headsets must perform tracking computations such as headset tracking, eye tracking, expression tracking and gesture tracking, as well as image rendering, and both the tracking algorithms and the rendering demand strong processor performance. This leads to high power consumption during use, which in turn requires a large battery and heat sink, making the headset large and heavy and hindering its miniaturization and weight reduction. To solve this problem, the embodiments of the present application provide an image processing method, apparatus, medium, head-mounted display device and program product in which the head-mounted display device is responsible for data acquisition while the computing device is responsible for tracking and rendering. This reduces the amount of computation and the power consumption of the head-mounted display device, thereby reducing the size and weight of the battery and heat sink, and in turn the size and weight of the headset, which facilitates its miniaturization and weight reduction.

Each aspect is described in detail below. It should be noted that the order in which the following embodiments are described is not intended to limit their order of preference.

Please refer to FIG. 1, which shows a schematic flowchart of the image processing method described in an embodiment of the present application. The image processing method can be applied to a head-mounted display device and mainly includes steps S101 to S105, described as follows.

Step S101: collect camera data and inertial data.

Step S102: send the camera data and the inertial data to a computing device.

The head-mounted display device may be a virtual reality, augmented reality or mixed reality headset, and the computing device may be a server, such as a local server or a cloud server, or a device such as a computer or a smartphone. The head-mounted display device and the computing device may be connected via a network. Referring to FIG. 2, the head-mounted display device may include SLAM tracking cameras, a mouth expression tracking camera, MR cameras and eye tracking cameras, where the SLAM tracking cameras capture images of the external environment, the mouth expression tracking camera captures images of the user's mouth expression, the MR cameras capture images of the external environment, and the eye tracking cameras capture images of the user's eye expression. The head-mounted display device may also include an IMU sensor used to collect motion data of the head-mounted display device worn by the user.

The motion data of the head-mounted display device may include its position data, which may include position information along the three rectangular coordinate axes X, Y and Z and the rotation angles Pitch, Yaw and Roll around those axes, where Pitch is the pitch angle of rotation around the X axis, Yaw is the yaw angle of rotation around the Y axis, and Roll is the roll angle of rotation around the Z axis. The position information along the X, Y and Z axes together with the angle information Pitch, Yaw and Roll is usually referred to as six-degrees-of-freedom information.
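
Purely for illustration, such a six-degrees-of-freedom motion sample could be represented on the headset side as in the following minimal sketch; the field names and units are assumptions of this sketch, not part of the disclosure.

    from dataclasses import dataclass

    @dataclass
    class PoseSample:
        """One six-degrees-of-freedom motion sample of the head-mounted display."""
        timestamp_us: int   # capture time in microseconds (assumed unit)
        x: float            # position along the X axis
        y: float            # position along the Y axis
        z: float            # position along the Z axis
        pitch: float        # rotation around the X axis, radians
        yaw: float          # rotation around the Y axis, radians
        roll: float         # rotation around the Z axis, radians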

In some embodiments, the head-mounted display device includes multiple cameras, and the method may further include: adjusting the exposure times of the multiple cameras to align their exposure center points; and determining the timestamp data of the camera data according to the exposure center points of the multiple cameras.

Please refer to FIG. 3. It is easy to understand that the ambient brightness differs for each camera, so the exposure time has to be adjusted during capture, and different cameras may have different exposure periods. To improve the accuracy of data alignment, the exposure center point of each camera can be taken as the basis for adding the system timestamp. SLAM does not care about the exposure time itself, but it needs a single, unified instant at which to process the data; that is, the instants at which the images are generated need to be aligned.
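
A minimal sketch of this exposure-center timestamping is shown below; it assumes each camera reports an exposure start time and an exposure duration, and the field names are illustrative only.

    def exposure_center_timestamp(exposure_start_us, exposure_time_us):
        """Use the midpoint of the exposure interval as the frame timestamp."""
        return exposure_start_us + exposure_time_us // 2

    # Cameras with different auto-exposure settings still get comparable timestamps
    # because the midpoint of each exposure is used for all of them.
    frames = [
        {"camera": "slam_0", "start_us": 1_000_000, "exposure_us": 8_000},
        {"camera": "eye_0",  "start_us": 1_001_000, "exposure_us": 2_000},
        {"camera": "mr_0",   "start_us":   999_500, "exposure_us": 12_000},
    ]
    for f in frames:
        f["timestamp_us"] = exposure_center_timestamp(f["start_us"], f["exposure_us"])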

Specifically, a timestamp is usually a character sequence that uniquely identifies a moment in time. Since the head-mounted display device collects camera data and inertial data continuously and in real time, each data item can be uniquely identified by a timestamp.

In this embodiment, the head-mounted display device includes an inertial sensor, and the method may further include: determining the timestamp of the inertial data according to the interrupt time of the inertial sensor.

When the IMU collects data, an interrupt is generated for each sample collected. The interrupt time of the IMU can be taken as the basis for adding the system timestamp, thereby improving tracking accuracy.

Specifically, as shown in FIG. 4, both the head-mounted display device and the computing device may include a wireless module (a first wireless module and a second wireless module, respectively), and the two are connected over a network through these wireless modules. The wireless module may be a 60 GHz Wi-Fi module, a UWB (ultra-wideband) module, a 5G module, or another module providing high-bandwidth, low-latency transmission.

5G is the fifth generation of mobile communication technology. To meet the demand for wireless data services that has grown since the deployment of 4G communication systems, efforts have been made to develop an improved 5G communication system. 5G offers faster transmission speeds, greater transmission capacity and extremely low latency, reducing the time needed for data transmission and therefore the overall delay. In the field of virtual reality, the latency needs to be as small as possible for a good user experience; otherwise symptoms such as dizziness may occur.

Specifically, as shown in FIG. 2, the head-mounted display device may further include a processor that, after obtaining the camera data collected by the cameras, composites and encodes the images. For example, the environment images captured by four SLAM tracking cameras can be composited into one environment image, the images captured by the two eye tracking cameras and the mouth expression tracking camera can be composited into one image, and the images captured by multiple MR cameras can be composited into one image. The composite images are then encoded to obtain camera image encoded data, and this encoded camera data is sent to the computing device together with the IMU sensor data. The composite image may be encoded into an H.264 stream, or encoded with Moving Picture Experts Group-2 (MPEG-2), the Audio Video coding Standard (AVS), or the like.
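
As a rough illustration of the compositing step, several same-sized tracking-camera frames can be tiled into a single image before it is handed to the video encoder; the grid layout, NumPy representation and frame sizes below are assumptions of this sketch rather than details taken from the embodiment.

    import numpy as np

    def tile_frames(frames, cols=2):
        """Tile same-sized single-channel frames into one composite image for encoding."""
        h, w = frames[0].shape
        rows = (len(frames) + cols - 1) // cols
        canvas = np.zeros((rows * h, cols * w), dtype=frames[0].dtype)
        for i, frame in enumerate(frames):
            r, c = divmod(i, cols)
            canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = frame
        return canvas

    # e.g. four 640x480 SLAM tracking images become one 960x1280 composite image,
    # which would then be fed to an H.264 (or MPEG-2 / AVS) encoder.
    slam_frames = [np.zeros((480, 640), dtype=np.uint8) for _ in range(4)]
    composite = tile_frames(slam_frames)    # shape (960, 1280)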

For example, referring to FIG. 4, the processor may include a SLAM image acquisition module, an eye expression image acquisition module, an MR image acquisition module, an inertial measurement module, a first image encoding module and a first image decoding module, where the SLAM image acquisition module processes the data collected by the SLAM tracking cameras, the eye expression image acquisition module processes the data collected by the eye tracking cameras, the MR image acquisition module processes the data collected by the MR cameras, the inertial measurement module processes the data acquired by the inertial sensor, the first image encoding module encodes images, and the first image decoding module decodes images. The processor may also include an optical module used to magnify different images to different distances. The optical module, which is equivalent to an optical lens, produces the difference between object distance and image distance.

In some embodiments, step S101 may mainly include: collecting camera data at a first preset acquisition frame rate of the cameras of the head-mounted display device; and collecting inertial data at a second preset acquisition frame rate, the second preset acquisition frame rate being greater than the first preset acquisition frame rate of the cameras of the head-mounted display device.

The second preset acquisition frame rate is much higher than the preset display frame rate of the head-mounted display device, and the preset display frame rate of the head-mounted display device is greater than the first preset acquisition frame rate of its cameras. For example, the second preset acquisition frame rate may be 1 kHz and the preset display frame rate may be 72 Hz. Thus, at a subsequent display instant, the current position of the head-mounted display device can be predicted from the preceding frames of inertial data. The higher the acquisition frame rate of the inertial data, the higher the accuracy of the position tracking and prediction of the head-mounted display device.
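
The following sketch illustrates why a high inertial sampling rate helps: the orientation of the headset at a display instant can be propagated from the last known orientation by integrating the gyroscope samples recorded since then. The small-angle Euler integration and the sample layout are simplifying assumptions, not the tracking algorithm of the embodiment.

    import numpy as np

    def predict_orientation(last_rpy, gyro_samples, target_time_s):
        """Propagate (roll, pitch, yaw) in radians from the last known orientation
        to target_time_s by integrating angular-rate samples (timestamp_s, rad/s vector)."""
        rpy = np.asarray(last_rpy, dtype=float).copy()
        prev_t = gyro_samples[0][0]
        for t, omega in gyro_samples[1:]:
            t = min(t, target_time_s)
            dt = t - prev_t
            if dt <= 0:
                break
            rpy += np.asarray(omega) * dt   # small-angle (Euler) integration
            prev_t = t
        return rpy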

It is easy to understand that current head-mounted display devices can support preset display frame rates of 72 Hz, 90 Hz and 120 Hz, whereas the capture frame rate of the cameras may be only 30 Hz, or at most 60 Hz. To meet the display requirements of the head-mounted display device it is not enough to render only 30 or 60 frames; therefore the inertial data can be collected at the second preset acquisition frame rate and sent to the computing device, which can then perform frame-interpolated rendering along the timeline based on the collected inertial data and camera data. For example, if the preset display frame rate of the head-mounted display device is 120 Hz, the first preset acquisition frame rate of the cameras is 60 Hz, and the second preset acquisition frame rate of the inertial data is 1 kHz, then 1000 inertial samples and 60 camera frames are collected per second. The computing device can then proceed in timeline order and perform interpolated rendering, that is, render 120 frames of images.
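
A rough sketch of such timeline-based frame interpolation is given below: for every display timestamp derived from the preset display frame rate, the most recent camera frame and the inertial samples available up to that instant are selected, so display frames that fall between camera captures reuse the latest image together with fresher pose data. The selection strategy shown is an assumption for illustration.

    def build_render_schedule(display_fps, camera_frames, imu_samples, duration_s):
        """camera_frames / imu_samples: (timestamp_s, data) tuples sorted by time.
        Returns, for each display timestamp, the camera frame and IMU samples to use."""
        schedule = []
        for i in range(int(duration_s * display_fps)):
            t = i / display_fps
            # most recent camera frame captured at or before t (None before the first)
            cam = next((f for f in reversed(camera_frames) if f[0] <= t), None)
            imu = [s for s in imu_samples if s[0] <= t]
            schedule.append((t, cam, imu))
        return schedule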

It is easy to understand that the interpolated rendering can also be performed by the processor of the head-mounted display device. The head-mounted display device can buffer the collected inertial data, and if the number of frames of the acquired rendered images does not match the frame rate of the head-mounted display device, it performs interpolated rendering in timeline order based on the buffered inertial data.

Step S103: acquire rendered-image encoded data, the rendered-image encoded data being obtained by the computing device determining a rendered image for the head-mounted display device according to the camera data and the inertial data and encoding the rendered image.

Specifically, the head-mounted display device sends the camera data and inertial data to the computing device. Based on the camera data and inertial data, the computing device can perform 6DoF tracking of the head-mounted display device, eye and expression tracking, gesture tracking and so on, render the display images for the left and right eyes in real time according to the computation results, and then encode the rendered images and send them to the head-mounted display device.

In some embodiments, the method may further include: if no rendered-image encoded data is acquired, displaying the target screen display image corresponding to the rendered image according to the encoded data of the previous frame, the inertial data corresponding to the encoded data of the previous frame, and the current inertial data.

The encoded data of the previous frame may be the rendered-image encoded data acquired the previous time.

It is easy to understand that, because the head-mounted display device depends on the computing device to send the rendered-image encoded data, situations such as wireless interference, a computing-device fault, insufficient rendering performance or a long rendering time may occur and cause frames to be dropped, in which case the head-mounted display device would display the same image as in the previous frame. However, if the user's head rotates, the light of an identical image frame falls on different positions of the user's retina, so the perceived picture appears to jitter. Therefore, the head-mounted display device can predict the target screen display image for that frame from the previously rendered image and the inertial data: it predicts the motion change of the head-mounted display device from the inertial data and then spatially shifts and warps the previously rendered image to obtain the target screen display image for that frame. This matches the motion of the head-mounted display device, reduces picture jitter, prevents the dizziness that jitter causes, and improves the user experience.
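
A minimal sketch of such a re-projection is shown below. It assumes a pure rotation between the pose at which the previous frame was rendered and the newly predicted pose, a known intrinsic matrix K, and the availability of OpenCV; translation is ignored, so this is only an approximation of the shift-and-warp described above.

    import cv2
    import numpy as np

    def reproject_previous_frame(prev_image, K, R_prev_to_now):
        """Warp the previously displayed frame to compensate for a head rotation.
        K: 3x3 intrinsic matrix; R_prev_to_now: 3x3 rotation from the previous pose
        to the newly predicted pose."""
        # For a pure rotation, pixels map through the homography H = K * R * K^-1.
        H = K @ R_prev_to_now @ np.linalg.inv(K)
        h, w = prev_image.shape[:2]
        return cv2.warpPerspective(prev_image, H, (w, h))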

Step S104: acquire a preset display frame rate, current inertial data, and display time information corresponding to the screen display image of the previously determined historical rendered image.

Step S105: display a target screen display image corresponding to the rendered image according to the rendered-image encoded data, the preset display frame rate, the current inertial data, and the display time information.

In step S105, displaying the target screen display image corresponding to the rendered image according to the rendered-image encoded data, the preset display frame rate, the current inertial data, and the display time information includes:

determining the rendered image and the rendering time information corresponding to the rendered image according to the rendered-image encoded data;

determining target display time information according to the preset display frame rate and the display time information; and

displaying the target screen display image corresponding to the rendered image according to the rendering time information corresponding to the rendered image, the target display time information, and the current inertial data.

In some optional embodiments, determining the target display time information according to the preset display frame rate and the display time information includes:

determining target time interval information based on the preset display frame rate, where the target time interval information is the reciprocal of the preset display frame rate; and

determining the sum of the display time information and the target time interval information as the target display time information.
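
In code form, this scheduling rule is simply the following (illustrative only):

    def next_display_time(last_display_time_s, display_fps):
        """Target display time = previous display time + 1 / preset display frame rate."""
        return last_display_time_s + 1.0 / display_fps

    # For example, at a 72 Hz preset display frame rate the next image is due
    # roughly 13.9 ms after the previous one was shown.
    t_next = next_display_time(0.0, 72.0)   # ~0.01389 s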

Specifically, after acquiring the rendered-image encoded data, the head-mounted display device can first decode it to obtain the rendered image and the rendering time information corresponding to the rendered image. The rendering time information indicates the rendering time of the rendered image and may be a rendering timestamp. It is easy to understand that there is a certain delay because the rendered-image encoded data is sent from the computing device to the head-mounted display device, and there is a further delay because the head-mounted display device does not put the decoded rendered image on screen immediately after receiving it. To compensate for this delay, in subsequent steps the head-mounted display device can determine the rendered image corresponding to the target display time information from the rendering time information, the target display time information and the measured current inertial data, and then display it through the display driver module.

It is easy to understand that, as shown in FIG. 5, the head-mounted display device sends the camera data and inertial data to the computing device at time T, and the computing device can predict the rendered image for time T from the point cloud database, the camera data and the inertial data. By the time the computing device has encoded the rendered image and sent it to the head-mounted display device, a delay has in fact accumulated and the time has reached T + Delay, at which point the position of the head-mounted display device may already have shifted. The inertial data acquired at this point is the inertial data at t = T + Delay, and the head-mounted display device predicts the image for time T + Delay from the target display time information and the data of its inertial measurement unit.

For example, as shown in FIG. 4, the processor may include an image prediction module. After the head-mounted display device receives the rendered-image encoded data, the first image decoding module obtains the rendered image and its rendering time information, and the image prediction module can then determine the target screen display image from the rendering time information, the current time information, the target display time information and the current inertial data. Specifically, the head-mounted display device collects data and sends it to the computing device for rendering, and the computing device then sends the rendered image back to the headset; there is a time difference in this process, with delays arising in the processor, the computing device, the wireless modules and so on. To reduce the effect of this delay, the head-mounted display device can determine the target screen display image based on the currently collected inertial data. For example, the delay may be 10 or 20 milliseconds, during which an action frame may be missing, so the target screen display image for that action frame needs to be predicted.

Specifically, the motion data of the head-mounted display device corresponding to the target display time information can be determined from the rendering time information, the current time information, the current inertial data and the target display time information; for example, the head-mounted display device may have rotated by a certain angle relative to the rendering instant. The rendered image is adjusted according to the predicted motion data corresponding to the target display time information to obtain the target screen display image, which is then presented on the display screen. All of the technical solutions above can be combined in any way to form optional embodiments of the present application and are not described one by one here.

An embodiment of the present application provides an image processing method applied to a head-mounted display device, including: collecting camera data and inertial data; sending the camera data and the inertial data to a computing device; acquiring rendered-image encoded data, the rendered-image encoded data being obtained by the computing device determining a rendered image for the head-mounted display device according to the camera data and the inertial data and encoding the rendered image; acquiring a preset display frame rate, current inertial data, and display time information corresponding to the screen display image of the previously determined historical rendered image; and displaying a target screen display image corresponding to the rendered image according to the rendered-image encoded data, the preset display frame rate, the current inertial data, and the display time information. In the present application, after the head-mounted display device has collected the camera data and the inertial data, it sends them to the computing device, which performs the tracking computation and the image rendering. The head-mounted display device only needs to obtain the rendered image, predict the target screen display image, and display it, without performing tracking computation or image rendering itself. This reduces the amount of computation and the power consumption of the head-mounted display device, which in turn allows a smaller and lighter heat sink and battery and facilitates the miniaturization and weight reduction of the device.

请参阅图6,图6示出了本申请实施例所描述的图像处理方法的流程示意图,该图像处理方法可以应用于计算装置,该方法主要包括步骤S201至步骤S203,说明如下:Please refer to FIG. 6 , which shows a schematic flow chart of an image processing method described in an embodiment of the present application. The image processing method can be applied to a computing device. The method mainly includes steps S201 to S203, which are described as follows:

步骤S201,获取头戴显示设备的摄像头数据和惯性数据。Step S201, obtaining camera data and inertial data of a head mounted display device.

其中,计算装置可以为本地服务器或者云端服务器,也可以为计算机或者智能手机,头戴显示设备可以为虚拟现实、增强现实或混合现实的头戴显示设备,头戴显示设备和计算装置可以通过网络连接。请参阅图2,该头戴显示设备可以包括SLAM追踪摄像头、嘴巴表情追踪摄像头、MR摄像头以及眼球追踪摄像头,其中,摄像头数据可以为通过SLAM追踪摄像头采集的外部环境图像的编码数据,通过嘴巴表情追踪摄像头采集的使用者嘴部表情图像的编码数据,通过MR摄像头采集的外部环境图像的编码数据,以及通过眼球追踪摄像头采集的使用者眼部表情图像的编码数据。该头戴显示设备还可以包括IMU传感器,惯性数据可以为IMU传感器采集的使用者的头戴显示设备动作数据。The computing device may be a local server or a cloud server, or a computer or a smart phone, the head mounted display device may be a head mounted display device of virtual reality, augmented reality or mixed reality, and the head mounted display device and the computing device may be connected via a network. Referring to FIG2 , the head mounted display device may include a SLAM tracking camera, a mouth expression tracking camera, an MR camera and an eye tracking camera, wherein the camera data may be the encoded data of the external environment image collected by the SLAM tracking camera, the encoded data of the user's mouth expression image collected by the mouth expression tracking camera, the encoded data of the external environment image collected by the MR camera, and the encoded data of the user's eye expression image collected by the eye tracking camera. The head mounted display device may also include an IMU sensor, and the inertial data may be the head mounted display device motion data of the user collected by the IMU sensor.

步骤S202,根据摄像头数据和惯性数据确定头戴显示设备的渲染图像。Step S202: determining a rendered image of a head mounted display device according to the camera data and the inertial data.

其中,计算装置可以对摄像头数据进行解码,得到SLAM追踪摄像头采集的外部环境图像、嘴巴表情追踪摄像头采集的使用者嘴部表情图像、MR摄像头采集的外部环境图像、以及眼球追踪摄像头采集的使用者眼部表情图像。计算装置可以将SLAM追踪摄像头采集的多张外部环境图像进行合成为一张全景环境图像,对MR摄像头采集多张外部环境图像合成为一张全景环境图像,将嘴部表情图像和两张眼球表情图像合成为一张表情图像。The computing device may decode the camera data to obtain an external environment image captured by the SLAM tracking camera, a user's mouth expression image captured by the mouth expression tracking camera, an external environment image captured by the MR camera, and an eye expression image of the user captured by the eye tracking camera. The computing device may synthesize multiple external environment images captured by the SLAM tracking camera into a panoramic environment image, synthesize multiple external environment images captured by the MR camera into a panoramic environment image, and synthesize the mouth expression image and two eye expression images into one expression image.

具体地,计算装置可以根据摄像头数据、惯性数据以及点云数据库实现头戴显示设备6DoF追踪,眼球及表情追踪和手势追踪等,然后,根据追踪结果对上述全景环境图像进行渲染,从而生成渲染图像。其中,渲染(Render)是指将属性和方法加入到对应的组件,然后将组件加入对应的容器的过程。例如,在PS(AdobePhotoshop)中,渲染是指将颜色,尺寸等属性加入到当前画布的过程,其中,可以把画布看成为容器,里面有很多小组件,每个组件对应有自己的属性。其中,点云数据库(pointclouddata)相当于空间环境中所有3D坐标点的集合。点云数据是指在一个三维坐标系统中的一组向量的集合。Specifically, the computing device can implement 6DoF tracking of the head-mounted display device, eye and expression tracking, gesture tracking, etc. according to the camera data, inertial data, and the point cloud database, and then render the above-mentioned panoramic environment image according to the tracking results to generate a rendered image. Among them, rendering refers to the process of adding properties and methods to corresponding components, and then adding components to corresponding containers. For example, in PS (Adobe Photoshop), rendering refers to the process of adding attributes such as color and size to the current canvas, in which the canvas can be regarded as a container with many small components, and each component has its own corresponding attributes. Among them, the point cloud database (pointclouddata) is equivalent to a collection of all 3D coordinate points in the spatial environment. Point cloud data refers to a collection of a set of vectors in a three-dimensional coordinate system.

其中,计算装置可以根据追踪结果分别渲染左眼视角和右眼视角对应的渲染图像。渲染顺序可以为先渲染左眼图像,后渲染右眼图像,且采用相同的方式进行渲染。The computing device may render the rendered images corresponding to the left eye perspective and the right eye perspective respectively according to the tracking result. The rendering order may be to render the left eye image first and then the right eye image, and the rendering may be performed in the same manner.

步骤S203,对渲染图像进行编码,得到渲染图像编码数据,并将渲染图像编码数据发送至头戴显示设备,以使头戴显示设备获取预设显示帧率、当前惯性数据,以及前一次确定出的历史渲染图像对应的屏幕显示图像对应的显示时间信息,并根据所述渲染图像编码数据、所述预设显示帧率、所述当前惯性数据,以及所述显示时间信息显示所述渲染图像对应的目标屏幕显示图像。Step S203, encode the rendered image to obtain rendered image encoded data, and send the rendered image encoded data to a head-mounted display device, so that the head-mounted display device obtains a preset display frame rate, current inertia data, and display time information corresponding to a screen display image corresponding to a previously determined historical rendered image, and displays a target screen display image corresponding to the rendered image according to the rendered image encoded data, the preset display frame rate, the current inertia data, and the display time information.

其中,计算装置可以对左眼渲染图像以及右眼渲染图像进行编码,发送至头戴显示设备,头戴显示设备根据渲染图像的渲染时间信息以及头戴显示设备测量出的当前惯性数据和目标显示时间信息确定出目标屏幕显示图像,然后头戴显示设备通过显示驱动显示目标屏幕显示图像。Among them, the computing device can encode the left-eye rendered image and the right-eye rendered image, and send them to the head-mounted display device. The head-mounted display device determines the target screen display image according to the rendering time information of the rendered image and the current inertial data and target display time information measured by the head-mounted display device, and then the head-mounted display device displays the target screen display image through the display driver.

In this embodiment, step S202 may specifically include: obtaining the preset display frame rate of the head-mounted display device and determining the target number of rendering frames according to that preset display frame rate; and determining the rendered images of the head-mounted display device according to the camera data, the inertial data, and the target number of rendering frames.

It is easy to understand that current head-mounted display devices can support three preset display frame rates of 72 Hz, 90 Hz, and 120 Hz, whereas the capture frame rate of the cameras may be only 30 Hz, or at most 60 Hz. To achieve the desired display quality it is not sufficient to render only 30 or 60 frames. Therefore, the IMU collects inertial data at a second preset acquisition frame rate that is much higher than the preset display frame rate of the head-mounted display device and sends it to the computing device, and the computing device can perform frame-interpolated rendering along the time axis based on the collected inertial data and camera data. For example, if the preset display frame rate of the head-mounted display device is 120 Hz, the capture frame rate of the cameras is 60 Hz, and the second preset acquisition frame rate is 1 kHz, the computing device receives inertial data corresponding to 1000 frames and camera data corresponding to 60 image frames per second. In this case, for any moment on the time axis at which no camera data is available, the rendered image at that moment can be predicted from the camera data of the preceding frame and the multiple frames of inertial data collected before that moment. Specifically, the computing device can perform frame-interpolated rendering in time-axis order in the above manner, i.e., render 120 or more frames.
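
A small scheduling sketch of this time-axis interpolation under the stated 120 Hz / 60 Hz / 1 kHz example: for each render timestamp it picks the newest available camera frame and counts the IMU samples collected since that frame, which is the data the text says drives the pose prediction. The data structures and function names are assumptions.

```python
from bisect import bisect_right

def plan_render_inputs(render_times, camera_times, imu_times):
    """For each render timestamp, pick the newest camera frame at or before it and
    count the IMU samples recorded since that frame (they drive the pose prediction)."""
    plan = []
    for t in render_times:
        idx = bisect_right(camera_times, t) - 1            # newest camera frame <= t
        cam_t = camera_times[idx] if idx >= 0 else None
        floor_t = cam_t if cam_t is not None else 0.0
        imu_since = [ti for ti in imu_times if floor_t < ti <= t]
        plan.append({"render_t": round(t, 4), "camera_t": cam_t,
                     "imu_samples": len(imu_since)})
    return plan


if __name__ == "__main__":
    display_hz, camera_hz, imu_hz, horizon_s = 120, 60, 1000, 0.05
    render_times = [i / display_hz for i in range(int(horizon_s * display_hz))]
    camera_times = [i / camera_hz for i in range(int(horizon_s * camera_hz))]
    imu_times = [i / imu_hz for i in range(int(horizon_s * imu_hz))]
    for entry in plan_render_inputs(render_times, camera_times, imu_times):
        print(entry)   # render slots without a fresh camera frame rely on IMU samples
```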

Referring to FIG. 7, the computing device can also connect to multiple sets of head-mounted display devices through the wireless module, enabling multiple head-mounted display devices to be tracked and used in the same environmental coordinate system. For example, when multiple users play a game together in the same environment, multiple head-mounted display devices can be connected to the computing device at the same time, each collecting camera data and inertial data and sending them to the computing device. The computing device can then track the multiple head-mounted display devices according to their respective camera data and inertial data and the point cloud database, thereby keeping the multiple head-mounted devices synchronized.
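
A bookkeeping sketch of tracking several headsets against one shared map, assuming each device's pose is solved elsewhere and handed in through a `solve_pose` callback. The class name, the Nx3 point-cloud layout, and the relative-pose helper are illustrative assumptions.

```python
import numpy as np

class MultiHeadsetTracker:
    """Track several headsets against one shared point cloud / coordinate system."""

    def __init__(self, shared_point_cloud: np.ndarray):
        self.map_points = shared_point_cloud     # Nx3 points in the shared world frame
        self.poses = {}                          # device_id -> 4x4 world_from_head

    def update(self, device_id: str, camera_data, imu_data, solve_pose):
        # solve_pose is an external callback that runs the actual tracking.
        pose = solve_pose(camera_data, imu_data, self.map_points)
        self.poses[device_id] = pose
        return pose

    def relative_pose(self, a: str, b: str) -> np.ndarray:
        """Pose of headset b expressed in headset a's frame (both share the map)."""
        return np.linalg.inv(self.poses[a]) @ self.poses[b]
```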

All of the above technical solutions can be combined in any manner to form optional embodiments of the present application, and they are not described one by one here.

An embodiment of the present application provides an image processing method applied to a computing device, including: obtaining camera data and inertial data of a head-mounted display device; determining rendered images of the head-mounted display device according to the camera data and the inertial data; encoding the rendered images to obtain rendered-image coded data, and sending the rendered-image coded data to the head-mounted display device, so that the head-mounted display device obtains the preset display frame rate, the current inertial data, and the display time information of the screen display image corresponding to the previously determined historical rendered image, and displays the target screen display image corresponding to the rendered image according to the rendered-image coded data, the preset display frame rate, the current inertial data, and the display time information. In the present application, after the head-mounted display device collects the camera data and inertial data, it sends them to the computing device, which performs the tracking computation and image rendering. The head-mounted display device only needs to predict and display the target screen display image after obtaining the rendered image, without performing tracking computation or image rendering itself. This reduces the computational load and power consumption of the head-mounted display device, which in turn reduces the size and weight of the heat sink and the battery and facilitates miniaturization and weight reduction of the head-mounted display device.

The method embodiments of the present application are described in detail above. The apparatus embodiments of the present application are described in detail below with reference to FIG. 8. It should be understood that the apparatus embodiments correspond to the method embodiments, and similar descriptions may refer to the method embodiments.

FIG. 8 is a schematic structural diagram of an image processing apparatus 10 according to an embodiment of the present application. As shown in FIG. 8, the image processing apparatus 10 may include:

an acquisition module 11, configured to collect camera data and inertial data;

a sending module 12, configured to send the camera data and the inertial data to a computing device;

a first obtaining module 13, configured to obtain rendered-image coded data, the rendered-image coded data being obtained by the computing device determining a rendered image of the head-mounted display device according to the camera data and the inertial data and encoding the rendered image;

a second obtaining module 14, configured to obtain a preset display frame rate, current inertial data, and display time information of the screen display image corresponding to the previously determined historical rendered image; and

a display module 15, configured to display the target screen display image corresponding to the rendered image according to the rendered-image coded data, the preset display frame rate, the current inertial data, and the display time information.

Optionally, the display module 15 may further be configured to: if the rendered-image coded data is not obtained, display the target screen display image corresponding to the rendered image according to the previous frame's image coded data, the inertial data corresponding to the previous frame's image coded data, and the current inertial data.

Optionally, the head-mounted display device includes multiple cameras, and the image processing apparatus 10 may include a third determining module configured to: adjust the exposure times of the multiple cameras to align their exposure center points; and determine the timestamp data of the camera data according to the exposure center points of the multiple cameras.
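
For illustration, the following sketch aligns the exposure centers of several cameras with different exposure times and derives the shared frame timestamp from the common center point; the camera names and exposure values are made up for the example.

```python
def align_exposures(exposures_s: dict, target_center_s: float) -> dict:
    """Shift each camera's exposure start so that all exposure centers coincide.

    exposures_s maps a camera name to its exposure time in seconds; the returned
    dict gives the start time each camera should use so that the shared frame
    timestamp equals target_center_s.
    """
    return {name: target_center_s - exp / 2.0 for name, exp in exposures_s.items()}


def exposure_center(start_s: float, exposure_s: float) -> float:
    """Mid-point of one camera's exposure window (used as the frame timestamp)."""
    return start_s + exposure_s / 2.0


if __name__ == "__main__":
    cams = {"slam": 0.008, "mr": 0.016, "eye": 0.004}      # exposure times in seconds
    starts = align_exposures(cams, target_center_s=1.000)
    print({name: exposure_center(s, cams[name]) for name, s in starts.items()})
    # every center lands on t = 1.000, which becomes the camera data timestamp
```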

Optionally, the head-mounted display device includes an inertial sensor, and the image processing apparatus 10 may further be configured to determine the timestamp of the inertial data according to the interrupt time of the inertial sensor.

Optionally, the acquisition module 11 may be configured to: collect the camera data according to a first preset acquisition frame rate of the cameras of the head-mounted display device; and collect the inertial data according to a second preset acquisition frame rate.

It should be noted that the functions of the modules in the image processing apparatus 10 in the embodiments of the present application may correspond to the specific implementations in the above method embodiments, and are not repeated here.

Each module in the above image processing apparatus 10 may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in or independent of the processor of the head-mounted display device in hardware form, or stored in the memory of the head-mounted display device in software form, so that the processor can invoke and execute the operations corresponding to each module.

In the image processing apparatus 10 provided by the embodiments of the present application, the acquisition module 11 collects camera data and inertial data, and the sending module 12 sends them to the computing device. The first obtaining module 13 then obtains the rendered-image coded data, which the computing device produces by determining the rendered image of the head-mounted display device according to the camera data and inertial data and encoding that rendered image. The second obtaining module 14 obtains the preset display frame rate, the current inertial data, and the display time information of the screen display image corresponding to the previously determined historical rendered image, and the display module 15 displays the target screen display image corresponding to the rendered image according to the rendered-image coded data, the preset display frame rate, the current inertial data, and the display time information. This reduces the computational load and power consumption of the head-mounted display device, which in turn reduces the size and weight of the heat sink and the battery and facilitates miniaturization and weight reduction of the head-mounted display device.
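
Purely as a reading aid, the sketch below mirrors modules 11 to 15 as methods of one headset-side class; the `cameras`, `imu`, `link`, `decoder`, and `display` interfaces are placeholders for hardware and transport layers the patent does not detail, and the reprojection step is left as a stub.

```python
class HeadsetImageProcessor:
    """Headset-side flow mirroring modules 11-15 (all interfaces are placeholders)."""

    def __init__(self, cameras, imu, link, decoder, display, display_hz: int = 90):
        self.cameras, self.imu = cameras, imu
        self.link, self.decoder, self.display = link, decoder, display
        self.display_hz = display_hz          # preset display frame rate
        self.last_display_time = None         # display time info of the previous frame

    def run_once(self):
        camera_data = self.cameras.capture()            # module 11: collect camera data
        imu_data = self.imu.sample()                    #            and inertial data
        self.link.send(camera_data, imu_data)           # module 12: send to computing device
        encoded = self.link.receive_rendered()          # module 13: rendered-image coded data
        current_imu = self.imu.sample()                 # module 14: current inertial data
        frame = self.decoder.decode(encoded)
        target = self.predict_target_frame(frame, current_imu)   # module 15
        self.last_display_time = self.display.show(target)

    def predict_target_frame(self, frame, current_imu):
        # Stub for the reprojection driven by the preset display frame rate,
        # the current inertial data and the last display time.
        return frame
```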

FIG. 9 is a schematic structural diagram of an image processing apparatus 20 according to an embodiment of the present application. As shown in FIG. 9, the image processing apparatus 20 is applied to a computing device and may include:

an obtaining module 21, configured to obtain camera data and inertial data of a head-mounted display device;

a determining module 22, configured to determine a rendered image of the head-mounted display device according to the camera data and the inertial data; and

an encoding module 23, configured to encode the rendered image to obtain rendered-image coded data and send the rendered-image coded data to the head-mounted display device, so that the head-mounted display device obtains the preset display frame rate, the current inertial data, and the display time information of the screen display image corresponding to the previously determined historical rendered image, and displays the target screen display image corresponding to the rendered image according to the rendered-image coded data, the preset display frame rate, the current inertial data, and the display time information.

Optionally, the determining module 22 may be configured to: obtain the preset display frame rate of the head-mounted display device and determine the target number of rendering frames according to that preset display frame rate; and determine the rendered image of the head-mounted display device according to the camera data, the inertial data, and the target number of rendering frames.
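
A one-function sketch of how the target number of rendering frames could follow from the preset display frame rate, consistent with the 120 Hz / 60 Hz example above; the exact rule is an assumption.

```python
import math

def target_render_frames(display_hz: int, camera_hz: int, window_s: float = 1.0) -> int:
    """Frames to render over a window so the panel never starves: at least one
    rendered frame per display refresh, even when the camera runs slower."""
    return max(math.ceil(display_hz * window_s), math.ceil(camera_hz * window_s))


print(target_render_frames(display_hz=120, camera_hz=60))   # 120 frames per second
```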

It should be noted that the functions of the modules in the image processing apparatus 20 in the embodiments of the present application may correspond to the specific implementations in the above method embodiments, and are not repeated here.

Each module in the above image processing apparatus 20 may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in or independent of the processor of the computing device in hardware form, or stored in the memory of the computing device in software form, so that the processor can invoke and execute the operations corresponding to each module.

In the image processing apparatus 20 provided by the embodiments of the present application, the obtaining module 21 obtains the camera data and inertial data of the head-mounted display device, the determining module 22 determines the rendered image of the head-mounted display device according to the camera data and the inertial data, and the encoding module 23 encodes the rendered image to obtain rendered-image coded data and sends it to the head-mounted display device, so that the head-mounted display device obtains the preset display frame rate, the current inertial data, and the display time information of the screen display image corresponding to the previously determined historical rendered image, and displays the target screen display image corresponding to the rendered image according to the rendered-image coded data, the preset display frame rate, the current inertial data, and the display time information. This reduces the computational load and power consumption of the head-mounted display device, which in turn reduces the size and weight of the heat sink and the battery and facilitates miniaturization and weight reduction of the head-mounted display device.

Referring to FIG. 10, FIG. 10 shows a schematic block diagram of a head-mounted display device provided by an embodiment of the present application. The head-mounted display device 30 shown in FIG. 10 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.

The head-mounted display device 30 may include: a processor 31, a memory 32, an input apparatus 33, an output apparatus 34, a communication apparatus 35, a communication bus 36, and an input/output (I/O) interface 37. The processor 31, the memory 32, and the I/O interface 37 communicate with one another via the communication bus 36. In general, the following apparatuses may be connected to the I/O interface 37: the input apparatus 33, including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; the output apparatus 34, including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; and the communication apparatus 35. The communication apparatus 35 may allow the head-mounted display device 30 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 10 shows a head-mounted display device 30 with various apparatuses, it should be understood that not all of the illustrated apparatuses are required to be implemented or provided; more or fewer apparatuses may be implemented or provided instead.

Specifically, the memory 32 may be used to store software programs and modules, and the processor 31 runs the software programs and modules stored in the memory 32, for example, the software programs for the corresponding operations in the foregoing method embodiments.

In some embodiments, the processor 31 may invoke the software programs and modules stored in the memory 32 to perform the following operations: collecting camera data and inertial data; sending the camera data and the inertial data to a computing device; obtaining rendered-image coded data, the rendered-image coded data being obtained by the computing device determining a rendered image of the head-mounted display device according to the camera data and the inertial data and encoding the rendered image; obtaining a preset display frame rate, current inertial data, and display time information of the screen display image corresponding to the previously determined historical rendered image; and displaying the target screen display image corresponding to the rendered image according to the rendered-image coded data, the preset display frame rate, the current inertial data, and the display time information.

Specifically, according to the embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present application includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 35, or installed from the memory 32. When the computer program is executed by the processor 31, the above functions defined in the foregoing method embodiments of the present application are performed.

It should be noted that the division of the apparatuses and terminals in FIG. 10 is merely a division of logical functions; in actual implementation they may be fully or partially integrated into one physical entity, or physically separated. These modules may all be implemented as software invoked by a processing element, or all in hardware; alternatively, some modules may be implemented as software invoked by a processing element and others in hardware.

An embodiment of the present disclosure further provides a computing device, which includes a processor and a memory, the memory storing a computer program. By invoking the computer program stored in the memory, the processor can perform the following operations: obtaining camera data and inertial data of a head-mounted display device; determining a rendered image of the head-mounted display device according to the camera data and the inertial data; and encoding the rendered image to obtain rendered-image coded data and sending the rendered-image coded data to the head-mounted display device, so that the head-mounted display device obtains the preset display frame rate, the current inertial data, and the display time information of the screen display image corresponding to the previously determined historical rendered image, and displays the target screen display image corresponding to the rendered image according to the rendered-image coded data, the preset display frame rate, the current inertial data, and the display time information.

An embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program. When the computer program is run by a processor, the steps of the image processing method described in the above method embodiments are executed. The storage medium may be a volatile or non-volatile computer-readable storage medium.

An embodiment of the present disclosure further provides a computer program product carrying program code, the instructions contained in the program code being executable to perform the steps of the image processing method described in the above method embodiments; see the above method embodiments for details, which are not repeated here.

The above computer program product may be implemented in hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (SDK).

It should be understood that the processor in the embodiments of the present application may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above method embodiments may be completed by integrated logic circuits of hardware in the processor or by instructions in the form of software. The above processor may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application. The steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.

It can be understood that the memory in the embodiments of the present application may be a volatile memory or a non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which is used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.

It should be understood that the above memories are described by way of example and not limitation; for example, the memory in the embodiments of the present application may also be static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), direct Rambus RAM (DR RAM), and the like. That is, the memory in the embodiments of the present application is intended to include, but is not limited to, these and any other suitable types of memory.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.

Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems and apparatuses described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. In the several embodiments provided in the present disclosure, it should be understood that the disclosed apparatuses and methods may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a division of logical functions, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.

In addition, each functional unit in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.

If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a processor-executable non-volatile computer-readable storage medium. Based on this understanding, the technical solutions of the present disclosure in essence, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing an electronic device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that a person skilled in the art can readily conceive within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (13)

1. An image processing method, applied to a head-mounted display device, comprising:
collecting camera data and inertial data;
transmitting the camera data and the inertial data to a computing device;
acquiring rendering image coding data, wherein the rendering image coding data is obtained by determining a rendering image of the head-mounted display device according to the camera data and the inertia data by the computing device and coding the rendering image;
acquiring a preset display frame rate, current inertial data, and display time information corresponding to a screen display image corresponding to a previously determined historical rendering image; and
displaying a target screen display image corresponding to the rendering image according to the rendering image coding data, the preset display frame rate, the current inertia data, and the display time information.
2. The image processing method according to claim 1, wherein after transmitting the camera data and the inertial data to a computing device, the method further comprises:
if the rendering image coding data is not acquired, displaying a target screen display image corresponding to the rendering image according to previous frame image coding data, inertia data corresponding to the previous frame image coding data, and the current inertia data.
3. The image processing method of claim 1, wherein the head mounted display device comprises a plurality of cameras, the method further comprising:
adjusting exposure time of the cameras so as to align exposure center points of the cameras;
and determining the timestamp data of the camera data according to the exposure center points of the cameras.
4. The image processing method of claim 3, wherein the head mounted display device includes an inertial sensor, the method further comprising:
determining the time stamp of the inertial data according to the interruption time of the inertial sensor.
5. The image processing method according to claim 1, wherein the acquiring camera data and inertial data includes:
acquiring the camera data according to a first preset acquisition frame rate of a camera of the head-mounted display device;
and acquiring the inertial data according to a second preset acquisition frame rate, wherein the second preset acquisition frame rate is larger than the first preset acquisition frame rate.
6. An image processing method, applied to a computing device, the method comprising:
acquiring camera data and inertial data of the head-mounted display device;
determining a rendered image of the head-mounted display device according to the camera data and the inertial data;
encoding the rendering image to obtain rendering image encoding data, and sending the rendering image encoding data to the head-mounted display device, so that the head-mounted display device obtains a preset display frame rate, current inertia data, and display time information corresponding to a screen display image corresponding to a previously determined historical rendering image, and displays a target screen display image corresponding to the rendering image according to the rendering image encoding data, the preset display frame rate, the current inertia data, and the display time information.
7. The image processing method of claim 6, wherein the determining a rendered image of the head mounted display device from the camera data and inertial data comprises:
acquiring a preset display frame rate of the head-mounted display device, and determining a target rendering frame number according to the preset display frame rate of the head-mounted display device;
and determining a rendering image of the head-mounted display device according to the camera data, the inertia data and the target rendering frame number.
8. An image processing apparatus, characterized by being applied to a head-mounted display device, comprising:
the acquisition module is used for acquiring camera data and inertial data;
A transmitting module for transmitting the camera data and the inertial data to a computing device;
The first acquisition module is used for acquiring rendering image coding data, wherein the rendering image coding data is obtained by the computing device through coding a rendering image of the head-mounted display device determined according to the camera data and the inertia data;
the second acquisition module is used for acquiring preset display frame rate, current inertial data and display time information corresponding to a screen display image corresponding to a previously determined historical rendering image;
and the display module is used for displaying a target screen display image corresponding to the rendering image according to the rendering image coding data, the preset display frame rate, the current inertia data and the display time information.
9. An image processing apparatus, characterized by being applied to a computing apparatus, comprising:
The acquisition module is used for acquiring camera data and inertial data of the head-mounted display device;
The determining module is used for determining a rendering image of the head-mounted display device according to the camera data and the inertia data;
The encoding module is used for encoding the rendering image to obtain rendering image encoding data, and sending the rendering image encoding data to the head-mounted display device, so that the head-mounted display device obtains preset display frame rate, current inertia data and display time information corresponding to a screen display image corresponding to a previously determined historical rendering image, and displays a target screen display image corresponding to the rendering image according to the rendering image encoding data, the preset display frame rate, the current inertia data and the display time information.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, which is adapted to be loaded by a processor for performing the image processing method according to any one of claims 1-5 or for performing the image processing method according to any one of claims 6-7.
11. A head mounted display device comprising a processor and a memory, the memory having stored therein a computer program, the processor being operable to perform the image processing method of any of claims 1-5 by invoking the computer program stored in the memory.
12. A computing device comprising a processor and a memory, the memory having stored therein a computer program for executing the image processing method according to any one of claims 6 to 7 by calling the computer program stored in the memory.
13. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the image processing method of any one of claims 1-5 or the image processing method of any one of claims 6-7.

