CN110321005A - Method, apparatus, AR device and storage medium for improving the display effect of virtual objects on an AR device - Google Patents
Method, apparatus, AR device and storage medium for improving the display effect of virtual objects on an AR device
- Publication number
- CN110321005A (application CN201910517424.8A)
- Authority
- CN
- China
- Prior art keywords
- coordinate system
- environment image
- true environment
- virtual article
- difference
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
- G02B27/0172—Head mounted characterised by optical features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Optics & Photonics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a method, an apparatus, an AR device and a storage medium for improving the display effect of virtual objects on an AR device. The method includes: acquiring a real environment image, and constructing a real environment image coordinate system; acquiring viewing angle information, and constructing a viewing angle coordinate system; obtaining the coordinate system difference between the real environment image coordinate system and the viewing angle coordinate system; obtaining the eye movement state; adjusting the viewing angle coordinate system according to the eye movement state; adjusting the coordinate system difference according to the adjusted viewing angle coordinate system; and adjusting the coordinates of the virtual object according to the adjusted coordinate system difference. With this technical solution, virtual objects rendered by the AR device are superimposed on objects in the real scene more accurately, giving the user a better viewing experience.
Description
Technical Field
The present invention relates to the technical field of AR devices, and in particular to a method, an apparatus, an AR device and a storage medium for improving the display effect of virtual objects on an AR device.
Background Art
Augmented Reality (AR) is a technology that "seamlessly" integrates real-world information with virtual information. Physical information that would be difficult to experience within a given time and space in the real world (visual information, sound, taste, touch, etc.) is simulated by computers and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, producing a sensory experience that goes beyond reality. The real environment and virtual objects are superimposed in real time into the same image or space.
At present, head-mounted AR devices mainly use two display modes: video see-through and optical see-through.
In the video see-through mode, a miniature camera mounted on the AR device captures images of the real scene. Through scene understanding and analysis, the computer superimposes the information and images to be added onto the camera's video signal, fusing the computer-generated virtual scene with the real scene, and the result is finally presented to the user on a display.
In the optical see-through mode, the real environment and virtual information are fused by a pair of transflective (half-mirrored) optical combiners mounted in front of the eyes. The real scene is presented to the user directly through the transflective lenses, while the computer-generated virtual information is magnified by the optical system and reflected by the transflective lenses into the eyes. The real scene and the virtual information converge on the retina, forming a superimposed image of the virtual and the real.
The problem with existing optical see-through AR devices is that the coordinate position of computer-generated virtual information, such as a virtual object, is determined in a coordinate system established from the real scene captured by the camera, i.e., a coordinate system constructed from the camera's viewpoint; whereas the real scene the user sees is obtained directly by the human eye, i.e., determined in a coordinate system formed by the eye. Because these two coordinate systems deviate from each other, computer-generated virtual objects are misplaced when superimposed on the real scene, degrading the virtual-real superposition, i.e., the display effect of virtual objects, and harming the user experience.
Therefore, there is a need for a method, an apparatus, an AR device and a storage medium that can improve the display effect of virtual objects on an AR device.
Summary of the Invention
To overcome the above technical defects, the object of the present invention is to provide a method, an apparatus, an AR device and a storage medium that can improve the display effect of virtual objects on an AR device.
The present invention discloses a method for improving the display effect of virtual objects on an AR device, including:
acquiring a real environment image, and constructing a real environment image coordinate system;
acquiring viewing angle information, and constructing a viewing angle coordinate system;
obtaining the coordinate system difference between the real environment image coordinate system and the viewing angle coordinate system;
obtaining the eye movement state;
adjusting the viewing angle coordinate system according to the eye movement state;
adjusting the coordinate system difference according to the adjusted viewing angle coordinate system;
adjusting the coordinates of the virtual object according to the adjusted coordinate system difference.
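The core of the claimed method, obtaining the coordinate system difference and applying it to the virtual object, can be sketched as follows. This is a minimal illustration under the simplifying assumption that the two coordinate systems differ only by a translation; the function names and the tuple representation of 3D points are assumptions for the example, not prescribed by the patent.

```python
# Sketch of the coordinate-system-difference and coordinate-adjustment
# steps, assuming a pure translational offset between the two systems.

def coordinate_system_difference(p_camera, p_view):
    """Coordinate system difference, measured through one reference point
    whose coordinates are known in both systems."""
    return tuple(v - c for c, v in zip(p_camera, p_view))

def adjust_virtual_object(p_virtual_camera, diff):
    """Map a virtual object's camera-frame coordinates into the viewing
    angle frame by adding the coordinate system difference."""
    return tuple(p + d for p, d in zip(p_virtual_camera, diff))

# Reference point seen at (1.0, 2.0, 3.0) by the camera and at
# (1.5, 2.25, 3.0) from the eye's viewpoint:
diff = coordinate_system_difference((1.0, 2.0, 3.0), (1.5, 2.25, 3.0))
print(diff)                                          # (0.5, 0.25, 0.0)
print(adjust_virtual_object((4.0, 5.0, 6.0), diff))  # (4.5, 5.25, 6.0)
```

In the full method, the difference itself is recomputed whenever the viewing angle coordinate system is adjusted for eye movement, and the same addition is then reapplied.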
Preferably, after acquiring the real environment image and constructing the real environment image coordinate system, the method further includes:
adding a virtual object, and obtaining the coordinates of the virtual object in the real environment image coordinate system.
Preferably, adding a virtual object and obtaining its coordinates in the real environment image coordinate system includes:
identifying a reference object in the real environment through object recognition, and obtaining the coordinates of the reference object in the real environment image coordinate system;
adding the virtual object according to a preset positional relationship between the virtual object and the reference object, and obtaining the coordinates of the virtual object in the real environment image coordinate system.
Preferably, after adjusting the coordinates of the virtual object according to the adjusted coordinate system difference, the method further includes:
projecting the virtual object onto the display screen of the AR device at the adjusted coordinates.
Preferably, the eye movement state is obtained through eye tracking technology;
the eye tracking technology includes: tracking changes in iris angle, tracking changes in features of the eyeball and its surroundings, and infrared tracking.
Preferably, the origin of the viewing angle coordinate system is the center point of the eyeball, the x-axis and y-axis of the viewing angle coordinate system coincide with the x-axis and y-axis of the retinal plane or the iris plane, and the z-axis of the viewing angle coordinate system is the viewing direction of the line of sight.
Preferably, obtaining the coordinate system difference between the real environment image coordinate system and the viewing angle coordinate system includes:
identifying a reference object in the real environment through object recognition, and obtaining the coordinates of the reference object in both the real environment image coordinate system and the viewing angle coordinate system;
calculating the coordinate difference of the reference object between the real environment image coordinate system and the viewing angle coordinate system;
this coordinate difference is the coordinate system difference between the real environment image coordinate system and the viewing angle coordinate system.
The present invention also discloses an apparatus for improving the display effect of virtual objects on an AR device, including:
a first acquisition unit, configured to acquire a real environment image and construct a real environment image coordinate system;
a second acquisition unit, configured to acquire viewing angle information and construct a viewing angle coordinate system;
a third acquisition unit, configured to obtain the coordinate system difference between the real environment image coordinate system and the viewing angle coordinate system;
a fourth acquisition unit, configured to obtain the eye movement state;
a first adjustment unit, configured to adjust the viewing angle coordinate system according to the eye movement state;
a second adjustment unit, configured to adjust the coordinate system difference according to the adjusted viewing angle coordinate system;
a third adjustment unit, configured to adjust the coordinates of the virtual object according to the adjusted coordinate system difference.
The present invention also discloses an AR device, including a memory, a processor, and a computer program stored in the memory and configured to be executed by the processor, wherein the processor implements the above method when executing the computer program.
The present invention also discloses a computer-readable storage medium, including a stored computer program, wherein when the computer program runs, the device on which the computer-readable storage medium resides is controlled to execute the above method.
Compared with the prior art, adopting the above technical solution effectively improves the display effect of virtual objects on the AR device: virtual objects match their superposition onto the real scene more closely, the visual effect of the AR glasses is improved, and the user experience is thereby enhanced.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for improving the display effect of virtual objects on an AR device according to an embodiment of the present invention.
Detailed Description of Embodiments
The advantages of the present invention are further described below with reference to the accompanying drawings and specific embodiments.
Exemplary embodiments are described in detail here, with examples illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as recited in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to limit the present disclosure. As used in the present disclosure and the appended claims, the singular forms "a", "said" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in the present disclosure to describe various pieces of information, such information should not be limited by these terms. These terms are only used to distinguish information of the same type from one another. For example, without departing from the scope of the present disclosure, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Depending on the context, the word "if" as used herein may be interpreted as "when", "while" or "in response to determining".
In the description of the present invention, it should be understood that orientation or positional terms such as "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation; they are therefore not to be construed as limiting the invention.
In the description of the present invention, unless otherwise specified and limited, the terms "mounted", "joined" and "connected" are to be understood broadly: the connection may, for example, be mechanical or electrical, or an internal communication between two elements; it may be direct, or indirect through an intermediate medium. Those of ordinary skill in the art can understand the specific meanings of these terms according to the specific circumstances.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements serve only to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "component" may be used interchangeably.
Referring to FIG. 1, a schematic flowchart of a method for improving the display effect of virtual objects on an AR device according to an embodiment of the present invention is shown.
The AR device may be AR glasses or a helmet-mounted display, and the display mode of the AR device is optical see-through.
The method includes the following steps:
S1: Acquire a real environment image, and construct a real environment image coordinate system.
Specifically, the AR device is provided with a camera for capturing images of the real environment, and the processing module of the AR device computes and constructs a real environment image coordinate system from the images captured by the camera; this is the camera coordinate system.
The origin of the real environment image coordinate system is the optical center of the camera; its x-axis and y-axis coincide with the x-axis and y-axis of the imaging plane, and its z-axis is the optical axis of the camera, perpendicular to the x-axis and y-axis.
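The camera coordinate system defined above relates to the imaging plane through the standard pinhole model: image coordinates are the 3D coordinates scaled by focal length over depth. A small illustrative sketch, in which the focal length value is an assumption (the patent does not specify camera intrinsics):

```python
# Illustrative pinhole projection from the camera coordinate system onto
# the imaging plane: a point (X, Y, Z) maps to (f*X/Z, f*Y/Z), where Z is
# the distance along the optical axis and f is the focal length.

def project_to_image_plane(point_camera, f):
    """Project a 3D point in the camera coordinate system onto the
    imaging plane."""
    X, Y, Z = point_camera
    if Z <= 0:
        raise ValueError("point must lie in front of the camera (Z > 0)")
    return (f * X / Z, f * Y / Z)

print(project_to_image_plane((2.0, 1.0, 4.0), f=2.0))  # (1.0, 0.5)
```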
After step S1, the method further includes step S1': adding a virtual object, and obtaining the coordinates of the virtual object in the real environment image coordinate system.
That is, after the image of the real environment is acquired, the required virtual objects are added on the basis of that image, and their coordinates in the real environment image coordinate system are obtained. Note that "adding" here may mean constructing a corresponding virtual object from the real environment image, or adding an existing (already generated) virtual object into the real environment image coordinate system according to preset addition rules. A virtual object here may be a picture, a video, an animation, text, etc., i.e., something not present in the real environment image that is added by the processor.
Specifically, in step S1', a reference object in the real environment is first identified through object recognition, and its coordinates in the real environment image coordinate system are obtained; then the virtual object is added according to the preset positional relationship between the virtual object and the reference object, and the coordinates of the virtual object in the real environment image coordinate system are obtained.
Object recognition is a fundamental technology in the field of computer vision. Its purpose is to identify what objects are present in an image and to report the position and orientation of each object in the scene the image represents.
Object recognition usually includes the following steps: image preprocessing, feature extraction, feature selection, modeling, matching, and localization.
The main approaches to object recognition include: statistics-based object classification, part-based recognition, generative and discriminative methods, model-based object recognition, and context-based object recognition.
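As a toy illustration of the "matching" and "localization" steps listed above, the sketch below locates a reference patch in an image by exhaustive sum-of-squared-differences (SSD) template matching on small integer grids. Real systems would use the feature-based or model-based methods named above; this brute-force version only shows what "localization" returns.

```python
# Toy template matching: find where a small template best matches an
# image, minimizing the sum of squared pixel differences (SSD).

def locate_template(image, template):
    """Return the (row, col) of the top-left corner of the best match."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = [[0, 0, 0, 0],
         [0, 9, 8, 0],
         [0, 7, 9, 0],
         [0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(locate_template(image, template))  # (1, 1)
```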
The above steps are illustrated below with the example of improving the display of a virtual circular vase on a circular tabletop in the real environment.
In step S1, a real environment image containing the circular tabletop is captured by the camera of the AR device, and the real environment image coordinate system is computed and constructed from that image: its origin is the optical center of the camera, its x-axis and y-axis coincide with the x-axis and y-axis of the imaging plane, and its z-axis is the optical axis of the camera, perpendicular to the x-axis and y-axis.
In step S1', after the real environment image is acquired, the circular tabletop in the real environment is first identified through object recognition; the circular tabletop is the reference object mentioned above. Once recognized, the tabletop is localized, and the coordinates (x1, y1, z1) of its center in the real environment image coordinate system are obtained. The virtual vase is then added according to the preset positional relationship between the virtual circular vase and the circular tabletop. Let the coordinates of the center of the vase's base in the real environment image coordinate system be (x2, y2, z2), and let the preset rule place the vase 3 cm to the right of and 5 cm above the center of the tabletop. Adding the virtual circular vase according to this preset positional relationship gives its coordinates (x2, y2, z2) = (x1+3, y1+5, z1) in the real environment image coordinate system.
S2: Acquire viewing angle information, and construct a viewing angle coordinate system.
Specifically, the viewing angle information includes the position and state of the eyeball, the direction of the line of sight, the viewing angle, and so on; it can be obtained through eye tracking technology.
The eye tracking technology includes: tracking changes in iris angle, tracking changes in features of the eyeball and its surroundings, and infrared tracking.
After the viewing angle information is obtained, the processor of the AR device computes and constructs a viewing angle coordinate system from it. Specifically, the origin of the viewing angle coordinate system is the center point of the eyeball, its x-axis and y-axis coincide with the x-axis and y-axis of the retinal plane or the iris plane, and its z-axis is the viewing direction of the line of sight. In some embodiments, the eyeball center point may be the pupil center point.
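One plausible way to realize the viewing angle coordinate system defined above is to take the gaze direction as the z-axis and complete an orthonormal basis with cross products against a world "up" hint. The up vector and all function names are assumptions for illustration; the patent only fixes the origin (eyeball center) and the meaning of the axes.

```python
# Build an orthonormal viewing frame from a gaze direction:
# z = normalized gaze, x = up × z (normalized), y = z × x.

import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    n = math.sqrt(sum(x*x for x in v))
    return tuple(x / n for x in v)

def viewing_frame(gaze, up=(0.0, 1.0, 0.0)):
    """Return (x_axis, y_axis, z_axis) of the viewing angle coordinate
    system; z_axis is the viewing direction of the line of sight."""
    z = normalize(gaze)
    x = normalize(cross(up, z))
    y = cross(z, x)  # unit length already, since z and x are orthonormal
    return x, y, z

x, y, z = viewing_frame((0.0, 0.0, 1.0))
print(x, y, z)  # (1.0, 0.0, 0.0) (0.0, 1.0, 0.0) (0.0, 0.0, 1.0)
```

Note that this construction degenerates when the gaze is parallel to the up hint; a real implementation would pick a fallback up vector in that case.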
S3: Obtain the coordinate system difference between the real environment image coordinate system and the viewing angle coordinate system.
Specifically, once the real environment image coordinate system and the viewing angle coordinate system have been established, the positional relationship between the two can be known directly from the positional relationship between the eyeball and the camera, the viewing angle information, and so on; the coordinate system difference between them can therefore be obtained directly.
In some embodiments, the coordinate system difference between the real environment image coordinate system and the viewing angle coordinate system can also be obtained through a reference object. Specifically, a reference object in the real environment is identified through object recognition, and its coordinates in both the real environment image coordinate system and the viewing angle coordinate system are obtained; the coordinate difference of the reference object between the two systems is calculated; this coordinate difference is the coordinate system difference between the real environment image coordinate system and the viewing angle coordinate system.
Continuing with the example of the virtual circular vase on the circular tabletop:
In step S2, the viewing angle information is obtained through eye tracking, and the viewing angle coordinate system is computed and constructed: its origin is the center point of the eyeball, its x-axis and y-axis coincide with the x-axis and y-axis of the retinal plane or the iris plane, and its z-axis is the viewing direction of the line of sight. Further, since the circular tabletop was already identified through object recognition in step S1', in this step the coordinates of the center of the tabletop in the viewing angle coordinate system are obtained as (x3, y3, z3).
In step S3, from the coordinates (x1, y1, z1) of the tabletop center in the real environment image coordinate system and its coordinates (x3, y3, z3) in the viewing angle coordinate system, the coordinate difference of the tabletop center between the two systems is calculated as (x3-x1, y3-y1, z3-z1); this is the coordinate system difference between the real environment image coordinate system and the viewing angle coordinate system. In some embodiments, the coordinates of the virtual circular vase can be adjusted at this point according to the coordinate system difference: adding the difference to the vase's coordinates in the real environment image coordinate system gives its coordinates in the viewing angle coordinate system, (x3-x1+x2, y3-y1+y2, z3-z1+z2). The image of the virtual circular vase can then be projected onto the display screen of the AR device at these coordinates. In this way, the deviation between the camera-based coordinate system and the human-eye viewing angle coordinate system is compensated, improving the superposition of the virtual object onto the actual scene, i.e., the display effect of the virtual object.
S4: Obtain the eye movement state.
Specifically, the movement state of the eyeball, i.e., its direction and angle of rotation, is tracked through eye tracking technology, because the direction of the line of sight changes as the eyeball moves. To improve the display of the virtual object, its coordinates must be adjusted according to the eye movement, i.e., the change in gaze direction, so that it better matches the user's line of sight.
S5:根据所述眼球运动状态,调整所述视角坐标系。S5: Adjust the viewing angle coordinate system according to the eye movement state.
具体地,对于本步骤,在一些实施例中,在初始的视角坐标系的基础上,获取眼球运动的变化量,即眼球的转动方向、角度等信息,并将该变化量叠加到初始的视角坐标系上,得到变化后的视角坐标系。Specifically, for this step, in some embodiments, on the basis of the initial viewing angle coordinate system, the variation of eye movement, that is, information such as the rotation direction and angle of the eyeball, is obtained, and the variation is superimposed on the initial viewing angle On the coordinate system, the changed viewing angle coordinate system is obtained.
对于本步骤,在另一些实施例中,通过眼球追踪技术,对转动一定方向、角度后的眼球,获取其视角相关信息,并重新建立视角坐标系,重新建立的视角坐标系即为调整后的所述视角坐标系。For this step, in other embodiments, the eyeball tracking technology is used to obtain the relevant information of the viewing angle of the eyeball after rotating in a certain direction and angle, and re-establish the viewing angle coordinate system, and the re-established viewing angle coordinate system is the adjusted viewing angle coordinate system. The viewing angle coordinate system.
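One way to picture the coordinate-system rebuild in step S5 is as a rotation of the viewing-angle frame by the tracked eye angles. The sketch below is only an illustration: the patent does not specify the math, and the yaw/pitch convention used here is an assumption.

```python
# Hypothetical sketch of step S5: express a direction from the initial
# viewing-angle frame in a frame rotated by the tracked eye movement.
import math

def rotation_yaw_pitch(yaw, pitch):
    """3x3 rotation matrix: yaw about the y axis, then pitch about the
    x axis (angles in radians). Equivalent to Rx(pitch) @ Ry(yaw)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    return [
        [cy,        0.0,  sy],
        [sp * sy,   cp,  -sp * cy],
        [-cp * sy,  sp,   cp * cy],
    ]

def rotate(R, p):
    """Apply a 3x3 matrix to a 3-vector."""
    return tuple(sum(R[i][j] * p[j] for j in range(3)) for i in range(3))

# Eye tracker reports a 10-degree turn to the right (yaw), no pitch:
R = rotation_yaw_pitch(math.radians(10.0), 0.0)

# The old "straight ahead" direction, expressed in the adjusted frame:
forward = (0.0, 0.0, 1.0)
adjusted_forward = rotate(R, forward)
```

In the variant of the first embodiment, each new eye-movement delta would be composed with the previous matrix rather than rebuilt from absolute angles.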
S6: Adjust the coordinate-system difference according to the adjusted viewing-angle coordinate system.

For this step, in some embodiments, from information such as the eyeball's position and line-of-sight direction after the movement (that is, its positional relationship to the camera), the positional relationship between the real-environment image coordinate system and the viewing-angle coordinate system adjusted in step S5 can be obtained directly, and from it the coordinate-system difference between the two.

In other embodiments, the coordinate-system difference between the real-environment image coordinate system and the viewing-angle coordinate system adjusted in step S5 can instead be computed from a reference object's coordinate difference between the two systems: subtracting the reference object's coordinates in one system from its coordinates in the other yields the adjusted coordinate-system difference.
S7: Adjust the coordinates of the virtual object according to the adjusted coordinate-system difference.

In this step, adding the adjusted coordinate-system difference obtained in step S6 to the virtual object's coordinates in the real-environment image coordinate system yields its coordinates in the adjusted viewing-angle coordinate system.

Further, the method also includes step S8: projecting the virtual object onto the display screen of the AR device at the adjusted coordinates. Projecting at the adjusted coordinates effectively compensates for the deviation between the camera-based coordinate system and the human-eye viewing-angle coordinate system; even as the eyes move, the virtual object is displayed in the proper position, improving how it is superimposed on the real scene, that is, improving its display effect.
The steps above are again illustrated with the example of improving the display of a virtual circular vase on a circular tabletop in the real environment.

In steps S4 and S5, the eye movement state is obtained by eye tracking, and the viewing-angle coordinate system is rebuilt from the post-movement viewing-angle information, giving the adjusted viewing-angle coordinate system.

In step S6, the coordinates (x4, y4, z4) of the tabletop center in the adjusted viewing-angle coordinate system are obtained, and the coordinate difference of the tabletop center between the real-environment image coordinate system and the adjusted viewing-angle coordinate system is computed as (x4-x1, y4-y1, z4-z1). This is the coordinate-system difference between the two systems.

In step S7, adding the adjusted coordinate-system difference from step S6 to the virtual vase's coordinates in the real-environment image coordinate system gives the virtual object's coordinates in the adjusted viewing-angle coordinate system: (x4-x1+x2, y4-y1+y2, z4-z1+z2).

In step S8, the virtual object is projected onto the display screen of the AR device at the adjusted coordinates (x4-x1+x2, y4-y1+y2, z4-z1+z2).
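The worked example of steps S6 through S8 reduces to the same offset arithmetic as step S3, now using the post-movement reference coordinates (x4, y4, z4). A minimal sketch, with all numeric values invented for illustration:

```python
# Hypothetical sketch of steps S6-S7: recompute the coordinate-system
# difference after an eye movement and shift the virtual object by it.

def adjusted_object_coords(obj_cam, ref_cam, ref_eye_adjusted):
    """Object coordinates in the adjusted viewing-angle frame:
    obj_cam + (ref_eye_adjusted - ref_cam)."""
    diff = tuple(e - c for c, e in zip(ref_cam, ref_eye_adjusted))
    return tuple(p + d for p, d in zip(obj_cam, diff))

# Tabletop center: (x1, y1, z1) in the camera frame, and (x4, y4, z4)
# in the adjusted viewing-angle frame after the eye has moved.
ref_cam = (1.0, 2.0, 3.0)
ref_eye_adjusted = (1.3, 1.9, 3.2)

vase_cam = (0.5, 2.0, 3.0)   # (x2, y2, z2)
vase_eye = adjusted_object_coords(vase_cam, ref_cam, ref_eye_adjusted)
# vase_eye corresponds to (x4-x1+x2, y4-y1+y2, z4-z1+z2): the coordinates
# at which step S8 projects the vase onto the AR display.
```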
It should be noted that steps S1 to S7 of the present invention have no strict ordering requirement. For example, steps S4 and S2 may be performed concurrently, and step S3 may follow step S5. Any method comprising the above steps S1 to S7, even in a different order, falls within the scope of protection of this application.
The invention also discloses an apparatus for improving the display effect of virtual objects on an AR device, comprising:

a first acquisition unit for acquiring a real-environment image and constructing a real-environment image coordinate system;

a second acquisition unit for acquiring viewing-angle information and constructing a viewing-angle coordinate system;

a third acquisition unit for acquiring the coordinate-system difference between the real-environment image coordinate system and the viewing-angle coordinate system;

a fourth acquisition unit for acquiring the eye movement state;

a first adjustment unit for adjusting the viewing-angle coordinate system according to the eye movement state;

a second adjustment unit for adjusting the coordinate-system difference according to the adjusted viewing-angle coordinate system;

a third adjustment unit for adjusting the coordinates of the virtual object according to the adjusted coordinate-system difference.

Further, the apparatus also includes:

a first adding unit for adding a virtual object and acquiring its coordinates in the real-environment image coordinate system.

Further, the apparatus also includes:

a projection unit for projecting the virtual object onto the display screen of the AR device at the adjusted coordinates.
The invention also discloses an AR device comprising a memory, a processor, and a computer program stored in the memory and configured to be executed by the processor; when executing the computer program, the processor implements the method of the embodiments above.

In this application, the processor may include one or more processing cores. By running or executing the computer program stored in the memory (including instructions, programs, code sets, or instruction sets) and calling data stored in the memory, it performs the various functions of the AR device and processes data. Optionally, the processor may be implemented in at least one of the following hardware forms: digital signal processing (DSP), field-programmable gate array (FPGA), or programmable logic array (PLA). The processor may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.

The memory may include random access memory (RAM) or read-only memory (ROM). Optionally, the memory includes a non-transitory computer-readable storage medium. The memory may be used to store the computer program (including instructions, programs, code, code sets, or instruction sets).
The AR device further includes:

a camera for acquiring images of the real environment;

an eye-tracking device for acquiring viewing-angle information and the eye movement state;

a display screen for projecting virtual objects. The display screen may be a transparent lens.
The invention also discloses a computer-readable storage medium containing a stored computer program; when the computer program runs, the device on which the storage medium resides is controlled to perform the method of the embodiments above. Specifically, the computer program may be built into the AR device, which then implements the steps of the method by executing the built-in program.

The computer-readable storage medium may be any of various types of storage media, optionally a non-transitory storage medium, such as a removable storage device, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, or any other medium capable of storing program code.

It should be noted that the embodiments of the present invention are preferred embodiments and do not limit the invention in any form. Any person skilled in the art may use the technical content disclosed above to produce equivalent effective embodiments by change or modification; any modification or equivalent variation of the above embodiments made according to the technical essence of the present invention, without departing from the content of its technical solution, still falls within the scope of the technical solution of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910517424.8A CN110321005B (en) | 2019-06-14 | 2019-06-14 | Method and device for improving virtual object display effect of AR equipment, AR equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110321005A true CN110321005A (en) | 2019-10-11 |
CN110321005B CN110321005B (en) | 2025-07-25 |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111429567A (en) * | 2020-03-23 | 2020-07-17 | 成都威爱新经济技术研究院有限公司 | A method for real environment reflection of digital virtual human eyeballs |
CN111880654A (en) * | 2020-07-27 | 2020-11-03 | 歌尔光学科技有限公司 | Image display method and device, wearable device and storage medium |
CN111881861A (en) * | 2020-07-31 | 2020-11-03 | 北京市商汤科技开发有限公司 | Display method, device, equipment and storage medium |
CN113163134A (en) * | 2021-04-21 | 2021-07-23 | 山东新一代信息产业技术研究院有限公司 | Harsh environment vision enhancement method and system based on augmented reality |
CN113568504A (en) * | 2021-07-22 | 2021-10-29 | 福州富星光学科技有限公司 | AR display processing method and system based on mobile equipment |
CN114125415A (en) * | 2020-08-28 | 2022-03-01 | 奥迪股份公司 | System, method, and storage medium for presenting abnormal parts of vehicle through augmented reality |
CN114842172A (en) * | 2022-04-01 | 2022-08-02 | 合众新能源汽车有限公司 | Display control method, device, medium, head-up display system and vehicle |
CN114860084A (en) * | 2022-06-08 | 2022-08-05 | 广东技术师范大学 | Interactive presentation method of template-edited virtual content in MR/AR headsets |
CN115060229A (en) * | 2021-09-30 | 2022-09-16 | 西安荣耀终端有限公司 | Method and device for measuring a moving object |
CN116266071A (en) * | 2021-12-17 | 2023-06-20 | 广州视享科技有限公司 | A display system adjustment method, device and display system |
CN116700500A (en) * | 2023-08-07 | 2023-09-05 | 江西科技学院 | Multi-scene VR interaction method, system and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1117074A2 (en) * | 2000-01-13 | 2001-07-18 | Mixed Reality Systems Laboratory Inc. | Augmented reality presentation apparatus and method, and storage medium |
EP1369769A2 (en) * | 2002-06-06 | 2003-12-10 | Siemens Corporate Research, Inc. | System and method for measuring the registration accuracy of an augmented reality system |
CN104808795A (en) * | 2015-04-29 | 2015-07-29 | 王子川 | Gesture recognition method for reality-augmented eyeglasses and reality-augmented eyeglasses system |
WO2016115872A1 (en) * | 2015-01-21 | 2016-07-28 | 成都理想境界科技有限公司 | Binocular ar head-mounted display device and information display method thereof |
WO2019004565A1 (en) * | 2017-06-29 | 2019-01-03 | 주식회사 맥스트 | Calibration method for adjusting augmented reality object and head-mounted display for performing same |
CN109801379A (en) * | 2019-01-21 | 2019-05-24 | 视辰信息科技(上海)有限公司 | General augmented reality glasses and its scaling method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |