CN118484117A - Method, device, equipment and storage medium for interaction in a virtual scene - Google Patents

Method, device, equipment and storage medium for interaction in a virtual scene

Info

Publication number
CN118484117A
CN118484117A
Authority
CN
China
Prior art keywords
user
virtual
scene
real object
virtual scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310152723.2A
Other languages
Chinese (zh)
Inventor
马新笃
柳寅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202310152723.2A
Publication of CN118484117A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

According to embodiments of the present disclosure, methods, apparatuses, devices, and storage media for interacting in a virtual scene are provided. The method includes drawing a virtual object in a virtual scene, the virtual object being generated based on a real object in a physical scene; detecting a predetermined operation made by a user in the physical scene with respect to the real object; and, in response to detecting the predetermined operation, manipulating at least one interface element associated with the virtual object in the virtual scene. In this way, a real object in the physical scene is used to realize the user's interaction in the virtual scene, improving the user's interactive experience.

Description

Method, device, equipment and storage medium for interaction in a virtual scene

Technical Field

Example embodiments of the present disclosure relate generally to the field of computers, and in particular to methods, apparatuses, devices, and computer-readable storage media for interacting in a virtual scene.

Background Art

Virtual Reality (VR), Extended Reality (XR), Augmented Reality (AR), and Mixed Reality (MR) can combine 3D graphics, multimedia, simulation, display, servo, and other technologies to produce a realistic virtual scene offering multiple sensory experiences such as 3D vision, touch, and smell. For example, VR uses a computer to simulate a virtual world in 3D space, providing users with an immersive visual, auditory, and tactile experience. AR superimposes virtual objects onto the real environment in real time so that both exist in the same space simultaneously. MR is a new visualization environment that merges the real and virtual worlds, in which objects from the physical real-world scene coexist in real time with objects in the virtual world.

Virtual scenes can thus be used to provide users with an immersive interactive experience.

Summary of the Invention

In a first aspect of the present disclosure, a method for interacting in a virtual scene is provided. The method includes: drawing a virtual object in the virtual scene, the virtual object being generated based on a real object in a physical scene; detecting a predetermined operation performed by a user in the physical scene with respect to the real object; and, in response to detecting the predetermined operation, manipulating at least one interface element associated with the virtual object in the virtual scene.
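
The three steps of the first aspect can be sketched in code. This is a minimal, hypothetical illustration only; the class and method names below are assumptions made for this sketch, not an API defined by the disclosure.

```python
# Hypothetical sketch of the first aspect's three steps: draw a virtual
# object generated from a real object, detect a predetermined operation,
# and manipulate an associated interface element. All names are
# illustrative assumptions, not part of the disclosure.

class VirtualScene:
    def __init__(self):
        self.objects = []   # virtual objects drawn in the scene
        self.elements = {}  # interface elements keyed by virtual-object name

    def draw_virtual_object(self, real_object_name):
        """Generate and draw a virtual object based on a real object."""
        virtual = {"source": real_object_name,
                   "name": f"virtual_{real_object_name}"}
        self.objects.append(virtual)
        self.elements[virtual["name"]] = {"visible": False}
        return virtual

    def on_predetermined_operation(self, virtual, operation):
        """Manipulate the interface element associated with the virtual object."""
        if operation == "tap":  # one possible predetermined operation
            self.elements[virtual["name"]]["visible"] = True

scene = VirtualScene()
desk = scene.draw_virtual_object("desktop")
scene.on_predetermined_operation(desk, "tap")
print(scene.elements["virtual_desktop"]["visible"])  # True
```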

In a second aspect of the present disclosure, an apparatus for interacting in a virtual scene is provided. The apparatus includes: a drawing unit configured to draw a virtual object in the virtual scene, the virtual object being generated based on a real object in a physical scene; a detection unit configured to detect a predetermined operation performed by a user in the physical scene with respect to the real object; and a manipulation unit configured to, in response to detecting the predetermined operation, manipulate at least one interface element associated with the virtual object in the virtual scene.

In a third aspect of the present disclosure, an electronic device is provided. The device includes at least one processing unit and at least one memory, the at least one memory being coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit. The instructions, when executed by the at least one processing unit, cause the device to perform the method of the first aspect.

In a fourth aspect of the present disclosure, a computer-readable storage medium is provided. A computer program is stored on the computer-readable storage medium and is executable by a processor to implement the method of the first aspect.

It should be understood that the content described in this Summary is not intended to identify key or essential features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood from the following description.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. In the drawings, identical or similar reference numerals denote identical or similar elements, in which:

FIG. 1 shows a schematic diagram of an example environment in which embodiments of the present disclosure can be implemented;

FIG. 2 shows a flowchart of a process of interacting in a virtual scene according to some embodiments of the present disclosure;

FIG. 3 shows a schematic diagram of an example of determining a drawing size according to some embodiments of the present disclosure;

FIG. 4 shows a schematic diagram of an example of a set of interface elements according to some embodiments of the present disclosure;

FIGS. 5A, 5B, 5C, and 5D show schematic diagrams of examples of manipulating interface elements according to some embodiments of the present disclosure;

FIG. 6 shows a schematic diagram of an example of presentation content of a virtual scene associated with a second user according to some embodiments of the present disclosure;

FIG. 7 shows a block diagram of an apparatus for interacting in a virtual scene according to some embodiments of the present disclosure;

FIG. 8 shows a block diagram of a device capable of implementing various embodiments of the present disclosure.

DETAILED DESCRIPTION

It should be understood that, before the technical solutions disclosed in the embodiments of the present disclosure are used, the user should be informed, in an appropriate manner and in accordance with relevant laws and regulations, of the types, scope of use, and usage scenarios of the personal information involved in the present disclosure, and the user's authorization should be obtained.

For example, in response to receiving an active request from a user, prompt information is sent to the user to clearly inform the user that the requested operation will require obtaining and using the user's personal information. Based on the prompt information, the user can thus autonomously decide whether to provide personal information to the software or hardware, such as an electronic device, application, server, or storage medium, that performs the operations of the technical solutions of the present disclosure.

As an optional but non-limiting implementation, in response to receiving an active request from the user, the prompt information may be sent to the user in the form of a pop-up window, in which the prompt information may be presented as text. In addition, the pop-up window may also carry selection controls for the user to choose to "agree" or "disagree" to providing personal information to the electronic device.

It should be understood that the above processes of notifying the user and obtaining the user's authorization are merely illustrative and do not limit the implementations of the present disclosure; other methods that satisfy relevant laws and regulations may also be applied to implementations of the present disclosure.

It should be understood that the data involved in this technical solution (including but not limited to the data itself and the acquisition or use of the data) shall comply with the requirements of relevant laws, regulations, and related provisions.

Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure can be implemented in various forms and should not be construed as limited to the embodiments set forth herein; on the contrary, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.

It should be noted that the headings of any sections/subsections provided herein are not restrictive. Various embodiments are described throughout this document, and any type of embodiment may be included under any section/subsection. In addition, embodiments described in any section/subsection may be combined in any manner with any other embodiments described in the same section/subsection and/or different sections/subsections.

In the description of the embodiments of the present disclosure, the term "including" and similar terms should be understood as open-ended inclusion, that is, "including but not limited to". The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The term "some embodiments" should be understood as "at least some embodiments". The terms "first", "second", and the like may refer to different or identical objects. Other explicit and implicit definitions may also be included below.

In the description of the embodiments of the present disclosure, the term "element", such as "icon element" or "interface element", is used to refer to content presented in a virtual scene. Such content is usually presented in the form of pixels; for example, an interface element may be an interface presented in pixel form in the virtual scene. In the virtual scene, a user may view the interface through the presented pixels and/or interact with it, so as to obtain content and interact within the virtual scene.

As briefly mentioned above, a virtual scene generation device can generate virtual scenes corresponding to different usage requirements. For example, the virtual scene generation device generates a virtual conference room as the virtual scene according to meeting requirements. After a user connects, the virtual conference room generates a virtual avatar for the user, and the user can use his or her own virtual avatar to interact with the virtual avatars of other users.

After the generation device has generated a virtual scene, the user is allowed to interact with the generation device through an interaction device. The generation device can determine the manipulation content according to instructions sent by the interaction device, and adjust the presentation elements in the virtual scene according to the manipulation content, so as to provide interaction corresponding to the user in the virtual scene. Usually, the generation device can generate, in the virtual scene, indication elements corresponding to the interaction device, and the user can control the indication elements via the interaction device to send interaction instructions. For example, the generation device can generate, in the virtual scene, an icon element corresponding to an interaction device comprising a virtual-reality handle. The user can control the movement of the icon element by moving the virtual-reality handle in the physical scene, and can issue instructions to the presentation element of the virtual scene indicated by the cursor by triggering physical buttons on the virtual-reality handle.

In such a configuration, however, the icon element corresponding to the interaction device exists independently of the presentation elements in the virtual scene, so that the user's perception of operating the interaction device cannot be unified with the perception of interacting with the presentation elements, resulting in a poor interactive experience. For example, when the icon element is a virtual image of a hand, the user performs a "hand-grabbing action" on a presentation element by triggering physical buttons on the virtual-reality handle; the interaction perception the user obtains from visual feedback in the virtual scene differs from the perception of interacting with the handle, resulting in a poor interactive experience. As another example, when a head-mounted display obtains the user's instructions by capturing the user's hand movements in the air, the user lacks corresponding interaction feedback in the physical scene, again resulting in a poor interactive experience.

Embodiments of the present disclosure propose a solution for interacting in a virtual scene. According to various embodiments of the present disclosure, a virtual object generated based on a real object in a physical scene is drawn in a virtual scene; a predetermined interaction made by a user in the physical scene with respect to the real object is detected; and, in response to detecting the predetermined interaction, at least one interface element associated with the virtual object in the virtual scene is manipulated. Embodiments of the present disclosure thus use a real object in the physical scene to realize the user's interaction in the virtual scene, improving the user's interactive experience.

Example Environment

FIG. 1 shows a schematic diagram of an example environment 100 in which embodiments of the present disclosure can be implemented. The environment 100 includes a physical scene 110, an example of an actual real-world scene, in which a user 111 is sliding an object (e.g., a finger) 114 across the surface of a real object 113 (that is, an interaction surface on which the user can interact with the real object). The real object 113 is, for example, a desktop. Assuming the user 111 wears an XR device 112, such as a head-mounted display or smart glasses, the XR device 112 can present to the user 111 a virtual scene 120 corresponding to the physical scene 110.

The virtual scene 120 includes a virtual object 121 drawn based on the real object 113 in the physical scene 110. The virtual scene 120 also includes icon elements 122 and 123 corresponding to the hands of the user 111 (for example, icon element 122 corresponds to the user's left hand and icon element 123 to the right hand), used to feed the user's actions back to the user 111 in the virtual scene 120, for example as virtual hand objects. In some embodiments, the virtual object includes a surface element corresponding to the interaction surface of the real object. For example, when the real object is a table, the interaction surface of the table may be the tabletop facing the user. Accordingly, the virtual object may include a surface element corresponding to the side of the tabletop facing the user (e.g., a virtual desktop).

The physical scene 110 also includes an electronic device 115 for obtaining images of the physical scene 110 and determining, based on those images, the relative position between the user 111's object 114 (e.g., a finger) and the real object 113 (e.g., a desktop). The electronic device 115 may be a separate device able to communicate with the XR device 112 and/or other image-capture devices, such as a server or computing node for image or data processing, or it may be integrated with the XR device 112 and/or other image-capture devices. In some embodiments, the electronic device 115 may be implemented as the XR device 112; that is, in this case, the XR device 112 may implement all the functions of the electronic device 115. It should be understood that the above description of the electronic device 115 is merely exemplary and non-restrictive; the electronic device 115 may be implemented as devices of various forms, structures, or categories, and embodiments of the present disclosure are not limited in this respect.

The XR device 112 may adjust the presentation elements in the virtual scene 120 according to actions performed by the user 111 in the physical scene 110 on the real object 113 (e.g., a desktop) using the object 114 (e.g., a finger). Such an object 114 may include, for example, at least one appropriate body part of the user, such as a finger, wrist, or palm.

For example, the XR device 112 adjusts the icon elements 122 and 123 that present the hand motions of the user 111 in the virtual scene 120 according to a tapping motion that the user 111 performs with the object 114 (e.g., a finger) in the physical scene 110. For example, the XR device 112 may present a feedback element 124 on the icon element 123 corresponding to the finger with which the user 111 performs the tapping motion. For example, as also shown in the example corresponding to FIG. 5A, when four objects 114 (e.g., fingers) of the user's right hand tap and make contact with the real object 113 (e.g., the desktop), the XR device 112 may present a set of feedback elements 124 on the virtual object 121 (e.g., the virtual desktop).

In some embodiments, when the object 114 is embodied as a finger, the action may include at least one of tapping, sliding, and touching. Accordingly, the XR device 112 may configure the feedback element 124 for different actions. For example, when the action of the object 114 (e.g., a finger) is sliding, the presented feedback element 124 may be a bright bar that slides along with the icon elements 122 and 123.
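
The mapping from detected finger actions to feedback elements described above could, for instance, be configured as a simple lookup. The element names here are assumptions made for this sketch, not names used by the disclosure.

```python
# Illustrative mapping of detected finger actions to feedback elements,
# as described above. Element names are assumptions for this sketch.

FEEDBACK_BY_ACTION = {
    "tap":   "highlight_dot",  # shown where the fingertip meets the surface
    "slide": "trailing_bar",   # bright bar that follows the icon elements
    "touch": "contact_ring",
}

def feedback_for(action):
    """Return the feedback element configured for an action, if any."""
    return FEEDBACK_BY_ACTION.get(action)

print(feedback_for("slide"))  # trailing_bar
```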

In some embodiments, the XR device 112 is pre-configured with scene elements for generating virtual scenes, and can obtain the corresponding scene elements according to the usage requirements of the virtual scene to generate presentation elements and thereby generate the virtual scene 120. For example, if the virtual scene is a virtual conference room, the scene elements may include desktop elements, seat elements, and the like.

The real object 113 and the virtual scene 120 shown in FIG. 1 are merely exemplary and are not intended to limit the scope of the present disclosure. The XR device 112 can be applied to generate any virtual scene from any real object.

It should be understood that the structure and functions of the environment 100 are described for exemplary purposes only and do not imply any limitation on the scope of the present disclosure. Some example embodiments of the present disclosure will be described below with continued reference to the accompanying drawings.

Example Interaction Flow and Effects in a Virtual Scene

FIG. 2 shows a process 200 for interacting in a virtual scene according to some embodiments of the present disclosure. The process 200 is described by way of example as being implemented at the XR device 112. FIGS. 3, 4, 5A, 5B, 5C, 5D, and 6 show examples of interacting in a virtual scene according to some embodiments of the present disclosure. For ease of discussion, the process 200 will be described with reference to the environment 100 of FIG. 1 and the examples in FIGS. 3, 4, 5A, 5B, 5C, 5D, and 6.

At block 210, a virtual object is drawn in the virtual scene, the virtual object being generated based on a real object in the physical scene. According to embodiments of the present disclosure, the XR device 112 draws in the virtual scene 120 a virtual object determined based on a real object in the physical scene. In some embodiments, the XR device 112 determines drawing information based on the type of the real object 113 and the size of the real object 113 in the physical scene 110, and draws the virtual object 121 in the virtual scene 120 accordingly based on the drawing information, so that the real object 113 and the virtual object correspond more closely.

In some embodiments, after the user 111 selects the type of the virtual object, the XR device 112 may draw the virtual object according to an object size provided by the user 111. In this case, the user 111 may measure the size of the real object using the XR device 112 or another device, such as a surveying device, that can provide a measurement function for the XR device 112. For example, in FIG. 3, the user 111 (not shown) may use a surveying device 310, which provides a measurement function for the XR device 112 (not shown), to perform calibration by taking the left edge position 320 of the real object 113 (e.g., the desktop) as the starting point and horizontally dragging the surveying device 310 to the right edge position 330 as the end point. In this case, the calibration results for the left edge position 320 and the right edge position 330 determine the horizontal size of the real object 113 (e.g., the desktop).

In some embodiments, the user 111 may calibrate the size of only part of the object as needed, so as to more reasonably determine the user 111's operating space in the physical scene 110. For example, after selecting the left edge position 320 as the starting point, the user 111 may select a position 340 on the real object 113 (e.g., the desktop) as the end point, and the area between position 320 and position 340 is determined as the size of the real object 113 (e.g., the desktop).

In some embodiments, the surveying device 310 may also be preset with a direction parameter so that the above size can be generated from the user's calibration result in a single direction. For example, in FIG. 3, the surveying device 310 may preset a length 350 in the vertical direction as a size parameter. In this case, the user 111 may use the surveying device 310 to calibrate the left edge position 320 of the real object 113 (e.g., the desktop) and continuously drag the surveying device 310 from the left edge to the right edge position 330. The surveying device 310 takes the size of the closed rectangle formed by the left edge position 320 and right edge position 330 of the real object 113 (e.g., the desktop) and the vertical length 350 as the drawing size of the virtual object 121 (e.g., the virtual desktop).
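
The single-direction calibration above can be expressed as a short computation: the user marks two edge positions, and a preset vertical length closes the rectangle. A minimal sketch, with illustrative units and values:

```python
# Minimal sketch of deriving a drawing size from a single-direction
# calibration: the width comes from the marked left and right edge
# positions, and a preset vertical length completes the closed rectangle.
# Units and values are illustrative assumptions.

def drawing_size(left_edge_x, right_edge_x, preset_vertical_length):
    """Return (width, depth) of the calibrated rectangle, e.g. in metres."""
    width = abs(right_edge_x - left_edge_x)
    return (width, preset_vertical_length)

# Edges marked at 0.5 m and 1.5 m, with a preset vertical length of 0.6 m:
print(drawing_size(0.5, 1.5, 0.6))  # (1.0, 0.6)
```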

In some embodiments, the selection of the real object may be determined based on the user's instructions. For example, the XR device 112 captures a scene image of the real physical scene 110 in which the user is located and presents it to the user 111. The user 111 may select the real object 113 (e.g., the desktop) in the scene image and then control the XR device 112 to generate the corresponding virtual object 121 (e.g., the virtual desktop).

At block 220, a predetermined operation performed by the user in the physical scene with respect to the real object is detected. According to embodiments of the present disclosure, the XR device 112 detects a predetermined operation performed by the user in the physical scene with respect to the real object. A predetermined operation is one for which a corresponding instruction is pre-configured: the XR device can determine the instruction sent by the user based on the predetermined action performed, so that the user can interact with the XR device 112 through predetermined operations. For example, the predetermined operation includes at least one object 114 (e.g., a finger) of the user 111 coming into contact with the real object 113 (e.g., the desktop). In some embodiments of this case, to avoid false triggers caused by brief contact between the user 111's object 114 (e.g., a finger) and the real object 113 (e.g., the desktop), the predetermined operation may be configured to include a specific operation indicating that the contact between at least one finger of the user and the real object lasts for a predetermined duration. As another example, the predetermined operation includes whether at least one object 114 (e.g., a finger) of the user 111 slides on the real object 113 (e.g., the desktop). In some embodiments of this case, to avoid false triggers caused by finger movements made without interaction intent, the predetermined operation may be configured to include a slide of at least one object 114 (e.g., a finger) of the user 111 relative to the real object 113 (e.g., the desktop) in a specific direction. The user's actions can thus be distinguished and the user's interaction intent understood more clearly, reducing false triggers and preventing unintended actions in the physical scene from being detected and the XR device 112 from mistakenly interacting with the user.

在一些实施例中,检测预定操作包括:XR设备检测用户的至少一个肢体部分与真实物体的相对位置。进一步地,XR设备基于相对位置,检测物理场景中的用户关于真实物体所做出的预定操作。具体的,如上述说明的,XR设备112可与本地和/或其他图像捕捉设备进行通信,对物理场景110下用户的对象114以及真实物体113(例如,桌面)的图像进行采集。如上文所介绍的,这样的对象114例如可以包括用户的至少一个适当的肢体部分,例如手指、手腕、手掌等。进一步地,XR设备112基于采集到的图像分别确定对象114(例如,手指)和真实物体113(例如,桌面)的空间位置后,基于对象114(例如,手指)的空间位置以及真实物体113(例如,桌面)的空间位置确定对象114(例如,手指)与真实物体113(例如,桌面)的相对位置。In some embodiments, detecting the predetermined operation includes: the XR device detects the relative position of at least one limb part of the user and the real object. Further, the XR device detects, based on the relative position, the predetermined operation performed by the user on the real object in the physical scene. Specifically, as described above, the XR device 112 can communicate with local and/or other image capture devices to capture images of the user's object 114 and the real object 113 (e.g., the desktop) in the physical scene 110. As introduced above, such an object 114 may include, for example, at least one appropriate limb part of the user, such as a finger, wrist, palm, etc. Further, after the XR device 112 determines the spatial positions of the object 114 (e.g., finger) and the real object 113 (e.g., desktop) based on the captured images, it determines the relative position of the object 114 (e.g., finger) and the real object 113 (e.g., desktop) based on their spatial positions.

XR设备112可以根据对象114(例如,手指)与真实物体113(例如,桌面)的相对位置确定对象114(例如,手指)对于真实物体113(例如,桌面)所执行的操作。例如,对象114(例如,手指)的空间位置处于真实物体113(例如,桌面)对应的空间位置中,可确定对象114(例如,手指)与真实物体113(例如,桌面)的相对位置为对象114(例如,手指)放置于真实物体113(例如,桌面)上,即对象114(例如,手指)与真实物体113(例如,桌面)存在“接触”这一动作。在一些实施例中,XR设备112可基于连续捕捉的图像中所确定的对象114(例如,手指)与真实物体113(例如,桌面)的相对位置确定对象114(例如,手指)对应真实物体113(例如,桌面)的动作。例如,基于连续捕捉的图像确定对象114(例如,手指)相对于真实物体113(例如,桌面)存在多个同一接触点的“接触”动作,可确定对象114(例如,手指)对应于桌面的动作为“持续接触”。又例如,基于连续捕捉的图像确定对象114(例如,手指)相对于真实物体113(例如,桌面)存在多个连续的、不同的接触点的“接触”动作,可确定对象114(例如,手指)对应于桌面的动作为“滑动”。The XR device 112 may determine the operation performed by the object 114 (e.g., finger) on the real object 113 (e.g., desktop) based on the relative position of the object 114 (e.g., finger) and the real object 113 (e.g., desktop). For example, if the spatial position of the object 114 (e.g., finger) is in the spatial position corresponding to the real object 113 (e.g., desktop), the relative position of the object 114 (e.g., finger) and the real object 113 (e.g., desktop) may be determined as the object 114 (e.g., finger) is placed on the real object 113 (e.g., desktop), that is, the object 114 (e.g., finger) and the real object 113 (e.g., desktop) have an action of "contact". In some embodiments, the XR device 112 may determine the action of the object 114 (e.g., finger) corresponding to the real object 113 (e.g., desktop) based on the relative position of the object 114 (e.g., finger) and the real object 113 (e.g., desktop) determined in the continuously captured images. For example, if it is determined based on the continuously captured images that the object 114 (e.g., finger) has a plurality of "contact" actions with the same contact points relative to the real object 113 (e.g., desktop), the action of the object 114 (e.g., finger) corresponding to the desktop can be determined as "continuous contact". 
For another example, if it is determined based on the continuously captured images that the object 114 (e.g., finger) has a plurality of "contact" actions with continuous and different contact points relative to the real object 113 (e.g., desktop), the action of the object 114 (e.g., finger) corresponding to the desktop can be determined as "sliding".
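为便于理解,上述基于空间位置判断“接触”并由连续帧接触点区分“持续接触”与“滑动”的逻辑,可用如下示意性的Python草图表达(其中的阈值 CONTACT_EPS、MOVE_EPS 与所有函数名均为本文为说明而假设,并非本公开的实现)。For illustration, the logic above — determining "contact" from spatial positions and distinguishing "sustained contact" from "sliding" across consecutive frames — can be sketched in Python as follows (the thresholds CONTACT_EPS and MOVE_EPS and all names are illustrative assumptions, not the disclosed implementation):

```python
import math

# 假设的阈值, 仅作示意 / hypothetical thresholds, for illustration only
CONTACT_EPS = 0.005  # 指尖到表面的最大接触距离(米) max fingertip-to-surface distance (m)
MOVE_EPS = 0.002     # 判定“同一接触点”的位移容差(米) tolerance for "same contact point" (m)

def contact_point(finger_pos, surface_origin, surface_normal):
    """若指尖与表面接触, 返回其在表面上的投影点, 否则返回 None。
    Return the fingertip's projection onto the surface if touching, else None."""
    # 指尖到表面平面的有符号距离 signed distance from fingertip to the surface plane
    d = sum((f - o) * n for f, o, n in zip(finger_pos, surface_origin, surface_normal))
    if abs(d) > CONTACT_EPS:
        return None
    return tuple(f - d * n for f, n in zip(finger_pos, surface_normal))

def classify_action(contact_points):
    """由连续帧的接触点序列区分“持续接触”与“滑动”。
    Classify "sustained contact" vs. "slide" from per-frame contact points."""
    points = [p for p in contact_points if p is not None]
    if len(points) < 2:
        return "none"
    # 接触点之间的最大位移 maximum displacement among the contact points
    span = max(math.dist(points[0], p) for p in points[1:])
    return "slide" if span > MOVE_EPS else "sustained-contact"
```

按此草图,多个落在同一接触点附近的“接触”被归类为“持续接触”,接触点连续变化则归类为“滑动”,与上文的示例一致。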

在此情况下,XR设备112在记录对象114(例如,手指)对于真实物体113(例如,桌面)的动作后,检测记录的动作中是否存在有预设操作。例如上述的至少一个手指与真实物体的接触达到预定时长、至少一个手指在特定方向上的滑动。In this case, after recording the action of the object 114 (e.g., finger) on the real object 113 (e.g., desktop), the XR device 112 detects whether there is a preset operation in the recorded action, such as the contact between at least one finger and the real object for a predetermined time or the sliding of at least one finger in a specific direction.

在框230,响应于检测到预定操作,操控虚拟场景中与虚拟物体相关联的至少一个界面元素。在本公开的一些实施例中,XR设备112可与虚拟物体121对应地配置界面元素,建立界面元素的操控动作与预定操作之间的关联。界面元素用于在虚拟场景中为用户提供信息和/或与用户进行交互,例如提示信息、工具图标。例如,界面元素用于呈现工具图标,用户111可在虚拟场景120中通过例如“触击”该工具图标的方式,在虚拟场景下调用该工具图标对应的工具,从而利用界面元素实现虚拟场景下的交互。这种交互通过如下方式触发:在XR设备112检测到预定操作的情况下,XR设备112根据预定操作对应地操控界面元素。例如,XR设备112根据预定操作对应地在虚拟场景中唤出、呈现界面元素。At block 230, in response to detecting the predetermined operation, at least one interface element associated with the virtual object in the virtual scene is manipulated. In some embodiments of the present disclosure, the XR device 112 may configure interface elements corresponding to the virtual object 121, and establish an association between manipulation actions of the interface elements and predetermined operations. An interface element is used to provide information to the user and/or interact with the user in the virtual scene, such as prompt information or a tool icon. For example, where an interface element presents a tool icon, the user 111 may, for example, "tap" the tool icon in the virtual scene 120 to call up the tool corresponding to the tool icon, thereby using interface elements to realize interaction in the virtual scene. Such interaction is triggered as follows: when the XR device 112 detects a predetermined operation, the XR device 112 manipulates the interface element accordingly, for example, calling out and presenting the interface element in the virtual scene according to the predetermined operation.

在一些实施例中,界面元素的呈现位置与虚拟物体相关联。例如,在虚拟物体为虚拟桌面的情况下,界面元素的呈现位置设置于虚拟桌面上。In some embodiments, the presentation position of the interface element is associated with the virtual object. For example, when the virtual object is a virtual desktop, the presentation position of the interface element is set on the virtual desktop.

在一些实施例中,在界面元素用于呈现工具图标的情况下,工具图标可基于虚拟场景进行相应地配置。例如,在虚拟场景为虚拟会议室的情况下,可选择虚拟会议中所使用的文档调用、文件投送、文档分享以及文档记录等工具的工具图标作为界面元素。In some embodiments, when the interface element is used to present a tool icon, the tool icon may be configured accordingly based on the virtual scene. For example, when the virtual scene is a virtual conference room, tool icons for tools such as document calling, file delivery, document sharing, and document recording used in the virtual meeting may be selected as interface elements.

在一些实施例中,在界面元素用于呈现工具图标的情况下,XR设备112呈现界面元素时,还可呈现工具图标所指向工具的预览界面,以方便用户了解工具图标所指向的内容。例如,在工具图标为指向文档调用的工具时,预览界面中可呈现用于调用的文档。In some embodiments, when the interface element is used to present a tool icon, the XR device 112 may also present a preview interface of the tool pointed to by the tool icon when presenting the interface element, so as to facilitate the user to understand the content pointed to by the tool icon. For example, when the tool icon is a tool pointing to a document call, the document used for calling may be presented in the preview interface.

在一些情况下,用户需要将虚拟场景中与交互设备所对应的图标元素移动至界面元素处,以实现应用的交互、触发。例如在上述的虚拟会议室中,在用户希望将文档投送至虚拟会议屏幕进行呈现的情况下,虚拟会议屏幕可能与用户所处位置较远,用户无法直接与该虚拟会议屏幕进行交互、投送文档。又例如在虚拟空间中存在多个相距较远的界面元素的情况下,会要求用户在虚拟场景中大范围、频繁地调整图标元素的位置,用户操作消耗过高、体验不佳。In some cases, the user needs to move an icon element corresponding to an interactive device in the virtual scene to an interface element to trigger and interact with an application. For example, in the virtual conference room above, when the user wants to project a document onto the virtual conference screen for presentation, the virtual conference screen may be far from the user's location, and the user cannot directly interact with the virtual conference screen to project the document. As another example, when there are multiple interface elements far apart in the virtual space, the user is required to adjust the position of icon elements frequently and over a large range in the virtual scene, which is costly for the user and makes for a poor experience.

在本公开的一些实施例中,可生成包括多个界面元素的一组界面元素,即利用一组界面元素的形式呈现。在一些实施例中,可通过调整各独立的界面元素的显示位置的方式,由独立的界面元素在视觉上组成一组界面元素,例如横向依次排序的多个互相独立的界面元素。在一些实施例中,XR设备112还可将多个界面元素组成一组界面元素,可在生成的一组界面元素中通过例如滑动等方式选择目标界面元素,例如,在界面元素对应工具的工具图标的情况下,一组界面元素可以对应地配置为工具栏。In some embodiments of the present disclosure, a group of interface elements including a plurality of interface elements may be generated, i.e., presented in the form of a group of interface elements. In some embodiments, the independent interface elements may be visually composed of a group of interface elements by adjusting the display position of each independent interface element, such as a plurality of mutually independent interface elements arranged in sequence horizontally. In some embodiments, the XR device 112 may also combine a plurality of interface elements into a group of interface elements, and a target interface element may be selected from the generated group of interface elements by, for example, sliding, etc. For example, in the case where the interface element corresponds to a tool icon of a tool, a group of interface elements may be correspondingly configured as a toolbar.

具体的,图4中示出了根据本公开的一些实施例的虚拟场景下的示例400的示意图。在虚拟场景120为虚拟会议室场景的情况下,XR设备112可对应该虚拟会议室场景选取实现文档调用的工具图标作为界面元素411、实现文档投递的工具图标作为界面元素412,以及实现文档记录的工具图标作为界面元素413。XR设备112生成包括界面元素411、界面元素412以及界面元素413的一组界面元素410。XR设备112可根据用户111的指示,以在一组界面元素410中滑动的方式选择目标界面元素。在一些实施例中,XR设备112中还可呈现用于标识目标界面元素的标记元素420,以便于用户111根据标记元素420选定目标界面元素。例如,用户111可通过向XR设备112发送向左滑动或向右滑动的动作以操作标记元素420所选中的目标界面元素。在一些实施例中,XR设备112可利用不同于其他界面元素的呈现方式呈现目标界面元素,以实现与标记元素420相同的效果,例如,将作为目标界面元素的界面元素412的显示位置调整为高于一组界面元素410中的其他界面元素。Specifically, FIG. 4 shows a schematic diagram of an example 400 in a virtual scene according to some embodiments of the present disclosure. Where the virtual scene 120 is a virtual conference room scene, the XR device 112 may, corresponding to the virtual conference room scene, select a tool icon for document calling as interface element 411, a tool icon for document delivery as interface element 412, and a tool icon for document recording as interface element 413. The XR device 112 generates a set of interface elements 410 including the interface elements 411, 412 and 413. The XR device 112 may select a target interface element, according to the user 111's instruction, by sliding within the set of interface elements 410. In some embodiments, the XR device 112 may also present a marking element 420 for identifying the target interface element, so that the user 111 may select the target interface element according to the marking element 420. For example, the user 111 may issue a leftward or rightward slide to the XR device 112 to operate the target interface element selected by the marking element 420.
In some embodiments, XR device 112 may present the target interface element in a different manner from other interface elements so as to achieve the same effect as marking element 420 , for example, adjusting the display position of interface element 412 as the target interface element to be higher than the interface elements in a group of interface elements 410 .

在一些实施例中,操控虚拟场景中与虚拟物体相关联的至少一个界面元素包括:响应于检测到用户关于真实物体所做出的第一操作,在虚拟场景中与虚拟物体相关联地呈现一组界面元素。具体的,第一操作可供用户使用,以便于在用户做出第一操作的情况下,唤起一组界面元素。XR设备112中可与第一操作关联地存储操作参数,以便于通过检测用户的操作参数是否满足该第一操作的操作参数,确定用户是否做出第一操作。这样的操作参数例如包括:用户的手指与真实物体是否相接触、接触的时长等。由此方便用户唤起界面元素。在一些实施例中,第一操作指示用户的至少一个手指与真实物体的接触达到预定时长。预定时长通常可基于响应精度对应地进行设置,例如,设置为1000ms。由此来区分用户的操作意图,减少误触。In some embodiments, manipulating at least one interface element associated with the virtual object in the virtual scene includes: in response to detecting a first operation performed by the user on the real object, presenting a set of interface elements in the virtual scene in association with the virtual object. Specifically, the first operation is available to the user so that, when the user performs the first operation, a set of interface elements is called up. The XR device 112 may store operation parameters in association with the first operation, so as to determine whether the user performs the first operation by detecting whether the user's operation parameters meet those of the first operation. Such operation parameters include, for example, whether the user's finger is in contact with the real object, the duration of contact, etc. This makes it convenient for the user to call up interface elements. In some embodiments, the first operation indicates that at least one finger of the user remains in contact with the real object for a predetermined duration. The predetermined duration can usually be set according to the response accuracy, for example, 1000 ms. This distinguishes the user's operation intent and reduces accidental touches.
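上述“接触达到预定时长”的第一操作检测,可用如下示意性草图表达(采样数据的结构为本文假设,1000ms阈值沿用上文示例,并非本公开的实现)。The detection of the first operation (contact held for a predetermined duration) can be sketched as follows (the sample structure is an assumption; the 1000 ms threshold follows the example above, not the disclosed implementation):

```python
HOLD_MS = 1000  # 预定时长(毫秒), 沿用上文示例 / predetermined duration (ms), per the example above

def detect_hold(samples, hold_ms=HOLD_MS):
    """samples: (毫秒时间戳, 是否接触) 序列; 存在不间断接触达到 hold_ms 时返回 True。
    samples: sequence of (timestamp_ms, is_touching); True once an
    uninterrupted contact run lasts at least hold_ms."""
    start = None
    for t, touching in samples:
        if touching:
            if start is None:
                start = t       # 记录本段接触的起始时刻 record start of this contact run
            if t - start >= hold_ms:
                return True     # 接触时长满足预定时长, 视为第一操作 duration met: first operation
        else:
            start = None        # 接触中断, 重新计时 contact broken, reset the timer
    return False
```

短暂或被打断的接触不会满足预定时长,从而如上文所述减少误触。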

具体的,基于图4中所示例的一组界面元素410,进一步参考图5A。图5A中示出根据本公开的一些实施例的虚拟场景下的示例500A的示意图。在预先配置第一操作为指示用户111的至少一个对象114(例如,手指)与真实物体113(例如,桌面)的接触达到1000ms以上的情况下,XR设备112检测到用户111的“左手”、“右手”存在动作时,调整虚拟场景120中的图标元素122、123。并且,XR设备112基于“右手”的四个对象114(例如,手指)同时与真实物体113(例如,桌面)存在接触的情况,在虚拟物体121(例如,虚拟桌面)上与图标元素123对应地呈现一组反馈元素124。XR设备112检测到“右手”的四个对象114(例如,手指)同时与真实物体113(例如,桌面)存在接触,在四个对象114(例如,手指)与真实物体113(例如,桌面)的接触时长满足预先设置的1000ms的情况下,XR设备112在虚拟场景120中与真实物体113(例如,桌面)相关联地呈现一组界面元素410。相应地,XR设备112在呈现一组界面元素410时,将目标界面元素412的显示位置调整为高于一组界面元素410中的其他界面元素进行显示。并且,XR设备112在虚拟物体121(例如,虚拟桌面)中与图标元素123对应地呈现多个反馈元素124。Specifically, based on the set of interface elements 410 illustrated in FIG. 4, further reference is made to FIG. 5A. FIG. 5A shows a schematic diagram of an example 500A in a virtual scene according to some embodiments of the present disclosure. Where the first operation is pre-configured to indicate that at least one object 114 (e.g., a finger) of the user 111 remains in contact with the real object 113 (e.g., the desktop) for more than 1000 ms, the XR device 112, upon detecting motion of the "left hand" and "right hand" of the user 111, adjusts the icon elements 122 and 123 in the virtual scene 120. Furthermore, based on the four objects 114 (e.g., fingers) of the "right hand" being in contact with the real object 113 (e.g., the desktop) at the same time, the XR device 112 presents a set of feedback elements 124 on the virtual object 121 (e.g., the virtual desktop) corresponding to the icon element 123. When the XR device 112 detects that the four objects 114 (e.g., fingers) of the "right hand" are in contact with the real object 113 (e.g., the desktop) at the same time, and the contact duration of the four objects 114 (e.g., fingers) with the real object 113 (e.g., the desktop) meets the preset 1000 ms, the XR device 112 presents the set of interface elements 410 in the virtual scene 120 in association with the real object 113 (e.g., the desktop). Accordingly, when presenting the set of interface elements 410, the XR device 112 adjusts the display position of the target interface element 412 to be higher than the other interface elements in the set of interface elements 410. The XR device 112 also presents multiple feedback elements 124 in the virtual object 121 (e.g., the virtual desktop) corresponding to the icon element 123.

在一些实施例中,在检测到用户关于真实物体所做出的预定操作包括第二操作的情况下,改变与虚拟物体相关联的一组界面元素的显示位置,以确定一组界面元素中被选择的目标界面元素。具体的,与上述第一操作相类似,XR设备112可配置提供不同功能的第二操作,例如,提供改变上述界面元素的显示位置的第二操作。相应地,XR设备112在检测到用户做出该第二操作的情况下,相应地对界面元素的显示位置进行调整。由此,进一步方便用户确定、调整界面元素。在一些实施例中,第二操作指示用户的至少一个手指相对于真实物体的、在第一方向上的滑动。其中,XR设备112可以设置第一方向,第一方向的选择通常与一组界面元素中界面元素的排列方向对应,例如排列方向为水平排列的情况下,第一方向可确定为水平左向和水平右向。在一些实施例中,XR设备112可设置滑动距离阈值,并将检测到的用户的至少一个手指相对于真实物体的、在第一方向上满足距离阈值的滑动确定为有效的滑动。其中,滑动距离阈值可根据真实物体的尺寸、用户的手部尺寸等设置,例如,设置滑动阈值为2cm。由此来区分用户的操作意图,减少误触。In some embodiments, when the predetermined operation detected as performed by the user on the real object includes a second operation, the display position of a set of interface elements associated with the virtual object is changed to determine the selected target interface element in the set of interface elements. Specifically, similar to the first operation described above, the XR device 112 may be configured with second operations providing different functions, for example, a second operation that changes the display position of the interface elements described above. Accordingly, when the XR device 112 detects that the user performs the second operation, it adjusts the display position of the interface elements accordingly, further facilitating the user in determining and adjusting interface elements. In some embodiments, the second operation indicates a slide of at least one finger of the user in a first direction relative to the real object, wherein the XR device 112 may set the first direction. The choice of the first direction usually corresponds to the arrangement direction of the interface elements in the set; for example, when the arrangement is horizontal, the first direction may be determined as horizontal-left and horizontal-right. In some embodiments, the XR device 112 may set a slide distance threshold, and determine a detected slide of at least one finger of the user relative to the real object in the first direction that meets the distance threshold as a valid slide. The slide distance threshold may be set according to the size of the real object, the size of the user's hand, etc., for example, 2 cm. This distinguishes the user's operation intent and reduces accidental touches.
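上述“在第一方向上满足滑动距离阈值”的第二操作检测,可用如下草图表达(坐标轴的选取为本文假设,2cm阈值沿用上文示例)。The second-operation detection (a slide in the first direction meeting the distance threshold) can be sketched as follows (the axis choice is an assumption; the 2 cm threshold follows the example above):

```python
SLIDE_MIN_M = 0.02  # 滑动距离阈值(米), 即上文示例的2cm / slide threshold (m), the 2 cm above

def detect_swipe(track, axis=0, min_dist=SLIDE_MIN_M):
    """track: 接触点坐标序列; 当沿 axis(第一方向)的净位移达到阈值时,
    返回 'left' 或 'right', 否则返回 None(视为无效滑动)。
    Return 'left'/'right' when the net displacement along `axis` meets the
    threshold; otherwise None (not a valid slide)."""
    if len(track) < 2:
        return None
    delta = track[-1][axis] - track[0][axis]
    if abs(delta) < min_dist:
        return None  # 未达到滑动阈值, 视为误触 below threshold: treated as accidental
    return "right" if delta > 0 else "left"
```

未达到阈值的位移返回 None,从而如上文所述区分操作意图、减少误触。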

具体的,参考图5B中示出的示例500B的示意图。预先配置预定操作包括指示用户111的至少一个对象114(例如,手指)相对于真实物体的、在水平方向上的滑动的第二操作的情况下,XR设备112检测到上述图5A中的四个对象114(例如,手指)在真实物体113(例如,桌面)上存在水平右向的滑动。在四个对象114(例如,手指)的滑动距离超过预设的2cm的情况下,XR设备将标记元素420所标记的目标界面元素由界面元素412调整至界面元素411。Specifically, refer to the schematic diagram of example 500B shown in FIG5B. In the case where the pre-configured predetermined operation includes a second operation indicating that at least one object 114 (e.g., finger) of the user 111 slides in a horizontal direction relative to a real object, the XR device 112 detects that the four objects 114 (e.g., fingers) in FIG5A above slide horizontally to the right on the real object 113 (e.g., desktop). When the sliding distance of the four objects 114 (e.g., fingers) exceeds the preset 2 cm, the XR device adjusts the target interface element marked by the marking element 420 from the interface element 412 to the interface element 411.

在一些实施例中,还包括呈现与目标界面元素对应的子界面。具体的,继续参考图5B中的示例500B,以其中作为目标界面元素的界面元素411进行示例。XR设备112可在确定目标界面元素为界面元素411后,对应地生成目标界面元素对应的子界面510。例如,在界面元素411体现为工具图标的情况下,子界面510可以用于呈现该工具图标所指向的工具的操控界面。又例如,子界面也可以为上述说明的工具图标所指向工具的预览界面。In some embodiments, a sub-interface corresponding to the target interface element is also presented. Specifically, continuing with example 500B in FIG. 5B, interface element 411 serves as the target interface element for illustration. After determining that the target interface element is the interface element 411, the XR device 112 may correspondingly generate a sub-interface 510 for the target interface element. For example, where the interface element 411 is embodied as a tool icon, the sub-interface 510 may be used to present the control interface of the tool the tool icon points to. As another example, the sub-interface may also be the preview interface of the tool pointed to by the tool icon described above.

在一些实施例中,XR设备112可保持呈现的目标界面元素及对应的子界面至下一次检测到预定操作。例如,在XR设备112无法检测到用户111在物理场景110中关于真实物体113的操作或XR设备112检测到用户111在物理场景110中所做的动作并非预定操作的情况下,XR设备112保持呈现体现为目标界面元素的界面元素411及对应的子界面510。In some embodiments, the XR device 112 may keep presenting the target interface element and the corresponding sub-interface until the next time a predetermined operation is detected. For example, when the XR device 112 cannot detect the operation of the user 111 on the real object 113 in the physical scene 110 or the XR device 112 detects that the action performed by the user 111 in the physical scene 110 is not a predetermined operation, the XR device 112 keeps presenting the interface element 411 embodied as the target interface element and the corresponding sub-interface 510.

在存在子界面的情况下,还可相应地配置操作指示以实现对子界面的操控,以便于用户根据需求对是否呈现子界面进行调整。在一些实施例中,操控虚拟场景中与虚拟物体相关联的至少一个界面元素包括:响应于检测到用户关于真实物体所做出的第三操作,停止呈现与一组界面元素中的目标界面元素相对应的子界面。具体的,与上述第一操作、第二操作相类似,还可配置用于控制子界面(例如,停止呈现子界面)的第三操作。在XR设备112检测到用户关于真实物体所做出的第三操作的情况下,停止呈现与一组界面元素中的目标界面元素相对应的子界面。在一些实施例中,第三操作指示用户的至少一个手指相对于真实物体的、在第二方向上的滑动。其中,XR设备112可以设置第二方向,第二方向与上述的第一方向为不同方向,例如设置第一方向为水平左向、右向时,相应地设置第二方向为竖直向下。具体的,继续参考图5C中的示例500C,XR设备112可在检测到用户111关于真实物体113(例如,桌面)所做出的竖直向下滑动时,停止呈现子界面510。Where a sub-interface exists, operation instructions may be configured accordingly to manipulate the sub-interface, so that the user can adjust whether the sub-interface is presented as needed. In some embodiments, manipulating at least one interface element associated with the virtual object in the virtual scene includes: in response to detecting a third operation performed by the user on the real object, stopping presentation of the sub-interface corresponding to the target interface element in the set of interface elements. Specifically, similar to the first and second operations described above, a third operation for controlling the sub-interface (e.g., stopping its presentation) may also be configured. When the XR device 112 detects the third operation performed by the user on the real object, it stops presenting the sub-interface corresponding to the target interface element in the set of interface elements. In some embodiments, the third operation indicates a slide of at least one finger of the user in a second direction relative to the real object. The XR device 112 may set the second direction, which differs from the first direction described above; for example, when the first direction is set to horizontal-left and horizontal-right, the second direction is correspondingly set to vertically downward. Specifically, continuing with example 500C in FIG. 5C, the XR device 112 may stop presenting the sub-interface 510 upon detecting a vertical downward slide made by the user 111 on the real object 113 (e.g., the desktop).
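上述第二操作(水平滑动切换目标界面元素)与第三操作(竖直向下滑动停止呈现子界面)对一组界面元素的操控,可用如下状态草图表达(滑动方向与目标元素移动方向的对应关系为本文假设)。How the second operation (horizontal slide changes the target element) and the third operation (downward slide closes the sub-interface) manipulate a set of interface elements can be sketched as follows (the mapping from slide direction to target movement is an assumption):

```python
class Toolbar:
    """示意性的一组界面元素状态 / illustrative state of a set of interface elements."""

    def __init__(self, elements):
        self.elements = list(elements)
        self.target = 0        # 标记元素所指向的目标界面元素索引 index of the target element
        self.sub_open = False  # 对应子界面是否呈现 whether the sub-interface is shown

    def on_swipe(self, direction):
        if direction == "down":                    # 第三操作: 停止呈现子界面
            self.sub_open = False
        elif direction in ("left", "right"):       # 第二操作: 水平滑动切换目标元素
            step = 1 if direction == "right" else -1
            self.target = max(0, min(self.target + step, len(self.elements) - 1))
            self.sub_open = True                   # 选中目标后呈现其对应的子界面
```

例如,对包含文档调用、文档投递、文档记录三个工具图标的工具栏,水平滑动在三者之间移动标记,竖直向下滑动则收起子界面。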

在一些实施例中,可以与设置第三操作相同的方式设置第四操作,XR设备112可根据用户111做出的第四操作相应地对一组界面元素410进行操作。例如,隐藏一组界面元素410、将一组界面元素410的呈现形式调整至“沉浸模式”等。通常可配置“沉浸模式”下的一组界面元素的呈现尺寸小于非“沉浸模式”下的一组界面元素,以便于用户111基于视觉了解一组界面元素410的状态。在此情况下,可参考图5D,图5D中示出了根据本公开的一些实施例的用于在虚拟场景中交互的示例500D。在示例500D中,XR设备112可根据用户111做出的第四操作相应地将一组界面元素410调整至“沉浸模式”,得到一组界面元素530,调整后的一组界面元素530的呈现尺寸小于一组界面元素410。其中对于第四操作的配置方式、检测方式等可参考上述第三操作,此处不再重复说明。In some embodiments, a fourth operation may be set in the same manner as the third operation, and the XR device 112 may operate on the set of interface elements 410 according to the fourth operation performed by the user 111, for example, hiding the set of interface elements 410, or adjusting the presentation form of the set of interface elements 410 to an "immersive mode". The presentation size of the set of interface elements in the "immersive mode" is typically configured to be smaller than in the non-"immersive mode", so that the user 111 can visually understand the state of the set of interface elements 410. In this case, reference may be made to FIG. 5D, which shows an example 500D for interaction in a virtual scene according to some embodiments of the present disclosure. In example 500D, the XR device 112 may adjust the set of interface elements 410 to the "immersive mode" according to the fourth operation performed by the user 111 to obtain a set of interface elements 530, and the presentation size of the adjusted set of interface elements 530 is smaller than that of the set of interface elements 410. The configuration and detection of the fourth operation may refer to the third operation described above and will not be repeated here.

虚拟场景可用于多个不同用户的交互。为兼顾呈现元素共享与个性化的呈现元素配置需求,在一些实施例中,生成设备通常会设置两个通信域,例如“公域”和“私域”。其中,公域中的呈现元素可被加入、关联至虚拟场景,供虚拟场景中的所有用户观看;私域与用户对应地配置,私域中的呈现元素仅可被特定于该私域的用户观看。A virtual scene may be used for interaction among multiple different users. To balance the sharing of presentation elements with the need for personalized presentation element configuration, in some embodiments the generating device typically sets up two communication domains, such as a "public domain" and a "private domain". Presentation elements in the public domain can be added and associated to the virtual scene for viewing by all users in the virtual scene, while a private domain is configured per user, and presentation elements in a private domain can only be viewed by the user specific to that private domain.

在一些实施例中,用户111为第一用户,虚拟场景120还与第二用户相关联,其中至少一个界面元素被呈现在虚拟场景中特定于第一用户的显示域,以便于用户进行个性化配置的同时,避免对虚拟场景中其他用户产生干扰。In some embodiments, the user 111 is a first user, and the virtual scene 120 is further associated with a second user, wherein at least one interface element is presented in a display domain specific to the first user in the virtual scene, so that the user can perform personalized configuration while avoiding interference with other users in the virtual scene.

图6中示出了根据本公开的一些实施例的用于在虚拟场景中交互的示例600。在示例600中虚拟场景120配置为虚拟会议室,虚拟场景120中呈现有上述各实施例中说明的第一用户111的虚拟化身610,以及与虚拟场景120关联的第二用户的虚拟化身620。为方便理解,在此示例中,虚拟场景120中仅呈现“公域”中的元素。显示域621可呈现上述“公域”中的元素以及特定于第二用户的“私域”中的元素。显示域611可呈现上述“公域”中的元素以及特定于第一用户111的“私域”中的元素。例如,显示域611可中同时呈现“公域”中的共享文档630,以及一组界面元素410。相应地,显示域621中仅呈现位于“公域”中的共享文档630,并不呈现特定于用户111的显示域611中的一组页面元素410。FIG6 shows an example 600 for interaction in a virtual scene according to some embodiments of the present disclosure. In example 600, the virtual scene 120 is configured as a virtual conference room, and the virtual avatar 610 of the first user 111 described in the above embodiments and the virtual avatar 620 of the second user associated with the virtual scene 120 are presented in the virtual scene 120. For ease of understanding, in this example, only elements in the "public domain" are presented in the virtual scene 120. The display domain 621 can present the elements in the "public domain" and the elements in the "private domain" specific to the second user. The display domain 611 can present the elements in the "public domain" and the elements in the "private domain" specific to the first user 111. For example, the display domain 611 can simultaneously present a shared document 630 in the "public domain" and a group of interface elements 410. Accordingly, only the shared document 630 in the "public domain" is presented in the display domain 621, and a group of page elements 410 in the display domain 611 specific to the user 111 are not presented.

在一些实施例中,若虚拟场景中配置有“公域”、“私域”,在界面元素包括工具图标的情况下,工具图标所对应的工具可为允许用户所使用的、“公域”下的功能,以便于用户快捷地操控“公域”中的呈现内容。例如,在上述示例600中,一组界面元素410中可包括用于实现文档分享的工具的工具图标,用户111在使用该文档分享工具后,可将文档投送至“公域”进行呈现,例如共享文档630。在此情况下,用户111可观看“公域”中的共享文档630。在一些实施例中,在工具图标所对应的工具为允许用户所使用的、“公域”下的功能的情况下,还可对应地配置虚拟场景中实现该功能的虚拟物体。例如,在实现共享文档这一示例中,文档分享工具可确定用于呈现共享文档630的虚拟物体,例如,虚拟会议室中的“公域”下的虚拟会议屏幕、虚拟桌面。In some embodiments, if the virtual scene is configured with a "public domain" and a "private domain", and where an interface element includes a tool icon, the tool corresponding to the tool icon may be a function under the "public domain" that the user is allowed to use, so that the user can quickly manipulate the presented content in the "public domain". For example, in the example 600 above, the set of interface elements 410 may include a tool icon for a document sharing tool; after using the document sharing tool, the user 111 can deliver a document to the "public domain" for presentation, such as the shared document 630. In this case, the user 111 can view the shared document 630 in the "public domain". In some embodiments, where the tool corresponding to the tool icon is a function under the "public domain" that the user is allowed to use, a virtual object in the virtual scene that implements the function may also be configured correspondingly. For example, in the document sharing example, the document sharing tool may determine the virtual object used to present the shared document 630, such as a virtual conference screen or virtual desktop under the "public domain" in the virtual conference room.

在一些实施例中,虚拟场景中的同一虚拟物体可同时用于呈现“公域”和“私域”下的内容。例如,虚拟物体121(例如,虚拟桌面)对于用户111可同时呈现一组界面元素410以及共享文档630。In some embodiments, the same virtual object in a virtual scene can be used to present content in both the “public domain” and the “private domain”. For example, a virtual object 121 (e.g., a virtual desktop) can simultaneously present a set of interface elements 410 and a shared document 630 to a user 111.
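上述“公域”/“私域”的可见性规则可用如下草图表达(元素的数据结构为本文为说明而假设)。The public/private domain visibility rule above can be sketched as follows (the element data structure is an illustrative assumption):

```python
def visible_elements(elements, viewer_id):
    """elements: (元素, 通信域, 所属用户) 三元组列表。公域元素对虚拟场景中的
    所有用户可见, 私域元素仅对特定于该私域的用户可见。
    Public-domain elements are visible to every user in the scene; private-domain
    elements only to the user the private domain is specific to."""
    return [e for e, domain, owner in elements
            if domain == "public" or owner == viewer_id]
```

例如,公域中的共享文档630对第一用户与第二用户均可见,而特定于第一用户的一组界面元素410仅出现在第一用户的显示域中。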

根据本公开的实施例在虚拟场景中绘制虚拟物体,虚拟物体是基于物理场景中的真实物体而生成的。检测物理场景中的用户关于真实物体所做出的预定操作。以及响应于检测到预定操作,操控虚拟场景中与虚拟物体相关联的至少一个界面元素。由此,利用物理场景中的真实物体实现用户虚拟场景下的交互,提升用户的交互体验。According to an embodiment of the present disclosure, a virtual object is drawn in a virtual scene, and the virtual object is generated based on a real object in a physical scene. A predetermined operation performed by a user in the physical scene on the real object is detected. And in response to detecting the predetermined operation, at least one interface element associated with the virtual object in the virtual scene is manipulated. Thus, the real object in the physical scene is used to realize the interaction of the user in the virtual scene, thereby improving the user's interactive experience.

示例装置和设备Example devices and equipment

图7示出了根据本公开的某些实施例的用于在虚拟场景中交互的装置700的示意性结构框图。装置700可以被实现为XR设备112或者被包括在XR设备112中。装置700中的各个模块/组件可以由硬件、软件、固件或者它们的任意组合来实现。FIG. 7 shows a schematic structural block diagram of an apparatus 700 for interacting in a virtual scene according to some embodiments of the present disclosure. The apparatus 700 may be implemented as, or included in, the XR device 112. Each module/component in the apparatus 700 may be implemented by hardware, software, firmware, or any combination thereof.

如图所示,装置700包括绘制单元710,被配置为在虚拟场景中绘制虚拟物体,虚拟物体是基于物理场景中的真实物体而生成的。装置700还包括检测单元720,被配置为检测物理场景中的用户关于真实物体所做出的预定操作。以及装置700还包括操控单元730,被配置为响应于检测到预定操作,操控虚拟场景中与虚拟物体相关联的至少一个界面元素。As shown in the figure, the device 700 includes a drawing unit 710, which is configured to draw a virtual object in a virtual scene, and the virtual object is generated based on a real object in a physical scene. The device 700 also includes a detection unit 720, which is configured to detect a predetermined operation performed by a user on a real object in the physical scene. And the device 700 also includes a manipulation unit 730, which is configured to manipulate at least one interface element associated with the virtual object in the virtual scene in response to detecting the predetermined operation.
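装置700的三个单元的协作方式可用如下草图表达(接口形式为本文为说明而假设,并非本公开的实现)。The cooperation of the three units of apparatus 700 can be sketched as follows (the interface shapes are illustrative assumptions, not the disclosed implementation):

```python
class InteractionApparatus:
    """示意性地对应装置700: 绘制单元710、检测单元720、操控单元730。
    Illustrative counterpart of apparatus 700 with its three units."""

    def __init__(self, draw_unit, detect_unit, manipulate_unit):
        self.draw_unit = draw_unit
        self.detect_unit = detect_unit
        self.manipulate_unit = manipulate_unit

    def step(self, physical_frame):
        # 绘制单元: 基于物理场景中的真实物体生成虚拟物体
        virtual_object = self.draw_unit(physical_frame)
        # 检测单元: 检测用户关于真实物体所做出的预定操作
        operation = self.detect_unit(physical_frame)
        if operation is None:
            return None
        # 操控单元: 响应于预定操作, 操控与虚拟物体相关联的界面元素
        return self.manipulate_unit(virtual_object, operation)
```

仅在检测到预定操作时才触发操控,与框210至框230的流程对应。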

在一些实施例中,检测单元720包括:位置检测子单元,检测用户的至少一个肢体部分与真实物体的相对位置;操作检测子单元基于相对位置,检测物理场景中的用户关于真实物体所做出的预定操作。In some embodiments, the detection unit 720 includes: a position detection subunit that detects the relative position of at least one body part of the user and the real object; and an operation detection subunit that detects a predetermined operation performed by the user on the real object in the physical scene based on the relative position.

在一些实施例中,操控单元730进一步被配置为,响应于检测到用户关于真实物体所做出的第一操作,在虚拟场景中与虚拟物体相关联地呈现一组界面元素。In some embodiments, the manipulation unit 730 is further configured to, in response to detecting a first operation performed by the user on the real object, present a group of interface elements in the virtual scene in association with the virtual object.

In some embodiments, the first operation indicates that at least one finger of the user has been in contact with the real object for a predetermined duration.

In some embodiments, the manipulation unit 730 is further configured to, in response to detecting a second operation performed by the user on the real object, change a display position of a group of interface elements associated with the virtual object to determine a selected target interface element in the group of interface elements.

In some embodiments, the second operation indicates a slide of at least one finger of the user in a first direction relative to the real object.

In some embodiments, the apparatus 700 further includes a presentation unit configured to present a sub-interface corresponding to the target interface element.

In some embodiments, the manipulation unit 730 is further configured to, in response to detecting a third operation performed by the user on the real object, stop presenting the sub-interface corresponding to the target interface element in the group of interface elements.

In some embodiments, the third operation indicates a slide of at least one finger of the user in a second direction relative to the real object.
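To make the three operations concrete, the following sketch classifies a short sequence of touch samples into a press held for a predetermined duration (first operation), a slide in a first direction (second operation), or a slide in a second direction (third operation). The thresholds, the axis conventions, and the sample format are illustrative assumptions, not taken from the disclosure.

```python
# Hypothetical gesture classification for the detection unit: each sample is
# a (timestamp_s, x_m, y_m) touch point on the real object's surface.
HOLD_DURATION_S = 0.5   # assumed "predetermined duration" for a hold
SLIDE_THRESHOLD_M = 0.03  # assumed travel that counts as a slide

def classify_operation(samples):
    """Return which predetermined operation (if any) the samples indicate."""
    if len(samples) < 2:
        return None
    t0, x0, y0 = samples[0]
    t1, x1, y1 = samples[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= SLIDE_THRESHOLD_M and abs(dx) >= abs(dy):
        return "second_operation"  # slide along the (assumed) first direction
    if abs(dy) >= SLIDE_THRESHOLD_M:
        return "third_operation"   # slide along the (assumed) second direction
    if t1 - t0 >= HOLD_DURATION_S:
        return "first_operation"   # stationary contact held long enough
    return None
```

A real system would smooth the samples and handle lift-off events; the sketch only shows how the duration and direction criteria described above could drive the manipulation unit's state changes.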

In some embodiments, the predetermined operation includes an operation performed on an interactive surface of the real object, and the virtual object includes a surface element corresponding to the interactive surface of the real object.

In some embodiments, the user is a first user, the virtual scene is further associated with a second user, and the at least one interface element is presented in a display field of the virtual scene that is specific to the first user.
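The user-specific display field can be illustrated with a simple visibility filter: elements owned by the first user are rendered only in that user's view, while shared scene content remains visible to both users. The data model below is a hypothetical sketch, not the patented implementation.

```python
# Hypothetical per-user rendering filter: each scene element carries an
# optional "owner"; None means the element is shared by all users.
def visible_elements(scene_elements, viewer_id):
    """Return the elements the given viewer may see: all shared elements,
    plus private elements whose owner is the viewer."""
    return [e for e in scene_elements
            if e.get("owner") is None or e["owner"] == viewer_id]
```

With such a filter, the interface elements triggered by the first user's operation would be tagged with that user's identifier, so the second user's rendered view of the same shared scene omits them.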

FIG. 8 shows a block diagram of a computing device 800 in which one or more embodiments of the present disclosure may be implemented. It should be understood that the computing device 800 shown in FIG. 8 is merely exemplary and should not constitute any limitation on the functionality and scope of the embodiments described herein. The computing device 800 shown in FIG. 8 may be used to implement the XR device 112 of FIG. 1.

As shown in FIG. 8, the computing device 800 takes the form of a general-purpose computing device. The components of the computing device 800 may include, but are not limited to, one or more processors or processing units 810, a memory 820, a storage device 830, one or more communication units 840, one or more input devices 850, and one or more output devices 860. The processing unit 810 may be a physical or virtual processor and is capable of performing various processes according to programs stored in the memory 820. In a multi-processor system, multiple processing units execute computer-executable instructions in parallel to improve the parallel processing capability of the computing device 800.

The computing device 800 typically includes multiple computer storage media. Such media may be any available media accessible to the computing device 800, including but not limited to volatile and non-volatile media, and removable and non-removable media. The memory 820 may be volatile memory (e.g., registers, cache, random access memory (RAM)), non-volatile memory (e.g., read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory), or some combination thereof. The storage device 830 may be a removable or non-removable medium, and may include a machine-readable medium, such as a flash drive, a magnetic disk, or any other medium, that can be used to store information and/or data (e.g., training data for training) and that can be accessed within the computing device 800.

The computing device 800 may further include additional removable/non-removable, volatile/non-volatile storage media. Although not shown in FIG. 8, a magnetic disk drive for reading from or writing to a removable, non-volatile magnetic disk (e.g., a "floppy disk") and an optical disc drive for reading from or writing to a removable, non-volatile optical disc may be provided. In these cases, each drive may be connected to a bus (not shown) by one or more data media interfaces. The memory 820 may include a computer program product 825 having one or more program modules configured to perform the various methods or actions of the various embodiments of the present disclosure.

The communication unit 840 enables communication with other computing devices over communication media. Additionally, the functions of the components of the computing device 800 may be implemented as a single computing cluster or as multiple computing machines capable of communicating over communication connections. Thus, the computing device 800 may operate in a networked environment using logical connections to one or more other servers, network personal computers (PCs), or another network node.

The input device 850 may be one or more input devices, such as a mouse, a keyboard, a trackball, and the like. The output device 860 may be one or more output devices, such as a display, a speaker, a printer, and the like. The computing device 800 may also, as needed, communicate via the communication unit 840 with one or more external devices (not shown) such as storage devices and display devices, with one or more devices that enable a user to interact with the computing device 800, or with any device (e.g., a network card, a modem) that enables the computing device 800 to communicate with one or more other computing devices. Such communication may be performed via an input/output (I/O) interface (not shown).

According to an exemplary implementation of the present disclosure, a computer-readable storage medium is provided, on which computer-executable instructions are stored, the computer-executable instructions being executed by a processor to implement the method described above. According to an exemplary implementation of the present disclosure, a computer program product is also provided, which is tangibly stored on a non-transitory computer-readable medium and includes computer-executable instructions that are executed by a processor to implement the method described above.

Various aspects of the present disclosure are described herein with reference to flowcharts and/or block diagrams of methods, apparatuses, devices, and computer program products implemented according to the present disclosure. It should be understood that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by computer-readable program instructions.

These computer-readable program instructions may be provided to a processing unit of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processing unit of the computer or other programmable data processing apparatus, create means for implementing the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams. These computer-readable program instructions may also be stored in a computer-readable storage medium, the instructions causing a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions that implement various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.

The computer-readable program instructions may be loaded onto a computer, other programmable data processing apparatus, or other devices, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other devices to produce a computer-implemented process, such that the instructions executed on the computer, other programmable data processing apparatus, or other devices implement the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.

The flowcharts and block diagrams in the accompanying drawings illustrate possible architectures, functions, and operations of systems, methods, and computer program products according to multiple implementations of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a portion of instructions, which contains one or more executable instructions for implementing the specified logical functions. In some alternative implementations, the functions noted in a block may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.

The implementations of the present disclosure have been described above. The foregoing description is exemplary, not exhaustive, and is not limited to the disclosed implementations. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described implementations. The terms used herein were chosen to best explain the principles of the implementations, their practical applications, or improvements over technologies in the market, or to enable others of ordinary skill in the art to understand the implementations disclosed herein.

Claims (14)

1. A method for interacting in a virtual scene, comprising:
drawing a virtual object in a virtual scene, the virtual object being generated based on a real object in a physical scene;
detecting a predetermined operation performed on the real object by a user in the physical scene; and
in response to detecting the predetermined operation, manipulating at least one interface element associated with the virtual object in the virtual scene.

2. The method of claim 1, wherein detecting the predetermined operation comprises:
detecting a relative position between at least one body part of the user and the real object; and
detecting, based on the relative position, the predetermined operation performed on the real object by the user in the physical scene.

3. The method of claim 1, wherein manipulating the at least one interface element associated with the virtual object in the virtual scene comprises:
in response to detecting a first operation performed by the user on the real object, presenting a group of interface elements in the virtual scene in association with the virtual object.

4. The method of claim 3, wherein the first operation indicates that at least one finger of the user has been in contact with the real object for a predetermined duration.

5. The method of claim 1, wherein manipulating the at least one interface element associated with the virtual object in the virtual scene comprises:
in response to detecting a second operation performed by the user on the real object, changing a display position of a group of interface elements associated with the virtual object to determine a selected target interface element in the group of interface elements.

6. The method of claim 5, wherein the second operation indicates a slide of at least one finger of the user in a first direction relative to the real object.

7. The method of claim 5, further comprising:
presenting a sub-interface corresponding to the target interface element.

8. The method of claim 7, wherein manipulating the at least one interface element associated with the virtual object in the virtual scene comprises:
in response to detecting a third operation performed by the user on the real object, stopping presenting the sub-interface corresponding to the target interface element in the group of interface elements.

9. The method of claim 8, wherein the third operation indicates a slide of at least one finger of the user in a second direction relative to the real object.

10. The method of claim 1, wherein the predetermined operation comprises an operation performed on an interactive surface of the real object, and the virtual object comprises a surface element corresponding to the interactive surface of the real object.

11. The method of any one of claims 1 to 10, wherein the user is a first user, the virtual scene is further associated with a second user, and the at least one interface element is presented in a display field of the virtual scene that is specific to the first user.

12. An apparatus for interacting in a virtual scene, comprising:
a drawing unit configured to draw a virtual object in a virtual scene, the virtual object being generated based on a real object in a physical scene;
a detection unit configured to detect a predetermined operation performed on the real object by a user in the physical scene; and
a manipulation unit configured to, in response to detecting the predetermined operation, manipulate at least one interface element associated with the virtual object in the virtual scene.

13. An electronic device, comprising:
at least one processing unit; and
at least one memory coupled to the at least one processing unit and storing instructions for execution by the at least one processing unit, the instructions, when executed by the at least one processing unit, causing the electronic device to perform the method of any one of claims 1 to 11.

14. A computer-readable storage medium having a computer program stored thereon, the computer program being executable by a processor to implement the method of any one of claims 1 to 11.
CN202310152723.2A 2023-02-10 2023-02-10 Method, device, equipment and storage medium for interaction in a virtual scene Pending CN118484117A (en)

Publication: CN118484117A, published 2024-08-13
