
WO2022061858A1 - Method for assisting a first user provided with a terminal at a scene - Google Patents


Info

Publication number
WO2022061858A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
terminal
user
annotations
virtual reality
Prior art date
Application number
PCT/CN2020/118296
Other languages
French (fr)
Inventor
Kun QIAN
Zhihong Guo
Original Assignee
Orange
Priority date: 2020-09-28
Filing date: 2020-09-28
Publication date: 2022-03-31
Application filed by Orange filed Critical Orange
Priority to PCT/CN2020/118296 priority Critical patent/WO2022061858A1/en
Priority to PCT/IB2021/000672 priority patent/WO2022064278A1/en
Publication of WO2022061858A1 publication Critical patent/WO2022061858A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality


Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention relates to a method for assisting a first user provided with a terminal at a scene, characterized in that it comprises performing, by a processing unit of a virtual reality system connected to the terminal, the steps of: obtaining (a) a tridimensional model of the scene; displaying (b) said tridimensional model of the scene in virtual reality; obtaining (c) annotations of said tridimensional model of the scene inputted by a second user in order to assist the first user; and sending (d) to the terminal data describing said annotations, said data enabling enriching a view of the scene acquired by a camera of the terminal with said annotations.

Description

METHOD FOR ASSISTING A FIRST USER PROVIDED WITH A TERMINAL AT A SCENE

TECHNICAL FIELD
The field of this invention is that of customer remote assistance.
More particularly, the invention relates to a method for remotely assisting a user provided with a terminal at a scene.
BACKGROUND
The maintenance of complex electronic appliances or machines is generally difficult for non-professional customers and should preferably be performed by professional technicians on the spot. However, this is time-consuming and costly.
Alternatively, customers can call an assistance service, which provides for a technician to guide them orally. However, communication in such a situation is not easy and, when the maintenance operation is really complex, the customer may not succeed even with this guidance.
In order to improve the situation, AR customer remote guidance systems have been developed. In these systems, the customer captures a photo/video in the field, and this photo/video is either directly enriched by an AI tool, or transmitted to a remote technician. The technician can then edit the photo/video with some visual aids (e.g. notations or even virtual objects) and send it back to the customer, so that the customer can carry out some maintenance operations himself with the help of the edited photo/video.
While improving the situation, such a technical solution is not really user-friendly for customers, because they have to watch the edited photos/videos at the same time as they execute the maintenance operations, which may be impractical, especially when the maintenance operations are complex. Furthermore, it does not allow real-time interaction with the technician, and several exchanges of photos/videos may be necessary.
There is consequently a need for a remote assistance method allowing true real-time and more user-friendly interaction between a customer and a professional technician, so as to provide any customer with enough guidance to perform even complex maintenance operations.
SUMMARY OF THE INVENTION
For these purposes, the present invention provides a method for assisting a first user provided with a terminal at a scene, characterized in that it comprises performing, by a processing unit of a virtual reality system connected to the terminal, the steps of:
obtaining a tridimensional model of the scene;
displaying said tridimensional model of the scene in virtual reality;
obtaining annotations of said tridimensional model of the scene inputted by a second user in order to assist the first user; and
sending to the terminal data describing said annotations, said data enabling enriching a view of the scene acquired by a camera of the terminal with said annotations.
Preferred but non-limiting features of the present invention are as follows:
- The method comprises a previous step of generating, by a processing unit of the terminal, said tridimensional model of the scene;
- Said tridimensional model of the scene is generated from views of the scene acquired by said camera of the terminal;
- The method further comprises a step of generating, by a processing unit of the terminal, from a real view of the scene acquired by the camera, an enriched view of the scene using the data describing said annotations;
- The method further comprises a step of displaying by an interface of the terminal said enriched view of the scene.
- Said terminal is smart or AR glasses wearable by the first user.
- The virtual reality system comprises a virtual reality headset wearable by the second user, displaying said tridimensional model of the scene comprising rendering virtual reality views of the tridimensional model from the position of the virtual reality headset in space;
- The virtual reality system further comprises at least one motion controller, said annotations of said tridimensional model being inputted by the second user at the obtaining step using the motion controller;
- Said data describing said annotations sent at the sending step comprise coordinates of said annotations in the tridimensional model of the scene;
- Integrating the annotations into said real view in the generating step comprises mapping the captured view to the tridimensional model and calculating coordinates of the annotations in the real view from coordinates of the annotations in the tridimensional model.
In a second aspect, the invention provides a virtual reality system connectable to a terminal of a first user at a scene, the system comprising a processing unit configured to implement:
- obtaining a tridimensional model of the scene;
- displaying said tridimensional model of the scene in virtual reality;
- obtaining annotations of said tridimensional model of the scene inputted by a second user in order to assist the first user; and
- sending to the terminal data describing said annotations, said data enabling enriching a view of the scene acquired by a camera of the terminal with said annotations.
In a third aspect, the invention provides a terminal connectable to a virtual reality system, the terminal comprising a camera and a processing unit configured to implement:
- generating a tridimensional model of a scene acquired by said camera of the terminal;
- sending said tridimensional model to a virtual reality system able to display said tridimensional model in virtual reality;
- receiving, from said virtual reality system, data describing annotations, inputted by a second user, of said tridimensional model of the scene; and
- generating an enriched view of the scene, from a real view of the scene acquired by said camera of the terminal, using the data describing said annotations.
In a fourth aspect, the invention provides an assembly of a virtual reality system according to the second aspect and a terminal according to the third aspect.
According to a fifth and a sixth aspect, the invention provides a computer program product comprising code instructions for executing a method according to the first aspect for assisting a first user provided with a terminal at a scene; and a computer-readable medium on which is stored a computer program product comprising such code instructions.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other objects, features and advantages of this invention will be apparent in the following detailed description of an illustrative embodiment thereof, which is to be read in connection with the accompanying drawings wherein:
- figure 1 illustrates an example of architecture in which the method according to the invention is performed; and
- figures 2 and 3 are diagrams representing steps of a preferred embodiment of a method according to the invention.
DETAILED DESCRIPTION OF THE EMBODIMENTS
Remote assistance architecture
The present invention proposes a method for assisting a first user provided with a terminal 1 at a scene S, as represented by figure 1. As it will be explained later, the first user is to be assisted by a remote second user provided with a virtual reality system 2 connected to the terminal 1, for instance through a communication network 20 such as internet.
By “assisting” the first user, it is meant providing him/her with any relevant information that may be of interest to him/her at the scene S. In particular, the first user may be an inexperienced user (such as a customer) while the second user may be an experienced user (such as a professional technician).
The scene S designates the visible environment wherein the first user needs assistance, and typically comprises an appliance or a machine to be maintained, i.e. assistance is required for performing said maintenance. Generally speaking, the scene shall display something about which the first user seeks assistance from the second user.
For example, the following may be visible at the scene S, among others:
- the screen of a computer on which the first user wants to perform a given task;
- the back of a network appliance that the first user wants to use for connection;
- the internal mechanism of a faulty home appliance;
- an electronic circuitry to be wired;
- etc.
The present invention will not be limited to any kind of assistance and only relates to a technical solution for allowing the second user to easily and efficiently provide suitable information to the first user.
The terminal 1 is an electronic device designed for augmented reality (AR) display, in particular a hands-free AR device such as a pair of smart glasses or AR glasses, but could also be a smartphone, a tablet, a digital camera, or any other user terminal able to provide AR rendering.
The terminal 1 comprises at least a processing unit 11 (such as one or more processors), a user interface 13 for displaying images and/or videos (such as a screen or a projector), a camera 14 for acquiring images and/or videos, and possibly a storage unit 12 (a memory, for instance a flash memory).
The first user is supposed to hold the terminal 1 such that the camera 14 is directed toward the scene S.
The terminal 1 is preferably configured to output on said interface 13 a view of what is visible from the camera 14 (i.e. the scene S). To this end, the camera 14 is advantageously located on a front face of the terminal 1, while the display 13 is located on a rear face of the terminal 1. In the case of smart or AR glasses, the interface 13 is typically embedded in one lens, while the camera is either in the bridge over the nose or in an arm of the glasses.
Note that the present invention is not limited to any kind or architecture of terminal 1.
The method also involves a virtual reality (VR) system 2, i.e. a system able to generate a virtual reality environment for a second user. The VR system 2 comprises at least a processing unit 21 (typically one or more processors) and advantageously comprises a VR headset 23 wearable by the second user and/or at least one motion controller 24 (typically a pair of VR handles) which can be held by the second user. Such a motion controller 24 is able to track the position of a body part of the second user (a hand for instance) in space.
In more detail, the processing unit 21 is typically that of a server to which the VR headset 23 and the motion controller 24 are directly connected (for instance by a short-distance wireless connection such as Bluetooth, or by a wired connection such as USB). The system/server may also comprise a storage unit 22 (a memory, for instance a hard drive). In another embodiment, the processing unit 21 is part of the VR headset 23, and the VR system 2 then consists of the VR headset 23 directly connected to the motion controller 24.
As an alternative to the VR headset and/or handles, note that the VR system 2 may simply comprise a 2D display (a screen) with ordinary input means (a keyboard and a mouse), the second user being able to move and orient the tridimensional model and interact with it, even if this is less convenient than with full VR equipment.
Virtual reality systems are known to the skilled person, and any existing virtual reality system may be used for the present invention.
As explained, the VR system 2 is not located at the scene S, i.e. the VR system 2 is remotely connected to the terminal 1 through a communication network 20 such as internet.
Remote assistance method
With reference to figures 2 and 3, the present method first comprises a step (a) of obtaining, by the processing unit 21 of the VR system 2, a tridimensional model of the scene S (also referred to as the “3D model” in these figures).
This tridimensional model may already exist (for instance in the case of the maintenance of a known machine at a known place) and may already be stored in the storage unit 22 of the VR system 2.
Alternatively, the tridimensional model is received by the VR system 2 from the terminal 1. In other words, the method preferably comprises a previous step (a0) of generating, by the processing unit 11 of the terminal 1, a tridimensional model of the scene S. Once generated, this tridimensional model is transmitted from the terminal 1 to the VR system 2. These two steps may be triggered by the first user requiring assistance at the scene S.
In a known fashion, the tridimensional model of the scene S may be generated from views of the scene S acquired by said camera 14 of the terminal 1. Algorithms for constructing a 3D model from a plurality of 2D views are known to the skilled person (the first user may be requested to “scan” the scene S with the camera 14 until enough data has been acquired), and the present invention is not limited to any particular technique for generating such a tridimensional model from a captured scene. For instance, such a tridimensional model may be constructed using a 3D reconstruction algorithm such as:
- DynamicFusion (https://github.com/mihaibujanca/dynamicfusion);
- BundleFusion (http://graphics.stanford.edu/projects/bundlefusion/);
- Scene Reconstruction (http://qianyi.info/scene.html).
These algorithms may use data captured by a depth camera (RGB-D channel) such as Kinect™ or RealSense™. A minimal illustrative sketch of such a reconstruction is given below.
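By way of illustration only (this sketch is not part of the original disclosure), step (a0) could for instance be implemented with the open-source Open3D library, fusing RGB-D frames into a TSDF volume and extracting a mesh; the frame source `scan_frames()` and the per-frame camera poses are assumptions standing in for the terminal's capture and tracking code:

```python
# Minimal RGB-D fusion sketch for step (a0), using the Open3D library.
# scan_frames() is a hypothetical generator yielding (color, depth, pose),
# where color and depth are o3d.geometry.Image objects and pose is the 4x4
# camera-to-world matrix estimated by the terminal's tracking.
import numpy as np
import open3d as o3d

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.005,   # 5 mm voxels
    sdf_trunc=0.02,       # truncation distance of the signed distance field
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)

# The first user "scans" the scene S: each RGB-D frame is fused into the volume.
for color, depth, camera_pose in scan_frames():
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_trunc=3.0, convert_rgb_to_intensity=False)
    volume.integrate(rgbd, intrinsic, np.linalg.inv(camera_pose))

mesh = volume.extract_triangle_mesh()                # tridimensional model of the scene S
o3d.io.write_triangle_mesh("scene_model.ply", mesh)  # e.g. transmitted to the VR system 2
```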
In a following step (b), the tridimensional model of the scene S is displayed in virtual reality by the VR system 2, by means of the processing unit 21 of the VR system 2. This enables the second user, provided with the VR system 2, to watch the scene S as if he/she were at the scene S. Thanks to VR, the rendering of the tridimensional model of the scene S to the second user is preferably a function of his/her position in space.
More precisely, if the VR system 2 comprises a virtual reality headset 23 worn by the second user, displaying (b) said tridimensional model of the scene S typically comprises rendering virtual reality views of the tridimensional model from the position of the virtual reality headset 23 in space. Note that step (b) typically comprises preprocessing the tridimensional model by the processing unit 21, so as to allow real-time display to the second user. As already indicated, the tridimensional model may alternatively be displayed on an ordinary screen, the second user then moving and orienting the model, for instance with a mouse. A sketch of deriving such a headset-dependent view is given below.
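As a purely illustrative assumption about this rendering (the disclosure does not specify the implementation), the view matrix used to render the model can be derived from the headset pose, which real VR runtimes such as OpenXR expose through per-frame pose queries:

```python
# Sketch of step (b): deriving the view matrix used to render the tridimensional
# model from the pose of the VR headset 23. The pose source is hypothetical.
import numpy as np

def view_matrix_from_headset(rotation: np.ndarray, position: np.ndarray) -> np.ndarray:
    """rotation: 3x3 headset orientation, position: 3-vector, both in world space."""
    view = np.eye(4)
    view[:3, :3] = rotation.T              # inverse of a rotation is its transpose
    view[:3, 3] = -rotation.T @ position   # world -> headset (camera) space
    return view

# Each rendered frame transforms model vertices by this matrix before projection,
# so the second user sees the scene S from wherever his/her head actually is.
```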
In a further step (c), which is generally performed simultaneously with step (b), annotations of the tridimensional model of the scene S, inputted by the second user in order to assist the first user, are obtained by the processing unit 21 of the VR system 2.
By “annotations” is meant any visual aid that can be added to the tridimensional model, such as, among others:
- locations of key points;
- virtual objects;
- colors, shapes;
- notations (words, numbers);
- wirings;
- etc.
These annotations may be inputted in any way by the second user, for instance using a keyboard and/or a mouse.
Preferably, when the VR system 2 comprises at least one motion controller 24 which can be held by the second user, these annotations of the tridimensional model are inputted by the second user at step (c) using this motion controller 24. In other words, the second user “designates” a point in space in the tridimensional model with his/her movement to place an annotation at this point, for instance by pulling a trigger on the motion controller 24 once the point is reached by his/her hand, as sketched below.
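A minimal sketch of such trigger-driven placement, assuming a hypothetical VR-runtime callback `on_controller_event` and the annotation structure below (none of these names come from the original text):

```python
# Sketch of step (c): placing an annotation where the motion controller 24
# points when its trigger is pulled. Only the data flow is illustrated.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    position: tuple          # coordinates in the tridimensional model of the scene S
    kind: str = "keypoint"   # e.g. keypoint, virtual object, notation, wiring
    attributes: dict = field(default_factory=dict)  # color, size, text, ...

annotations = []

def on_controller_event(event):   # hypothetical VR-runtime callback
    if event.trigger_pressed:
        annotations.append(Annotation(
            position=tuple(event.hand_position),   # point "designated" by the hand
            attributes={"color": "red"}))
```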
In step (d), the processing unit 21 of the system 2 sends to the terminal 1 data describing the inputted annotations, these data enabling the enrichment of a real view of the scene S acquired by the camera 14 of the terminal 1 with these annotations.
The data describing the annotations may typically comprise coordinates of the annotations in the tridimensional model of the scene S and possibly further parameters (description of the type of annotation, attributes of the annotation such as a color or a size, etc.).
Step (d) is advantageously performed in real time, i.e. as soon as a new annotation is inputted by the second user (e.g. a new movement is performed by the user), data describing this annotation is directly sent to the terminal 1. The idea is to use the annotations inputted by the second user for augmented reality (AR) for the first user, and preferably live AR when the second user annotates the tridimensional model simultaneously. A real-time convergent AR/VR guidance solution is therefore proposed. A possible wire format for this exchange is sketched below.
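As a purely illustrative assumption about the wire format (the description only requires that coordinates and optional parameters reach the terminal 1), each annotation could be serialized as a JSON message and pushed over the network 20, for instance via a WebSocket:

```python
# Sketch of step (d): serializing a newly inputted annotation as a JSON message
# sent to the terminal 1. Field names and transport are assumptions.
import json

def annotation_message(annotation) -> str:
    return json.dumps({
        "type": annotation.kind,                   # e.g. "keypoint", "notation"
        "model_coordinates": annotation.position,  # coordinates in the 3D model
        "attributes": annotation.attributes,       # color, size, text, ...
    })

# e.g. websocket.send(annotation_message(new_annotation)) as soon as the
# second user places the annotation, enabling live AR on the terminal side.
```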
The method preferably comprises a further step (e) of generating, by the processing unit 11 of the terminal 1, from a real view of the scene S acquired by the camera 14 (e.g. an image or a video), an enriched view of the scene S using the data describing the annotations inputted by the second user. What is meant here by “real view” is the view (e.g. image or video) captured in real time by the camera 14. In other words, using the received data, the processing unit 11 integrates the received annotations into the real view, i.e. augments the real view (the enriched view may thus be referred to as an “augmented” view). The annotations may be rendered visually as an overlay on the real view captured by the camera 14; a minimal overlay sketch follows.
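Once the pixel position of an annotation in the real view is known (see the projection sketch further below), the overlay itself can be drawn on the camera frame. A minimal sketch using OpenCV, with illustrative names only (the disclosure does not mandate any particular drawing library):

```python
# Sketch of the overlay part of step (e): drawing an annotation on top of the
# real view captured by the camera 14.
import cv2

def draw_annotation(frame, uv, label="", color=(0, 0, 255)):
    """frame: BGR image from camera 14; uv: (u, v) integer pixel coordinates."""
    cv2.circle(frame, uv, radius=8, color=color, thickness=2)
    if label:
        cv2.putText(frame, label, (uv[0] + 10, uv[1]),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    return frame  # the enriched ("augmented") view displayed on interface 13
```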
The method may comprise a final step (f) of displaying, by the interface 13 of the terminal 1, said enriched view of the scene S, thereby providing the first user with a view of the scene S which is annotated almost in real time (i.e. taking into account the processing time at both the VR system 2 and the terminal 1 as well as the data transmission time on the communication network between them) by the second user.
In the case of a terminal 1 comprising smart or AR glasses, the annotations are thus rendered on a real view in the first user’s glasses, making it easier for the first user to execute the maintenance operations as he gets visual assistance in the “real world” rather than in an edited video.
Note that even with more common terminals such as a smartphone, the first user can still get useful visual assistance in real time, with the limitation that the first user still has to hold the terminal with one hand in front of the scene S while performing maintenance operations with the other hand, which is less convenient than when the terminal is a hands-free terminal such as smart or AR glasses.
Algorithms for augmentation of a real view of the scene S with annotations of a tridimensional model of the scene S are known to the skilled person, and the present invention is not limited to any technique.
As an example, integrating the annotations into the real view in step (e) may comprise mapping the real view to the tridimensional model, and calculating the coordinates of the annotations in the real view from their coordinates in the tridimensional model, as sketched below. The size/orientation of the annotations may also be adapted as a function of their coordinates.
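A minimal sketch of this calculation, assuming the view-to-model mapping yields a model-to-camera pose (R, t) and that the camera intrinsics K are known; this is a standard pinhole projection given here for illustration only, not the claimed method itself:

```python
# Sketch of step (e): projecting an annotation from model coordinates into
# pixel coordinates of the real view.
import numpy as np

def project_annotation(p_model: np.ndarray, K: np.ndarray,
                       R: np.ndarray, t: np.ndarray):
    """p_model: 3D annotation coordinates in the model; K: 3x3 intrinsics;
    R, t: model-to-camera rotation and translation from the view/model mapping."""
    p_cam = R @ p_model + t        # annotation in camera coordinates
    if p_cam[2] <= 0:
        return None                # behind the camera: not drawn in this view
    uv = K @ (p_cam / p_cam[2])    # perspective division, then intrinsics
    return int(uv[0]), int(uv[1])  # pixel where the overlay is drawn
```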
Terminal, system, assembly and computer program
In a second aspect, the present invention proposes a virtual reality system 2 comprising a processing unit 21 and possibly a storage unit 22, which may be connected to a terminal 1 of a first user at a scene S, and which is adapted for carrying out the method for assisting the first user as previously described.
The virtual reality system 2 advantageously comprises a VR headset 23 and/or at least one motion controller 24.
Said processing unit 21 is configured to implement:
- obtaining a tridimensional model of the scene S;
- displaying this tridimensional model of the scene S in virtual reality;
- obtaining annotations of said tridimensional model of the scene S inputted by the second user to assist the first user;
- sending to the terminal 1 data describing these annotations, these data enabling the enrichment of a view of the scene S acquired by the camera 14 of the terminal 1 with the inputted annotations.
In a third aspect, the present invention proposes a terminal 1, connectable to the aforementioned virtual reality system 2, the terminal 1 comprising a camera 14 and a processing unit 11 configured to implement:
- generating a tridimensional model of a scene S acquired by the camera 14 of the terminal 1;
- sending this tridimensional model to a virtual reality system 2 able to display said tridimensional model in virtual reality;
- receiving, from said virtual reality system 2, data describing annotations, inputted by a second user, of said tridimensional model of the scene S; and
- generating an enriched view of the scene S, from a real view of the scene S acquired by said camera 14 of the terminal 1, using said data describing the annotations.
This terminal 1 preferably comprises an interface 13 for displaying to the first user the enriched view of the scene S.
In a fourth aspect, the present invention proposes an assembly of the virtual reality system 2 and the terminal 1 which may be connected together (for example through a communication network 20) .
The invention further proposes a computer program product, comprising code instructions for executing (in particular with the processing unit 21 of the system 2) the above method for assisting a first user provided with a terminal 1 at a scene S; and a computer-readable medium (in particular the storage unit 22 of the system 2) on which is stored a computer program product comprising code instructions for executing said method.

Claims (15)

1. A method for assisting a first user provided with a terminal (1) at a scene (S), characterized in that it comprises performing, by a processing unit (21) of a virtual reality system (2) connected to the terminal (1), the steps of:
    obtaining (a) a tridimensional model of the scene (S) ;
    displaying (b) said tridimensional model of the scene (S) in virtual reality;
    obtaining (c) annotations of said tridimensional model of the scene (S) inputted by a second user in order to assist the first user; and
sending (d) to the terminal (1) data describing said annotations, said data enabling enriching a view of the scene (S) acquired by a camera (14) of the terminal (1) with said annotations.
  2. A method according to claim 1, comprising a previous step of generating (a0) , by a processing unit (11) of the terminal (1) , said tridimensional model of the scene (S) .
  3. A method according to claim 2, wherein said tridimensional model of the scene (S) is generated from views of the scene (S) acquired by said camera (14) of the terminal (1) .
  4. A method according to any one of claims 1 to 3, further comprising a step of generating (e) , by a processing unit (11) of the terminal (1) , from a real view of the scene (S) acquired by the camera (14) , an enriched view of the scene using the data describing said annotations.
  5. A method according to claim 4, further comprising a step of displaying (f) by an interface (13) of the terminal (1) said enriched view of the scene (S) .
  6. A method according to any one of claims 1 to 5, wherein said terminal (1) is smart or AR glasses wearable by the first user.
7. A method according to any one of claims 1 to 6, wherein the virtual reality system (2) comprises a virtual reality headset (23) wearable by the second user, displaying (b) said tridimensional model of the scene (S) comprising rendering virtual reality views of the tridimensional model from the position of the virtual reality headset (23) in space.
8. A method according to claim 7, wherein the virtual reality system (2) further comprises at least one motion controller (24), said annotations of said tridimensional model being inputted by the second user at step (c) using the motion controller (24).
  9. A method according to any one of claims 1 to 8, wherein said data describing said annotations sent at step (d) comprise coordinates of said annotations in the tridimensional model of the scene (S) .
  10. A method according to claims 4 and 9 in combination, wherein integrating the annotations into said real view in step (e) comprises mapping the captured view to the tridimensional model and calculating coordinates of the annotations in the real view from coordinates of the annotations in the tridimensional model.
  11. A virtual reality system (2) connectable to a terminal (1) of a first user at a scene (S) , the system (2) comprising a processing unit (21) configured to implement:
    - obtaining a tridimensional model of the scene (S) ;
    - displaying said tridimensional model of the scene (S) in virtual reality;
    - obtaining annotations of said tridimensional model of the scene (S) inputted by a second user in order to assist the first user; and
- sending to the terminal (1) data describing said annotations, said data enabling enriching a view of the scene (S) acquired by a camera (14) of the terminal (1) with said annotations.
  12. A terminal (1) connectable to a virtual reality system (2) , the terminal (1) comprising a camera (14) and a processing unit (11) configured to implement:
    - generating a tridimensional model of a scene (S) acquired by said camera (14) of the terminal (1) ;
    - sending said tridimensional model to a virtual reality system (2) able to display said tridimensional model in virtual reality;
    - receiving, from said virtual reality system (2) , data describing annotations, inputted by a second user, of said tridimensional model of the scene (S) ; and
- generating (e) an enriched view of the scene (S), from a real view of the scene (S) acquired by said camera (14) of the terminal (1), using the data describing said annotations.
  13. An assembly of a virtual reality system (2) according to claim 11 and a terminal (1) according to claim 12.
  14. A computer program product, comprising code instructions for executing a method according to any one of claims 1 to 10 for assisting a first user provided with a terminal (1) at a scene (S) , when executed by a processing unit.
  15. A computer-readable medium, on which is stored a computer program product comprising code instructions for executing a method according to any one of claims 1 to 10 for assisting a first user provided with a terminal (1) at a scene (S) .
PCT/CN2020/118296 2020-09-28 2020-09-28 Method for assisting a first user provided with a terminal at a scene WO2022061858A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/118296 WO2022061858A1 (en) 2020-09-28 2020-09-28 Method for assisting a first user provided with a terminal at a scene
PCT/IB2021/000672 WO2022064278A1 (en) 2020-09-28 2021-09-24 Method for assisting a first user provided with a terminal at a scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/118296 WO2022061858A1 (en) 2020-09-28 2020-09-28 Method for assisting a first user provided with a terminal at a scene

Publications (1)

Publication Number Publication Date
WO2022061858A1 (en) 2022-03-31

Family

ID=78483413

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2020/118296 WO2022061858A1 (en) 2020-09-28 2020-09-28 Method for assisting a first user provided with a terminal at a scene
PCT/IB2021/000672 WO2022064278A1 (en) 2020-09-28 2021-09-24 Method for assisting a first user provided with a terminal at a scene

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/IB2021/000672 WO2022064278A1 (en) 2020-09-28 2021-09-24 Method for assisting a first user provided with a terminal at a scene

Country Status (1)

Country Link
WO (2) WO2022061858A1 (en)


Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8830267B2 (en) * 2009-11-16 2014-09-09 Alliance For Sustainable Energy, Llc Augmented reality building operations tool
US10354441B2 (en) * 2016-05-20 2019-07-16 ETAK Systems, LLC Augmented reality systems and methods for telecommunications site modeling
US11064009B2 (en) * 2015-08-19 2021-07-13 Honeywell International Inc. Augmented reality-based wiring, commissioning and monitoring of controllers
EP3165979B1 (en) * 2015-11-05 2020-12-30 Rohde & Schwarz GmbH & Co. KG Providing mounting information
TW201828259A (en) * 2017-01-17 2018-08-01 江俊昇 Architectural Planning method to integrate into a 3D stereoscopic Architectural Planning image having both 3D stereoscopic image model with live scene and a 3D stereoscopic virtual building
DE102017010190A1 (en) * 2017-10-30 2019-05-02 Sartorius Stedim Biotech Gmbh Method for virtual configuration of a device, computer program product and corresponding augmented reality system
US10760815B2 (en) * 2017-12-19 2020-09-01 Honeywell International Inc. Building system commissioning using mixed reality
US20210201273A1 (en) * 2018-08-14 2021-07-01 Carrier Corporation Ductwork and fire suppression system visualization

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9088787B1 (en) * 2012-08-13 2015-07-21 Lockheed Martin Corporation System, method and computer software product for providing visual remote assistance through computing systems
US20160358383A1 (en) * 2015-06-05 2016-12-08 Steffen Gauglitz Systems and methods for augmented reality-based remote collaboration
CN107491174A (en) * 2016-08-31 2017-12-19 中科云创(北京)科技有限公司 Method, apparatus, system and electronic equipment for remote assistance
CN106339094A (en) * 2016-09-05 2017-01-18 山东万腾电子科技有限公司 Interactive remote expert cooperation maintenance system and method based on augmented reality technology
CN107395671A (en) * 2017-06-12 2017-11-24 深圳增强现实技术有限公司 Remote assistance method, system and augmented reality terminal
CN109982024A (en) * 2019-04-03 2019-07-05 阿依瓦(北京)技术有限公司 Video pictures share labeling system and shared mask method in a kind of remote assistance

Also Published As

Publication number Publication date
WO2022064278A1 (en) 2022-03-31

Similar Documents

Publication Publication Date Title
CN109069103B (en) Ultrasound imaging probe positioning
US11887234B2 (en) Avatar display device, avatar generating device, and program
US11842437B2 (en) Marker-less augmented reality system for mammoplasty pre-visualization
CN110599603A (en) Mechanical equipment visual interaction and equipment state monitoring system and method based on augmented reality
EP2568355A2 (en) Combined stereo camera and stereo display interaction
JP2021500690A (en) Self-expanding augmented reality-based service instruction library
CN108830894A (en) Remote guide method, apparatus, terminal and storage medium based on augmented reality
JP2011028309A5 (en)
CN102668556A (en) Medical support apparatus, medical support method, and medical support system
JP2008210276A (en) Method and device for generating three-dimensional model information
US11442685B2 (en) Remote interaction via bi-directional mixed-reality telepresence
CN104376193A (en) Managing dental photographs acquired by portable computing devices
JP2008140271A (en) Interactive device and method thereof
CN112667179B (en) Remote synchronous collaboration system based on mixed reality
CN106454311A (en) LED three-dimensional imaging system and method
CN105791390A (en) Data transmission method, device and system
Leutert et al. Projector-based augmented reality for telemaintenance support
WO2022061858A1 (en) Method for assisting a first user provided with a terminal at a scene
JP7279113B2 (en) IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, COMPUTER PROGRAM
JP2015184986A (en) Mixed reality sharing device
JP4424111B2 (en) Model creation device and data distribution system
CN114979568A (en) Remote operation guidance method based on augmented reality technology
JP2019040356A (en) Image processing system, image processing method and computer program
CN110211238A (en) Display methods, device, system, storage medium and the processor of mixed reality
JP2021131490A (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20954709

Country of ref document: EP

Kind code of ref document: A1