
WO2020235541A1 - Image interface device, image manipulation device, manipulation-object manipulation device, manipulation-object manipulation system, manipulation-object presentation method, and manipulation-object presentation program - Google Patents


Info

Publication number
WO2020235541A1
WO2020235541A1 · PCT/JP2020/019706
Authority
WO
WIPO (PCT)
Prior art keywords
image
surrogate
user
surrogate body
display unit
Prior art date
Application number
PCT/JP2020/019706
Other languages
French (fr)
Japanese (ja)
Inventor
稲見 昌彦
厚史 泉原
岡本 直樹
敦 檜山
智也 佐々木
将拓 荻野
Original Assignee
The University of Tokyo
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The University of Tokyo
Priority to JP2021520790A (granted as JP7536312B2)
Publication of WO2020235541A1

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00: Controls for manipulators
    • B25J13/06: Control stands, e.g. consoles, switchboards

Definitions

  • The present invention relates to an image interface device, an image operation device, an operation object operation device, an operation object operation system, an operation object presentation method, and an operation object presentation program.
  • Telexistence is one such technology for controlling a remote robot as if it were at hand (see, for example, Non-Patent Document 1).
  • In telexistence, a remote robot is controlled and manipulated based on information obtained from sensors mounted on the robot.
  • The robot's movements are mapped to the user's body movements, giving the user the sensation of "possessing" the robot.
  • The user can therefore operate the robot while feeling that something in a remote place is close at hand.
  • In conventional telexistence, however, the size of the user relative to the size of the robot is fixed at a predetermined value. Therefore, when the size, movement, or position differs from robot to robot, the user cannot operate each robot without discomfort.
  • The present invention has been made in view of this situation, and its object is to provide a technique for operating, without discomfort, an operation object such as a robot having any of various sizes, positions, movements, or motion characteristics.
  • To solve the above problem, an image interface device according to one aspect of the present invention includes an image display unit that displays an image of an operation object operated by a user, a surrogate body drawing unit that draws a virtual surrogate body of the user on the image display unit, and an operation signal input unit that receives an operation signal from the user for the operation object.
  • The size of the surrogate body is variable according to the size of the operation object.
  • The image of the operation object displayed on the image display unit is a subjective image seen from the surrogate body.
  • The surrogate body of one embodiment has a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.
  • The surrogate body of one embodiment has a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.
  • The position of the surrogate body of one embodiment may be variable according to the position of the operation object.
  • The surrogate body of one embodiment may move on the image display unit together with the operation performed on the image of the operation object. At this time, the movement speed of the surrogate body on the image display unit may be variable according to the movement speed of the operation object.
  • The surrogate body of one embodiment may have a variable position or orientation with respect to the operation object.
  • The operation object of one embodiment may include a plurality of operation objects of different sizes. When these are switched and operated in succession, the size of the surrogate body may change continuously.
  • The surrogate body of one embodiment may include two surrogate bodies of different sizes.
  • The image of the operation object displayed on the image display unit may be a mixture of image components of the operation object seen from the two surrogate bodies of different sizes.
  • The image of the operation object of one embodiment may be an image captured by a camera installed in the vicinity of the operation object.
  • Another aspect of the present invention is an image operation device. This image operation device includes an image display unit that displays an image of an operation object operated by a user, a surrogate body drawing unit that draws a virtual surrogate body of the user on the image display unit, an operation signal input unit that receives an operation signal from the user for the operation object, and an operation unit that generates the operation signal in response to the user's operation.
  • The size of the surrogate body is variable according to the size of the operation object.
  • The image of the operation object displayed on the image display unit is a subjective image seen from the surrogate body.
  • The surrogate body of one embodiment has a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.
  • The surrogate body of one embodiment has a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.
  • Yet another aspect of the present invention is an operation object operation device. This device includes an image display unit that displays an image of the operation object operated by a user, a surrogate body drawing unit that draws a virtual surrogate body of the user on the image display unit, an operation signal input unit that receives an operation signal from the user for the operation object, an operation unit that generates the operation signal in response to the user's operation, and an operation signal output unit that outputs the user's operation signal to the operation object.
  • The size of the surrogate body is variable according to the size of the operation object.
  • The image of the operation object displayed on the image display unit is a subjective image seen from the surrogate body.
  • Yet another aspect of the present invention is an operation object operation system. This system includes an image display unit that displays an image of the operation object operated by a user, a surrogate body drawing unit that draws a virtual surrogate body of the user on the image display unit, an operation signal input unit that receives an operation signal from the user for the operation object, an operation unit that generates the operation signal in response to the user's operation, an operation signal output unit that outputs the user's operation signal to the operation object, and a camera for photographing the operation object.
  • The size of the surrogate body is variable according to the size of the operation object.
  • The image of the operation object displayed on the image display unit is a subjective image seen from the surrogate body.
  • The surrogate body of one embodiment has a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.
  • The surrogate body of one embodiment has a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.
  • The camera of one embodiment may be provided with a moving mechanism for changing its position.
  • The camera of one embodiment may be provided with a rotation mechanism for changing its shooting direction.
  • The moving mechanism and rotation mechanism of one embodiment may move the camera in the direction opposite to the user's operation direction, and the image displayed on the image display unit may be a left-right inverted version of the image captured by the camera.
  • The camera of one embodiment may be a stereo camera comprising two cameras.
  • The inter-eye distance of the stereo camera of one embodiment may be variable according to the size of the surrogate body.
  • The camera of one embodiment may be a depth camera that detects the distance to the operation object in real time.
  • The camera of one embodiment may include a plurality of cameras that capture different fields of view.
  • Yet another aspect of the present invention is a method of presenting an operation object.
  • This method includes a step of acquiring information on the size of the operation object, a step of drawing a virtual surrogate body of the user, and a step of displaying an image of the operation object.
  • The size of the surrogate body is variable according to the size of the operation object.
  • The image of the operation object is a subjective image seen from the surrogate body.
  • The surrogate body of one embodiment has a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.
  • The surrogate body of one embodiment has a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.
  • Yet another aspect of the present invention is an operation object presentation program.
  • This program causes a computer to execute a step of acquiring information on the size of the operation object, a step of drawing a virtual surrogate body of the user, and a step of displaying an image of the operation object.
  • The size of the surrogate body is variable according to the size of the operation object.
  • The image of the operation object is a subjective image seen from the surrogate body.
  • The surrogate body of one embodiment has a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.
  • The surrogate body of one embodiment has a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.
  • According to the present invention, an operation object such as a robot having any of various sizes and motion characteristics can be operated without discomfort.
  • FIG. 1 shows a functional block diagram of the image interface device 1 according to the first embodiment.
  • The image interface device 1 includes an image display unit 11, a surrogate body drawing unit 14, and an operation signal input unit 15.
  • The image display unit 11 is a display that presents moving images, such as a liquid crystal display, a video projector, or a head-mounted display.
  • The image display unit 11 displays an image 10 of the operation object operated by the user 12.
  • A virtual surrogate body 13 of the user 12 is drawn on the image display unit 11 by the surrogate body drawing unit 14.
  • The image 10 of the operation object displayed on the image display unit 11 is a subjective image seen from the surrogate body 13. That is, the image 10 of the operation object is displayed as if seen through the eyes of the surrogate body 13 drawn on the image display unit 11.
  • The image 10 of the operation object may be a real image of a remote operation object captured by a camera or the like, or a virtual image from a simulator, a game, or the like.
  • The operation object is not limited to any particular kind; it may be any object to be operated by the user, such as a humanoid robot, a vehicle, or a medical device.
  • The surrogate body drawing unit 14 draws the virtual surrogate body 13 of the user 12 on the image display unit 11.
  • The operation signal input unit 15 receives an operation signal from the user 12 for the operation object.
  • The operation signal input to the operation signal input unit 15 is transmitted to the image display unit 11.
  • The image 10 of the operation object displayed on the image display unit 11 is operated by this operation signal.
  • The surrogate body 13 virtually substitutes for the body of the user 12 and, as described above, is drawn on the image display unit 11 by the surrogate body drawing unit 14. Seen from the user 12 operating the image interface device 1, the surrogate body 13 behaves like an alter ego of the user on the image display unit 11.
  • The surrogate body 13 may be a concrete image resembling the body of the user 12, or an abstract image such as a silhouette, line art, or a translucent figure.
  • The size of the surrogate body 13 is variable according to the size of the operation object. Typically, a larger operation object is paired with a larger surrogate body, and a smaller operation object with a smaller one.
  • Typically, the surrogate body 13 is drawn at substantially the same size as the operation object. For example, when the operation object is a heavy machine, the surrogate body 13 is drawn as a "giant" of roughly the same size as the heavy machine. Conversely, when the operation object is a micromachine, the surrogate body 13 is drawn as a "dwarf" of roughly the same size as the micromachine.
  • The image 10 of the operation object displayed on the image display unit 11 is a subjective image seen from the surrogate body 13. As a result, the user can operate the image 10 from the standpoint of a surrogate body 13 whose size corresponds to that of the operation object.
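As a concrete illustration of this size relationship, a sketch under assumed constants (not taken from the patent text) of how the surrogate's scale and subjective viewpoint height could be derived from the operation object's size; `USER_HEIGHT_M`, `EYE_RATIO`, and the function names are all hypothetical:

```python
USER_HEIGHT_M = 1.7   # assumed nominal height of the user's own body
EYE_RATIO = 0.94      # assumed fraction of body height at which the eyes sit

def surrogate_scale(object_height_m: float) -> float:
    """Scale factor that makes the surrogate roughly the same size as the object."""
    return object_height_m / USER_HEIGHT_M

def eye_height_m(object_height_m: float) -> float:
    """Height of the subjective viewpoint: the eyes of the scaled surrogate."""
    return surrogate_scale(object_height_m) * USER_HEIGHT_M * EYE_RATIO

# A 6 m heavy machine yields a "giant" surrogate; a 5 cm micromachine a "dwarf".
giant_scale = surrogate_scale(6.0)   # > 1: drawn larger than life
dwarf_scale = surrogate_scale(0.05)  # < 1: drawn much smaller than life
```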
  • If the operation object is a micromachine, the user's own body is far too large for it: the minute movements of the micromachine differ greatly from the movements of the user's hand, and the user feels unable to operate it as desired.
  • By drawing a surrogate body of the same size as the micromachine on the screen and operating the micromachine from the standpoint of this surrogate body, the fine movements of the micromachine appear to match the movements of one's own hand, and operation without discomfort becomes possible.
  • The surrogate body 13 may have a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.
  • That is, the surrogate body 13 may take various sizes for a single operation object depending on the intended use and the situation.
  • For example, the surrogate body 13 may be drawn with four heights: a "very large giant" about 10 m tall, a "giant" about as tall as the heavy machine, a "small giant" about 4 m tall, and a "human" with a height in the 1 m range.
  • When the surrogate body 13 is drawn as a "human", a highly accurate feeling of operation is obtained, although there is some discomfort due to the difference in scale between the surrogate body 13 and the operation object.
  • When the surrogate body 13 is drawn as a "very large giant", a feeling of operating on a large scale is obtained, although there is some discomfort due to the surrogate body 13 being larger than the operation object.
  • Because the surrogate body 13 is given a plurality of sizes that can be dynamically switched, the user can set the size of the surrogate body 13 appropriately according to the operation, its purpose, the situation of the operation object, the user's own preference, and so on.
  • When the surrogate body 13 has a plurality of dynamically switchable sizes, it may also have a plurality of positions with respect to the operation object. The position of the surrogate body 13 may then be dynamically switchable according to its size.
  • Where the surrogate body 13 should be positioned with respect to the operation object depends on the size of the surrogate body 13.
  • For example, when the surrogate body 13 has the sizes "very large giant", "giant", "small giant", and "human", it may have, for each size, a plurality of height positions ranging from the top to the bottom of the operation object.
  • When the surrogate body 13 is drawn as a "giant", the surrogate body 13 and the operation object are substantially the same size, so the height position of the surrogate body 13 may be substantially the same as that of the operation object.
  • When the surrogate body 13 is drawn as a "human", matching its center position with the operating point of the operation object (the tip of an arm, the end effector of a robot, etc.) enables more accurate operation.
  • Because the surrogate body 13 is given a plurality of dynamically switchable positions with respect to the operation object, the user can position the surrogate body 13 accurately according to its size.
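One way the size presets and their default positions described above might be paired, sketched for a roughly 6 m machine; the preset names echo the text, but every value and the alignment rules are illustrative assumptions:

```python
# Hypothetical presets pairing each surrogate size with a default center
# position relative to the operation object (values illustrative only).
PRESETS = {
    "very_large_giant": {"height_m": 10.0, "align": "above"},
    "giant":            {"height_m": 6.0,  "align": "match"},
    "small_giant":      {"height_m": 4.0,  "align": "match"},
    "human":            {"height_m": 1.7,  "align": "effector"},
}

def switch_preset(name: str, object_top_m: float, effector_height_m: float):
    """Return (surrogate height, surrogate center height) for a preset."""
    p = PRESETS[name]
    if p["align"] == "effector":
        center = effector_height_m            # center the "human" on the end effector
    elif p["align"] == "match":
        center = p["height_m"] / 2            # stand on the ground beside the object
    else:
        center = object_top_m + p["height_m"] / 2  # look down from above the object
    return p["height_m"], center
```

Switching presets at run time then amounts to a single dictionary lookup, which keeps the dynamic switching described above cheap.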
  • The position of the surrogate body 13 may be variable according to the position of the operation object.
  • When a user operates an operation object displayed on the screen, being positioned in an inappropriate place with respect to that object feels unnatural.
  • In the real world, for example, when an object is located near a wall, an operator would not work from the narrow gap between the object and the wall.
  • By locating the surrogate body 13 at a position that is easy to operate from, according to the position of the operation object, the user can operate without discomfort.
  • The surrogate body 13 may move on the image display unit together with the operation performed on the image 10 of the operation object.
  • At this time, the movement speed of the surrogate body 13 may be variable according to the movement speed of the operation object.
  • Typically, the surrogate body 13 is drawn so that the larger the operation object, the slower its movement, and the smaller the operation object, the faster its movement.
  • When the arm of a heavy machine is moved by hand on the image display unit as described above, the user would otherwise feel that the movement of his or her own hand is too fast for the heavy arm.
  • Conversely, when the user moves a micromachine by hand on the image display unit, the user would otherwise feel that the movement of his or her own hand is too slow compared to that of the micromachine.
  • If the hand of the surrogate body 13 drawn on the image display unit moves quickly in response to the movement of the micromachine, the user can operate the micromachine without discomfort.
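The speed matching above can be sketched as a simple gain tied to the surrogate's scale; this is an assumed mapping, not a formula from the patent:

```python
def surrogate_hand_speed(user_hand_speed_mps: float, scale: float) -> float:
    """World-space speed of the surrogate's hand for a given user hand speed.
    scale > 1 (giant): the same gesture sweeps a large, heavy-looking arc;
    scale < 1 (dwarf): it becomes a small, quick motion matching a micromachine."""
    if scale <= 0:
        raise ValueError("scale must be positive")
    return user_hand_speed_mps * scale
```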
  • The surrogate body 13 may have a variable position and orientation with respect to the operation object.
  • For example, the surrogate body 13 may be drawn so as to be located in front of, behind, to the left or right of, above, or below the operation object. The user then feels as if operating the object from the front, the back, the left, the right, the top, or the bottom, respectively.
  • When the surrogate body 13 is drawn with an appropriate position and orientation relative to, for example, a heavy machine, the user can operate the machine without discomfort.
  • The surrogate body 13 may change its size continuously. For example, consider a huge first operation object, such as a heavy machine, and a small second operation object, such as a robot that performs fine electrical wiring. The user operates the first operation object to lift and move an automobile, and then promptly operates the second operation object to perform electrical wiring work in the automobile's engine room. The surrogate body 13 is a "giant" while the user operates the first operation object, and when the user starts operating the second operation object, it continuously transforms into a "small person". As a result, when operation objects of different sizes are switched and operated in succession, the user can operate naturally, as if continuously transferring into a body of the optimum size for each object.
  • Likewise, consider a robot having a first arm that is large with coarse motion accuracy and a second arm that is small with precise motion accuracy; that is, a robot with two parts of different sizes and operating accuracy. The user operates the first arm to move a piece of precision equipment, and then promptly operates the second arm to repair it. The surrogate body 13 is a "human" while the user operates the first arm, and when the user starts operating the second arm, it continuously transforms into a "dwarf". As a result, when multiple parts of an operation object that differ in size and operating accuracy are switched and operated in succession, the user can operate naturally, as if continuously transferring into a body of the optimum size for each part.
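A continuous "giant to dwarf" transformation could be animated as below; interpolating in log space is an assumed design choice, made because the scale ratio spans orders of magnitude:

```python
import math

def transition_scale(t: float, duration: float,
                     scale_from: float, scale_to: float) -> float:
    """Surrogate scale at time t in [0, duration] while switching operation
    objects. Log-space interpolation keeps the morph perceptually smooth
    across large ratios (e.g. "giant" -> "dwarf")."""
    u = min(max(t / duration, 0.0), 1.0)  # clamp progress to [0, 1]
    return math.exp((1.0 - u) * math.log(scale_from) + u * math.log(scale_to))
```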
  • The image 10 of the operation object displayed on the image display unit 11 may be a mixture of image components of the operation object seen from two surrogate bodies of different sizes.
  • For example, it may mix an image component of a large operation object seen from a large surrogate body with an image component of a small operation object seen from a small surrogate body.
  • The user can then instantly switch between the two images simply by shifting attention to the one he or she wishes to view.
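Mixing the two subjective views can be sketched as a per-pixel weighted sum of the two rendered frames; the fixed weights are an illustrative assumption:

```python
def blend_views(view_large, view_small, w_large=0.5):
    """Blend two equally sized grayscale frames (lists of rows of floats),
    one rendered from each surrogate body's viewpoint."""
    w_small = 1.0 - w_large
    return [
        [w_large * a + w_small * b for a, b in zip(row_l, row_s)]
        for row_l, row_s in zip(view_large, view_small)
    ]
```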
  • As described above, according to the present embodiment, the image of the operation object can be displayed on the image display unit. Since the displayed image is a subjective image seen from the virtual surrogate body, the user can operate without discomfort.
  • FIG. 2 shows a functional block diagram of the image interface device 1 according to the second embodiment.
  • The image interface device 1 includes an image display unit 11 that displays an image 10 of an operation object 110 operated by the user 12, a surrogate body drawing unit 14 that draws a virtual surrogate body 13 of the user 12 on the image display unit 11, and an operation signal input unit 15 that receives an operation signal from the user 12 for the operation object 110.
  • In the present embodiment, the image 10 of the operation object 110 is an image captured by a camera 100 installed in the vicinity of the operation object 110. This embodiment differs from the more general first embodiment in that respect; the other configurations and operations are common to the first embodiment.
  • The camera 100 is, for example, a video camera.
  • The camera 100 is arranged in the vicinity of the operation object 110.
  • The image captured by the camera 100 is almost the same as the scenery seen from the surrogate body 13. In this sense, the camera 100 shares its viewpoint with the surrogate body 13.
  • The camera 100 transmits the captured image to the image display unit 11.
  • The image display unit 11 displays the image received from the camera 100.
  • As described above, according to the present embodiment, an actual operation object can be displayed on the image display unit. Since the displayed image is a subjective image seen from the virtual surrogate body, the user can operate without discomfort.
  • FIG. 3 shows a functional block diagram of the image manipulation device 2 according to the third embodiment.
  • The image operation device 2 includes an image display unit 11 that displays an image 10 of an operation object 110 operated by the user 12, a surrogate body drawing unit 14 that draws a virtual surrogate body 13 of the user 12 on the image display unit 11, an operation signal input unit 15 that receives an operation signal from the user 12 for the operation object 110, and an operation unit 20. That is, in this embodiment, the operation unit 20 is added to the image interface device of FIG. 1.
  • The operation unit 20 generates an operation signal when operated by the user 12.
  • The operation unit 20 is, for example, a motion controller held in the hand of the user 12. It is not limited to this; the operation unit 20 may be any suitable controller, such as a mouse, a joystick, or a game pad.
  • The user 12 operates the operation unit 20 in order to manipulate the image 10 while looking at the image 10 of the operation object displayed on the image display unit 11.
  • The operation unit 20 generates an operation signal that determines the movement of the image 10.
  • The operation unit 20 transmits the generated operation signal to the operation signal input unit 15.
  • The operation signal input unit 15 transmits the operation signal to the image display unit 11.
  • The image 10 of the operation object may be an image of a remote operation object captured by a camera or the like, or a virtual image from a simulator, a game, or the like.
  • As described above, according to the present embodiment, the user can operate this image on the screen without discomfort while looking at the image of the operation object displayed on the image display unit.
  • FIG. 4 shows a functional block diagram of the operation object operation device 3 according to the fourth embodiment.
  • The operation object operation device 3 includes an image display unit 11 that displays an image 10 of the operation object 110 operated by the user 12, a surrogate body drawing unit 14 that draws a virtual surrogate body 13 of the user 12 on the image display unit 11, an operation signal input unit 15 that receives an operation signal from the user 12 for the operation object 110, an operation unit 20, and an operation signal output unit 30. That is, in this embodiment, the operation signal output unit 30 is added to the image operation device 2 of FIG. 3.
  • The operation signal output unit 30 receives the user 12's operation signal from the image display unit 11 and outputs it to the operation object 110.
  • The user 12 operates the operation unit 20 in order to manipulate the remote operation object 110 while looking at the image 10 of the operation object 110 displayed on the image display unit 11.
  • The operation unit 20 generates an operation signal that determines the movement of the operation object 110.
  • The operation unit 20 transmits the generated operation signal to the operation signal input unit 15.
  • The operation signal input unit 15 transmits the operation signal to the image display unit 11.
  • The image display unit 11 transmits the operation signal to the operation signal output unit 30.
  • The operation signal output unit 30 outputs the operation signal to the operation object 110.
  • As described above, according to the present embodiment, the user can remotely operate the operation object without discomfort while viewing its image displayed on the image display unit.
  • FIG. 5 shows a functional block diagram of the operation object operation system 4 according to the fifth embodiment.
  • The operation object operation system 4 includes an image display unit 11 that displays an image 10 of the operation object 110 operated by the user 12, a surrogate body drawing unit 14 that draws a virtual surrogate body 13 of the user 12 on the image display unit 11, an operation signal input unit 15 that receives an operation signal from the user 12 for the operation object 110, an operation unit 20, an operation signal output unit 30, and a camera 100. That is, in this embodiment, the camera 100 is added to the operation object operation device 3 of FIG. 4.
  • The camera 100 is, for example, a video camera, and is arranged in the vicinity of the operation object 110.
  • The image captured by the camera 100 is almost the same as the scenery seen from the surrogate body 13. In this sense, the camera 100 shares its viewpoint with the surrogate body 13.
  • The camera 100 transmits the captured image to the image display unit 11.
  • The image display unit 11 displays the image received from the camera 100.
  • As described above, according to the present embodiment, the user can remotely operate the operation object without discomfort while viewing an image of the actual operation object captured by the camera provided in this system.
  • the camera 100 may include a moving mechanism for changing the position.
  • the moving mechanism moves the position of the camera 100 with respect to the operation target 110 by, for example, a motor.
  • the camera 100 can photograph the operation object 110 from, for example, front, rear, left, right, top or bottom. Therefore, the subjective image seen from the surrogate body 13 is also a view of the operation object 110 from the front, the back, the left, the right, the top, or the bottom.
  • the camera 100 may include a rotation mechanism for changing the shooting direction.
  • the rotation mechanism rotates the camera 100 around a predetermined axis by, for example, a motor. Rotation around the axis corresponds to, for example, rolling, pitching, yawing, etc. of the camera.
  • the camera 100 can photograph the operation object 110 from various angles. Therefore, the subjective image seen from the surrogate body 13 also looks at the operation object 110 from various angles.
  • the moving or rotating mechanism of the camera 100 may move the camera in a direction opposite to the user's operating direction.
  • the image displayed on the image display unit may be an inverted image of the image taken by the camera 100.
  • on the image display unit 11, the user sees the scenery from the surrogate body with left and right reversed from the actual scenery.
  • the operation object moves in the direction opposite to the operation performed by the user. In other words, the user feels as if operating the object in a mirror image of the actual world.
  • for example, when a right-handed user wants to approach the operation object from the right side with the right hand, but the environment in which the object is placed allows an approach only from the left side, the user can still operate with the right hand without discomfort.
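The mirrored-operation mode above can be sketched as follows. This is a minimal illustration, not from the patent: the function names and the list-of-lists frame representation are assumptions. The moving mechanism drives the camera opposite to the user's input, and the captured frame is flipped left-right before display.

```python
def mirror_camera_step(user_dx: float) -> float:
    """Camera displacement for a user input: the moving mechanism
    drives the camera opposite to the user's operation direction."""
    return -user_dx

def flip_horizontal(frame):
    """Left-right inversion of a frame (a 2D list of pixel values),
    applied before the image is shown on the image display unit."""
    return [list(reversed(row)) for row in frame]

# A rightward user input (+1) moves the camera left (-1), and the
# captured frame is mirrored before display, so the user works in
# a left-right reversed view of the scene.
camera_dx = mirror_camera_step(+1.0)
mirrored = flip_horizontal([[1, 2, 3],
                            [4, 5, 6]])
```

With both steps applied, the user's rightward gesture and the on-screen motion agree, which is what makes the reversed-world operation feel natural.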
  • the camera 100 may be a stereo camera with two cameras.
  • the image of the operation object 110 taken by the camera 100 is three-dimensional and has a sense of depth for the user.
  • the inter-eye distance of the stereo camera may be variable depending on the size of the surrogate body 13.
  • the camera 100 may be provided with an inter-eye distance adjusting mechanism for dynamically adjusting the inter-eye distance using a motor or the like.
  • the inter-eye distance of the stereo camera is adjusted to be longer as the surrogate body 13 is larger, and shorter as the surrogate body 13 is smaller.
  • since the inter-eye distance of the stereo camera is adjusted according to the size of the surrogate body 13, the user can see an image with a more natural sense of depth for each operation target 110.
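One way to realize this adjustment is a simple linear rule. The 65 mm baseline, the 1.7 m reference height, and the linearity itself are illustrative assumptions, not values given in the patent:

```python
HUMAN_IPD_MM = 65.0     # typical adult inter-pupillary distance (assumed baseline)
HUMAN_HEIGHT_M = 1.7    # assumed reference height of a human-sized surrogate

def stereo_ipd_mm(surrogate_height_m: float) -> float:
    """Scale the stereo camera's inter-eye distance linearly with
    the surrogate body's height: a larger surrogate gets a wider
    baseline, a smaller surrogate a narrower one."""
    return HUMAN_IPD_MM * (surrogate_height_m / HUMAN_HEIGHT_M)

# A 10 m "giant" surrogate gets a wide baseline; a 10 cm "dwarf"
# surrogate gets a very narrow one.
giant_ipd = stereo_ipd_mm(10.0)   # roughly 382 mm
dwarf_ipd = stereo_ipd_mm(0.1)    # roughly 3.8 mm
```

A wider baseline exaggerates binocular disparity, which matches a giant's perception of depth; a narrow baseline does the opposite for a tiny surrogate.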
  • the camera 100 may be a depth camera that detects the distance to the operation target 110 in real time.
  • the image of the operation target 110 taken by the camera 100 can provide a more realistic sense of depth and immersion for the user.
  • the camera 100 may include a plurality of cameras that capture different fields of view. At this time, the images of the plurality of fields of view taken by the plurality of cameras may be displayed at the same time, or may be switched by the user and displayed separately. In this embodiment, the user can see the operation object 110 as viewed from a plurality of viewpoints of the surrogate body 13.
  • FIG. 6 is a flowchart showing a processing procedure of the operation object presentation method according to the sixth embodiment.
  • Step S1 is a process of acquiring information on the size of the operation target to be presented.
  • the size information may be acquired in real time by a camera or the like, or may be acquired from data stored in a database or the like in advance.
  • the size of the object to be operated is arbitrary; for example, it is several tens of meters for a construction machine and a few millimeters for a micromachine.
  • Step S2 is a process of drawing a surrogate body on the image display unit according to the size of the operation object.
  • This surrogate body virtually represents the user's body. It may be a concrete image resembling the user's body, or an abstract image such as a silhouette, a line drawing, or a translucent image.
  • the size of the surrogate body is variable depending on the size of the object to be operated. Typically, the larger the operation object, the larger the surrogate body is drawn, and the smaller the operation object, the smaller the body. For example, the surrogate body is drawn as substantially the same size as the object to be manipulated.
  • the process proceeds to step S3.
  • Step S3 is a process of displaying an image of the operation object on the image display unit as a subjective image of the surrogate body. That is, the image of the operation object is displayed as an image seen from the eyes of the surrogate body drawn on the image display unit.
  • the image of the operation object may be a real image of the remote operation object taken by a camera or the like, or may be a virtual image of a simulator, a game, or the like.
  • the operation target can be presented to the user. Since the image presented at this time is a subjective image seen from the virtual body, the user can operate it without discomfort.
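Steps S1 to S3 can be sketched as a minimal control flow. The names are hypothetical, and rendering and image capture are stubbed out with plain values; a real implementation would acquire the size from a camera or database and drive an actual display.

```python
from dataclasses import dataclass

@dataclass
class Scene:
    surrogate_height_m: float   # size at which the surrogate body is drawn
    subjective_image: str       # placeholder for the displayed subjective view

def present_operation_object(object_height_m: float, image_source: str) -> Scene:
    # S1: acquire the size of the operation object (here passed in
    #     directly; in practice from a camera or a database).
    size = object_height_m
    # S2: draw the surrogate body at substantially the same size
    #     as the operation object.
    surrogate_height = size
    # S3: display the object's image as the subjective view seen
    #     from that surrogate body.
    view = f"view-from-{surrogate_height}m-surrogate:{image_source}"
    return Scene(surrogate_height_m=surrogate_height, subjective_image=view)

scene = present_operation_object(6.0, "excavator-feed")
```

For a 6 m construction machine the surrogate is drawn 6 m tall and the feed is presented as that surrogate's first-person view.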
  • a seventh embodiment is an operation object presentation program.
  • This program causes a computer to execute a step S1 of displaying an image of the operation object operated by the user on the image display unit, a step S2 of drawing the user's virtual surrogate body on the image display unit, and a step S3 of operating the operation object based on an operation signal from the user.
  • the operation target can be presented to the user using a computer. Since the image presented at this time is a subjective image seen from the virtual body, the user can operate it without discomfort.
  • the moving mechanism and the rotating mechanism of the camera of the embodiment may be operated by using the head-mounted display.
  • the head-mounted display transmits a signal for operating the moving mechanism and the rotating mechanism of the camera to the moving mechanism and the rotating mechanism.
  • This signal is for moving the shooting position and direction of the camera in conjunction with the movement of the user's head.
  • the user can freely operate the shooting position and direction of the operation target by the movement of the head.
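The head-to-camera linkage can be sketched as a direct mapping of the head-mounted display's yaw and pitch to camera pan/tilt commands, clamped to the rotation mechanism's range. The limit values and names here are assumptions for illustration:

```python
def head_to_camera(yaw_deg: float, pitch_deg: float,
                   max_pan_deg: float = 90.0, max_tilt_deg: float = 60.0):
    """Map the user's head orientation to camera pan/tilt commands,
    clamped to the (assumed) mechanical range of the rotation mechanism."""
    def clamp(value, limit):
        return max(-limit, min(limit, value))
    return clamp(yaw_deg, max_pan_deg), clamp(pitch_deg, max_tilt_deg)

# Turning the head 30 degrees right pans the camera 30 degrees;
# looking 75 degrees down saturates at the 60-degree tilt limit.
pan, tilt = head_to_camera(30.0, -75.0)
```

Clamping keeps the motorized mount within its travel even when the head moves beyond what the mechanism can follow.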
  • the operation object operation device of the embodiment may include a microphone in the vicinity of the operation object.
  • the sound collected by the microphone is transmitted to the user as a sound heard by the surrogate body.
  • the user operates the operation target while viewing the image displayed on the image display unit and listening to the sound collected by the microphone.
  • the user can share hearing with the surrogate body in addition to the viewpoint. As a result, the user can operate the operation object with a more natural feeling.
  • the present invention covers a wide range of applications such as remote control of construction machinery, surgical support, operation of humanoid robots, operation of vehicles, simulators, and games.
  • the method according to the present invention can be used for such various applications.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Manipulator (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This image interface device comprises: an image display unit that displays images of a manipulation object that is manipulated by a user; a representative-body rendering unit that renders a virtual representative body of the user on the image display unit; and a manipulation-signal input unit that receives input of a manipulation signal from the user with respect to the manipulation object. The size of the representative body is variable in accordance with the size of the manipulation object. The manipulation-object images displayed on the image display unit are subjective images as seen from the representative body. An image manipulation device includes, in addition to the image interface device, a manipulation unit that generates a manipulation signal in response to the user manipulating the manipulation unit.

Description

Image interface device, image operation device, operation object operation device, operation object operation system, operation object presentation method, and operation object presentation program

The present invention relates to an image interface device, an image operation device, an operation object operation device, an operation object operation system, an operation object presentation method, and an operation object presentation program.
Telexistence is a known technique for controlling a remote robot as if it were at the user's hand (see, for example, Non-Patent Document 1).

Telexistence controls and steers a remote robot based on information obtained from sensors installed on it. Because the robot's movements are mapped onto the user's body movements, the user feels as if "possessing" the robot, and can operate it while perceiving the remote environment as if it were nearby.

In conventional telexistence, however, the size of the user relative to the robot is fixed in advance. When the size, motion, or position of the robot differs from robot to robot, the user cannot adapt to this and operate the robot without discomfort.

The present invention has been made in view of this situation, and its object is to provide a technique for operating an operation object, such as a robot, having various sizes, positions, motions, or motion characteristics without discomfort.
To solve the above problem, an image interface device according to one aspect of the present invention includes an image display unit that displays an image of an operation object operated by a user, a surrogate body drawing unit that draws the user's virtual surrogate body on the image display unit, and an operation signal input unit that receives an operation signal from the user for the operation object. The size of the surrogate body is variable according to the size of the operation object. The image of the operation object displayed on the image display unit is a subjective image seen from the surrogate body.
The surrogate body of the embodiment may have a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.

The surrogate body of the embodiment may have a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.

The position of the surrogate body of the embodiment may be variable according to the position of the operation object.

The surrogate body of the embodiment may move on the image display unit together with the operation performed on the image of the operation object. The movement speed of the surrogate body on the image display unit may then be variable according to the movement speed of the operation object.

The surrogate body of the embodiment may have a variable position or orientation with respect to the operation object.

The operation object of the embodiment may include a plurality of operation objects of different sizes. When these operation objects are operated while being switched continuously, the size of the surrogate body may vary continuously.

The surrogate body of the embodiment may include two surrogate bodies of different sizes. The image of the operation object displayed on the image display unit may then be a mixture of the image components of the operation object seen from the two surrogate bodies of different sizes.
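The mixing of image components from two surrogate sizes could, in the simplest case, be a per-pixel linear blend of the two subjective views. This is an illustrative sketch only; the patent does not specify the mixing method, and the function name is an assumption:

```python
def blend_views(view_a, view_b, alpha: float):
    """Per-pixel linear mix of two subjective views (2D lists of
    pixel values): `alpha` weights the view from the first surrogate
    size, `1 - alpha` the view from the second."""
    return [[alpha * pa + (1 - alpha) * pb for pa, pb in zip(row_a, row_b)]
            for row_a, row_b in zip(view_a, view_b)]

# An equal mix of, say, the "giant" view and the "human" view.
mixed = blend_views([[100, 100]], [[0, 200]], alpha=0.5)
```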
The image of the operation object of the embodiment may be an image taken by a camera installed near the operation object.
Another aspect of the present invention is an image operation device. This image operation device includes an image display unit that displays an image of an operation object operated by a user, a surrogate body drawing unit that draws the user's virtual surrogate body on the image display unit, an operation signal input unit that receives an operation signal from the user for the operation object, and an operation unit that generates the operation signal when operated by the user. The size of the surrogate body is variable according to the size of the operation object. The image of the operation object displayed on the image display unit is a subjective image seen from the surrogate body.

The surrogate body of the embodiment may have a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.

The surrogate body of the embodiment may have a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.

Yet another aspect of the present invention is an operation object operation device. This device includes an image display unit that displays an image of an operation object operated by a user, a surrogate body drawing unit that draws the user's virtual surrogate body on the image display unit, an operation signal input unit that receives an operation signal from the user for the operation object, an operation unit that generates the operation signal when operated by the user, and an operation signal output unit that outputs the user's operation signal to the operation object. The size of the surrogate body is variable according to the size of the operation object. The image of the operation object displayed on the image display unit is a subjective image seen from the surrogate body.

Yet another aspect of the present invention is an operation object operation system. This system includes an image display unit that displays an image of an operation object operated by a user, a surrogate body drawing unit that draws the user's virtual surrogate body on the image display unit, an operation signal input unit that receives an operation signal from the user for the operation object, an operation unit that generates the operation signal when operated by the user, an operation signal output unit that outputs the user's operation signal to the operation object, and a camera that photographs the operation object. The size of the surrogate body is variable according to the size of the operation object. The image of the operation object displayed on the image display unit is a subjective image seen from the surrogate body.

The surrogate body of the embodiment may have a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.

The surrogate body of the embodiment may have a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.
The camera of the embodiment may include a moving mechanism for changing its position.

The camera of the embodiment may include a rotation mechanism for changing its shooting direction.

The moving mechanism and the rotation mechanism of the camera of the embodiment may move the camera in the direction opposite to the user's operation direction, and the image displayed on the image display unit may be a left-right inverted version of the image taken by the camera.

The camera of the embodiment may be a stereo camera comprising two cameras.

The inter-eye distance of the stereo camera of the embodiment may be variable according to the size of the surrogate body.

The camera of the embodiment may be a depth camera that detects the distance to the operation object in real time.

The camera of the embodiment may include a plurality of cameras that each capture a different field of view.
Yet another aspect of the present invention is an operation object presentation method. This method includes a step of acquiring information on the size of the operation object, a step of drawing the user's virtual surrogate body, and a step of displaying an image of the operation object. The size of the surrogate body is variable according to the size of the operation object. The image of the operation object is a subjective image seen from the surrogate body.

The surrogate body of the embodiment may have a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.

The surrogate body of the embodiment may have a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.

Yet another aspect of the present invention is an operation object presentation program. This program causes a computer to execute a step of acquiring information on the size of the operation object, a step of drawing the user's virtual surrogate body, and a step of displaying an image of the operation object. The size of the surrogate body is variable according to the size of the operation object. The image of the operation object is a subjective image seen from the surrogate body.

The surrogate body of the embodiment may have a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object.

The surrogate body of the embodiment may have a plurality of positions with respect to the operation object, and these positions may be dynamically switchable.
Note that any combination of the above components, and any conversion of the expression of the present invention among devices, methods, systems, recording media, computer programs, and the like, is also effective as an aspect of the present invention.

According to the present invention, an image can be presented on a screen that allows an operation object, such as a robot, having various sizes and motion characteristics to be operated without discomfort.
FIG. 1 is a functional block diagram of the image interface device according to the first embodiment. FIG. 2 is a functional block diagram of the image interface device according to the second embodiment. FIG. 3 is a functional block diagram of the image operation device according to the third embodiment. FIG. 4 is a functional block diagram of the operation object operation device according to the fourth embodiment. FIG. 5 is a functional block diagram of the operation object operation system according to the fifth embodiment. FIG. 6 is a flowchart showing the processing procedure of the operation object presentation method according to the sixth embodiment.
Hereinafter, the present invention will be described based on preferred embodiments with reference to the drawings. The embodiments are illustrative rather than limiting: not all features described in the embodiments, nor their combinations, are necessarily essential to the invention. Identical or equivalent components, members, and processes shown in the drawings are given the same reference numerals, and duplicate descriptions are omitted as appropriate. The scale and shape of each part shown in the figures are set for ease of explanation and are not to be interpreted restrictively unless otherwise noted. Terms such as "first" and "second" used in this specification or in the claims do not indicate any order or importance unless otherwise noted; they merely distinguish one element from another. Members that are not important for describing the embodiments are partly omitted from the drawings.
[First Embodiment]
FIG. 1 shows a functional block diagram of the image interface device 1 according to the first embodiment. The image interface device 1 includes an image display unit 11, a surrogate body drawing unit 14, and an operation signal input unit 15.
The image display unit 11 is a display that shows moving images, such as a liquid crystal display, a video projector, or a head-mounted display. The image display unit 11 displays an image 10 of the operation object operated by the user 12. The surrogate body drawing unit 14 further draws the virtual surrogate body 13 of the user 12 on the image display unit 11. The image 10 of the operation object displayed on the image display unit 11 is a subjective image seen from the surrogate body 13; that is, it is displayed as an image seen from the eyes of the surrogate body 13 drawn on the image display unit 11. The image 10 of the operation object may be a real image of a remote operation object taken by a camera or the like, or a virtual image from a simulator, a game, or the like. In the following, industrial robots such as construction machines are mainly used as examples of operation objects; however, the operation object may be anything subject to the user's operation, such as a humanoid robot, a vehicle, or a medical instrument.
The surrogate body drawing unit 14 draws the virtual surrogate body 13 of the user 12 on the image display unit 11.
The operation signal input unit 15 receives the operation signal from the user 12 for the operation object. The operation signal input to the operation signal input unit 15 is transmitted to the image display unit 11, and the image 10 of the operation object displayed there is operated by this signal.
The surrogate body 13 virtually substitutes for the body of the user 12 and, as described above, is drawn on the image display unit 11 by the surrogate body drawing unit 14. From the viewpoint of the user 12 operating the image interface device 1, the surrogate body 13 behaves like the user's own alter ego on the image display unit 11. The surrogate body 13 may be a concrete image resembling the body of the user 12, or an abstract image such as a silhouette, a line drawing, or a translucent image.
The size of the surrogate body 13 is variable according to the size of the operation object. Typically, the larger the operation object, the larger the surrogate body 13 is drawn; the smaller the operation object, the smaller it is drawn. For example, the surrogate body 13 is drawn as substantially the same size as the operation object: if the operation object is a heavy machine, the surrogate body 13 is drawn as a "giant" of about the same size as the machine, whereas if the operation object is a micromachine, the surrogate body 13 is drawn as a "dwarf" of about the same size as the micromachine. As described above, the image 10 of the operation object displayed on the image display unit 11 is a subjective image seen from the surrogate body 13. The user can therefore operate the image 10 of the operation object on the image display unit 11 from the standpoint of a surrogate body 13 whose size matches that of the operation object.
In general, when a user operates an operation object displayed on a screen, the user feels discomfort if the object on the screen and the user's own size differ. For example, when moving the arm of a heavy machine by hand on the screen, the user is far too small relative to the machine, so the arm's range of motion differs greatly from that of the user's own arm and the operation is hard to perform as intended. By placing a surrogate body of the same size as the heavy machine on the screen and operating the machine from its standpoint, the arm's range of motion and the user's own feel matched, and operation without discomfort becomes possible. Conversely, when operating a micromachine by hand on the screen, the user is far too large relative to it, so the micromachine's fine movements differ greatly from the hand's movements and the operation is again hard to perform as intended. By placing a surrogate body of the same size as the micromachine on the screen and operating the micromachine from that standpoint, its fine movements feel matched to the user's hand movements, and operation without discomfort becomes possible.
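The range-of-motion matching described above amounts to scaling the user's motion by the ratio of the surrogate body's size to the user's size. The sketch below assumes a purely linear mapping and a 1.7 m reference height, neither of which is specified in the patent:

```python
def scale_motion(user_delta_m: float, surrogate_height_m: float,
                 user_height_m: float = 1.7) -> float:
    """Scale a user's hand displacement into the operated machine's
    frame: with a machine-sized surrogate, the user's arm sweep maps
    onto the machine's full range of motion (assumed linear mapping)."""
    return user_delta_m * (surrogate_height_m / user_height_m)

# The same 0.5 m arm motion becomes roughly a 2.06 m sweep for a
# 7 m heavy machine, or roughly a 0.3 mm movement for a 1 mm
# micromachine.
heavy = scale_motion(0.5, 7.0)
micro = scale_motion(0.5, 0.001)
```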
In some embodiments, the surrogate body 13 may have a plurality of sizes, and these sizes may be dynamically switchable with respect to the operation object. The description above gave a typical example in which a larger operation object yields a larger surrogate body and a smaller one a smaller body. The surrogate body 13 is not limited to this, however, and may take various sizes for a single operation object depending on the application and situation. For example, for a heavy machine 6 m tall, the surrogate body 13 may be drawn with four heights: a "very large giant" 10 m tall, a "giant" of about the same size as the machine, a "smaller giant" 4 m tall, and a "human" around 1 m tall.
In the above example, when the surrogate body 13 is drawn as the "giant", the surrogate body 13 and the operation object are at almost the same scale, so the user feels no discomfort; on the other hand, because the object is operated by a "giant", the precision of the operation is coarse. When the surrogate body 13 is drawn as the "smaller giant", the scales of the surrogate body and the operation object do not differ greatly, so discomfort is reduced to some extent while the loss of precision is also suppressed. When the surrogate body 13 is drawn as the "human", a highly precise operation feel is obtained, despite some discomfort from the difference in scale between the surrogate body 13 and the operation object. Finally, when the surrogate body 13 is drawn as the "very large giant", a large-scale operation feel is obtained, despite some discomfort from the surrogate body being larger than the operation object. In this way, by giving the surrogate body 13 a plurality of sizes and switching among them dynamically, the user can set the size of the surrogate body 13 appropriately according to the purpose of the operation, the situation in which the operation object is placed, and the user's own preference.
 When the surrogate body 13 has a plurality of sizes that are dynamically switchable with respect to the operation target, the surrogate body 13 may also have a plurality of positions with respect to the operation target. The position of the surrogate body 13 may then be dynamically switchable according to the size of the surrogate body 13.
 In general, where the surrogate body 13 should be positioned with respect to the operation target depends on the size of the surrogate body 13. In particular, the more the sizes of the surrogate body 13 and the operation target differ, the more important the position of the surrogate body 13 with respect to the operation target becomes. For example, when the surrogate body 13 in the above example has the sizes "very large giant", "giant", "smallish giant", and "human", the surrogate body 13 may, at each of those sizes, take a plurality of height positions in the range from the top to the bottom of the operation target. When the surrogate body 13 is drawn as the "giant", the surrogate body 13 and the operation target are roughly the same size, so the height position of the surrogate body 13 may simply coincide with the operation target as a whole. When the size of the surrogate body 13 is then switched to "human", more precise operation can be achieved by aligning the center position of the surrogate body 13 with the operating position of the operation target (the tip of an arm, the end effector of a robot, or the like).
By giving the surrogate body 13 a plurality of positions with respect to the operation target in this way and switching among them dynamically, the user can position the surrogate body 13 appropriately according to its size.
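The size-dependent positioning rule above can be sketched as follows. This is a hedged illustration only; the 0.8 comparability threshold and the function name are assumptions, not values from the patent.

```python
# Illustrative sketch: position the surrogate body according to its size.
# A surrogate roughly as large as the target is aligned with the target as
# a whole; a much smaller one is centered on the target's operating point
# (e.g. an arm tip or end effector) for precise work.

def surrogate_height_position(surrogate_h_m, target_h_m, operating_point_h_m):
    """Return the vertical center (in metres) for the drawn surrogate body."""
    if surrogate_h_m >= 0.8 * target_h_m:  # comparable scale (assumed cutoff)
        return target_h_m / 2.0            # align with the whole target
    return operating_point_h_m             # center on the operating position

# 6 m machine whose arm tip sits at 4 m:
giant_pos = surrogate_height_position(6.0, 6.0, 4.0)  # whole-target alignment
human_pos = surrogate_height_position(1.7, 6.0, 4.0)  # centered on the arm tip
```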
 In some embodiments, the position of the surrogate body 13 may be variable according to the position of the operation target. In general, when a user operates an operation target displayed on a screen, the user feels incongruity if positioned somewhere inappropriate relative to the on-screen target. For example, when the operation target stands against a wall, the user would not operate it from the narrow space squeezed between the target and the wall. In such a case, positioning the surrogate body 13 at a place from which the user can operate easily, according to the position of the operation target, enables operation that feels natural to the user.
 In some embodiments, the surrogate body 13 may move on the image display unit together with the operation performed on the image 10 of the operation target. The movement speed of the surrogate body 13 may then be variable according to the movement speed of the operation target. Typically, the surrogate body 13 is drawn so as to move more slowly the larger the operation target is, and more quickly the smaller the operation target is. As before, when the user manually moves the arm of a heavy machine on the image display unit, the user would otherwise feel that his or her own hand moves too fast compared with the arm. Likewise, when the user manually moves a micromachine on the image display unit, the user would otherwise feel that his or her own hand moves too slowly compared with the micromachine. If, instead, the hand of the surrogate body 13 drawn on the image display unit moves quickly in correspondence with the motion of the micromachine, the user can operate the micromachine without incongruity.
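One way to realize the size-dependent movement speed described above is to scale the drawn hand's speed by the ratio of the target's characteristic speed to a reference hand speed. This is a sketch under assumed values; the linear scaling rule and the 2.0 m/s reference are illustrative choices, not figures from the patent.

```python
# Illustrative sketch of speed-scaled drawing: the surrogate hand's drawn
# speed tracks the target's pace, so a slow heavy-machine arm slows the
# drawn hand down and a fast micromachine speeds it up.

def surrogate_hand_speed(user_hand_speed_mps, target_speed_mps,
                         reference_hand_speed_mps=2.0):
    """Scale the user's hand speed so the drawn hand matches the target."""
    return user_hand_speed_mps * (target_speed_mps / reference_hand_speed_mps)

slow_hand = surrogate_hand_speed(1.0, 0.5)  # heavy-machine arm: drawn slower
fast_hand = surrogate_hand_speed(1.0, 8.0)  # fast target: drawn faster
```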
 In some embodiments, the position and orientation of the surrogate body 13 with respect to the operation target may be variable. For example, the surrogate body 13 may be drawn so as to be located in front of, behind, to the left of, to the right of, above, or below the operation target. In that case the user feels as if operating the target from the front, the back, the left, the right, above, or below, respectively. For example, when the user manually moves the arm of a heavy machine on the image display unit, the position and direction from which operation feels natural differ depending on the posture and direction of motion of the heavy machine. If the user's position and orientation with respect to the heavy machine were fixed, the operation might therefore feel unnatural. If the surrogate body 13 is drawn at a suitable position and orientation with respect to the heavy machine, the user can operate the heavy machine without incongruity.
 In some embodiments, where there are a plurality of operation targets of different sizes and the user switches among them continuously, the surrogate body 13 may change its size continuously. Consider, for example, a huge first operation target such as a heavy machine and a small second operation target such as a robot that performs fine electrical wiring. Suppose the user operates the first operation target to lift and move an automobile, then promptly operates the second operation target to carry out electrical wiring work inside the automobile's engine compartment. While the user is operating the first operation target, the surrogate body 13 is a "giant"; when the user begins operating the second operation target, the surrogate body 13 continuously transforms into a "dwarf". When switching continuously among operation targets of different sizes, the user can thus operate naturally, as if continuously transferring into a body of the optimal size for each target.
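The continuous size change when switching targets can be sketched as a simple interpolation of the surrogate body's height over a short transition. Linear interpolation is an illustrative assumption; any smooth ramp would serve the same purpose.

```python
# Illustrative sketch: morph the surrogate body's drawn height continuously
# when the user switches from a large target (heavy machine) to a small one
# (fine wiring robot), instead of jumping between sizes instantly.

def interpolate_size(start_m, end_m, t):
    """Drawn size at normalized transition time t in [0, 1] (clamped)."""
    t = max(0.0, min(1.0, t))
    return start_m + (end_m - start_m) * t

# 10 m "giant" shrinking to a 0.3 m "dwarf" over five frames
sizes = [round(interpolate_size(10.0, 0.3, t / 4), 3) for t in range(5)]
```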
 In some embodiments, where the operation target has a plurality of components or parts and the user switches among them continuously, the surrogate body 13 may change its size continuously. Consider, for example, a robot having a large first arm with coarse motion precision and a small second arm with fine motion precision — that is, a robot with two parts that differ in size and motion precision. Suppose the user operates the robot's first arm to move a piece of precision equipment, then promptly operates the second arm to repair it. While the user is operating the first arm, the surrogate body 13 is a "human"; when the user begins operating the second arm, the surrogate body 13 continuously transforms into a "dwarf". When switching continuously among parts of the operation target that differ in size and motion precision, the user can thus operate naturally, as if continuously transferring into a body of the optimal size for each part.
 In some embodiments, the image 10 of the operation target displayed on the image display unit 11 may be a mixture of image components of the operation target as seen from two surrogate bodies of different sizes — for example, a mixture of an image component of a large operation target seen from a large surrogate body and an image component of a small operation target seen from a small surrogate body. According to this embodiment, the user can switch between the two images instantly simply by shifting attention to one or the other.
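The mixed image described above can be sketched as a per-pixel weighted blend of the two subjective views. Representing images as nested lists of grayscale values and using a fixed blend weight are illustrative assumptions for this sketch.

```python
# Illustrative sketch: blend the view from a large surrogate body with the
# view from a small surrogate body into one displayed image, so the user
# can attend to either component.

def blend_views(large_view, small_view, alpha=0.5):
    """Blend two equally sized grayscale images (lists of rows);
    alpha is the weight of the large-surrogate view."""
    return [
        [alpha * a + (1.0 - alpha) * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(large_view, small_view)
    ]

mixed = blend_views([[0.0, 1.0]], [[1.0, 1.0]], alpha=0.25)
```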
 As described above, according to these embodiments, an image of the operation target can be displayed on the image display unit. Because the displayed image is a subjective image seen from the virtual body, the user can operate it without incongruity.
[Second Embodiment]
 FIG. 2 shows a functional block diagram of the image interface device 1 according to the second embodiment. The image interface device 1 includes an image display unit 11 that displays an image 10 of an operation target 110 operated by a user 12, a surrogate body drawing unit 14 that draws a virtual surrogate body 13 of the user 12 on the image display unit 11, and an operation signal input unit 15 to which an operation signal from the user 12 for the operation target 110 is input. The image 10 of the operation target 110 is an image captured by a camera 100 installed in the vicinity of the operation target 110. This embodiment thus differs from the more general first embodiment specifically in that the image of the operation target 110 is an image captured by the camera 100. The remaining configuration and operation are the same as in the first embodiment.
 The camera 100 is, for example, a video camera, and is arranged in the vicinity of the operation target 110. The image captured by the camera 100 is substantially the same as the scene viewed from the surrogate body 13; in this sense the camera 100 shares its viewpoint with the surrogate body 13. The camera 100 transmits the captured image to the image display unit 11, and the image display unit 11 displays the image received from the camera 100.
 According to this embodiment, a real operation target can be displayed on the image display unit. Because the displayed image is a subjective image seen from the virtual body, the user can operate it without incongruity.
[Third Embodiment]
 FIG. 3 shows a functional block diagram of the image operation device 2 according to the third embodiment. The image operation device 2 includes an image display unit 11 that displays an image 10 of an operation target 110 operated by a user 12, a surrogate body drawing unit 14 that draws a virtual surrogate body 13 of the user 12 on the image display unit 11, an operation signal input unit 15 to which an operation signal from the user 12 for the operation target 110 is input, and an operation unit 20. This embodiment is thus the image interface device of FIG. 1 with the operation unit 20 added.
 The operation unit 20 generates an operation signal when operated by the user 12. The operation unit 20 is, for example, a motion controller held in the hand of the user 12, but is not limited to this and may be any suitable controller such as a mouse, a joystick, or a game pad.
 While viewing the image 10 of the operation target displayed on the image display unit 11, the user 12 operates the operation unit 20 in order to manipulate the image 10. The operation unit 20 thereby generates an operation signal that determines the motion of the image 10, and transmits the generated operation signal to the operation signal input unit 15. The operation signal input unit 15 transmits the operation signal to the image display unit 11.
 The image 10 of the operation target may be an image of a remote operation target captured by a camera or the like, or may be a virtual image from a simulator, a game, or the like.
 According to this embodiment, while viewing the image of the operation target displayed on the image display unit, the user can manipulate that image on the screen without incongruity.
[Fourth Embodiment]
 FIG. 4 shows a functional block diagram of the operation target operation device 3 according to the fourth embodiment. The operation target operation device 3 includes an image display unit 11 that displays an image 10 of an operation target 110 operated by a user 12, a surrogate body drawing unit 14 that draws a virtual surrogate body 13 of the user 12 on the image display unit 11, an operation signal input unit 15 to which an operation signal from the user 12 for the operation target 110 is input, an operation unit 20, and an operation signal output unit 30. This embodiment is thus the image operation device 2 of FIG. 3 with the operation signal output unit 30 added.
 The operation signal output unit 30 receives the operation signal from the user 12 via the image display unit 11, and outputs this operation signal to the operation target 110.
 While viewing the image 10 of the operation target 110 displayed on the image display unit 11, the user 12 operates the operation unit 20 in order to operate the remote operation target 110. The operation unit 20 thereby generates an operation signal that determines the motion of the operation target 110, and transmits it to the operation signal input unit 15. The operation signal input unit 15 transmits the operation signal to the image display unit 11, the image display unit 11 transmits it to the operation signal output unit 30, and the operation signal output unit 30 outputs it to the operation target 110.
 According to this embodiment, while viewing the image of the operation target displayed on the image display unit, the user can operate the operation target remotely without incongruity.
[Fifth Embodiment]
 FIG. 5 shows a functional block diagram of the operation target operation system 4 according to the fifth embodiment. The operation target operation system 4 includes an image display unit 11 that displays an image 10 of an operation target 110 operated by a user 12, a surrogate body drawing unit 14 that draws a virtual surrogate body 13 of the user 12 on the image display unit 11, an operation signal input unit 15 to which an operation signal from the user 12 for the operation target 110 is input, an operation unit 20, an operation signal output unit 30, and a camera 100. This embodiment is thus the operation target operation device 3 of FIG. 4 with the camera 100 added.
 The camera 100 is, for example, a video camera, and is arranged in the vicinity of the operation target 110. The image captured by the camera 100 is substantially the same as the scene viewed from the surrogate body 13; in this sense the camera 100 shares its viewpoint with the surrogate body 13. The camera 100 transmits the captured image to the image display unit 11, and the image display unit 11 displays the image received from the camera 100.
 According to this embodiment, while viewing an image of the real operation target captured by the camera provided in the system, the user can operate the operation target remotely without incongruity.
 In some embodiments, the camera 100 may include a movement mechanism for changing its position. The movement mechanism moves the position of the camera 100 with respect to the operation target 110, for example by a motor. This allows the camera 100 to capture the operation target 110 from, for example, the front, the back, the left, the right, above, or below. The subjective image seen from the surrogate body 13 accordingly shows the operation target 110 from the front, the back, the left, the right, above, or below.
 In some embodiments, the camera 100 may include a rotation mechanism for changing its capture direction. The rotation mechanism rotates the camera 100 about a predetermined axis, for example by a motor; rotation about an axis corresponds to, for example, rolling, pitching, or yawing of the camera. This allows the camera 100 to capture the operation target 110 from various angles, so the subjective image seen from the surrogate body 13 likewise shows the operation target 110 from various angles.
 In some embodiments, the movement mechanism or rotation mechanism of the camera 100 may move the camera in the direction opposite to the user's operation. Furthermore, the image displayed on the image display unit may be the image captured by the camera 100 flipped left to right. The user then sees, on the image display unit 11, the scene viewed from the surrogate body but mirrored left-to-right relative to reality, and operates the operation target in the direction opposite to his or her own motion. In other words, the user feels as if operating the target in a world that is the left-right mirror of the real one. According to this embodiment, when, for example, a right-handed user would by bodily instinct approach from the right with the right hand but the environment around the operation target permits an approach only from the left, the user can still operate with the right hand without incongruity.
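The left-right reversed mode above amounts to two coupled transforms: mirror the displayed image horizontally, and negate the horizontal component of the user's operation signal. A minimal sketch, with the image and signal representations assumed purely for illustration:

```python
# Illustrative sketch of the mirrored operation mode: the camera image is
# flipped left-to-right for display, and the x component of the user's
# operation signal is negated so the target moves opposite to the user's
# own motion.

def mirror_image(image_rows):
    """Flip each row of the image left-to-right."""
    return [list(reversed(row)) for row in image_rows]

def mirror_operation(signal_xyz):
    """Negate the horizontal (x) component of an (x, y, z) operation signal."""
    x, y, z = signal_xyz
    return (-x, y, z)

flipped = mirror_image([[1, 2, 3]])
mirrored = mirror_operation((0.5, 0.0, 1.0))
```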
 In some embodiments, the camera 100 may be a stereo camera comprising two cameras. In this case, the image of the operation target 110 captured by the camera 100 appears three-dimensional to the user, with a sense of depth.
 In some embodiments, the inter-eye distance of the stereo camera may be variable according to the size of the surrogate body 13. In this case, the camera 100 may include an inter-eye distance adjustment mechanism that dynamically adjusts the inter-eye distance using a motor or the like. Typically, the inter-eye distance of the stereo camera is adjusted to be longer the larger the surrogate body 13 is, and shorter the smaller the surrogate body 13 is. Because the inter-eye distance of the stereo camera is adjusted to match the size of the surrogate body 13, the user sees an image with a sense of depth that is natural for each operation target 110.
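The inter-eye distance adjustment above can be sketched by scaling a human stereo baseline with the surrogate body's height. The roughly 65 mm baseline, the 1.7 m reference height, and the linear scaling rule are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch: set the stereo camera baseline in proportion to the
# surrogate body's height, so depth perception stays natural at each scale
# (longer baseline for a "giant", shorter for a "dwarf").

HUMAN_IPD_M = 0.065     # typical human inter-pupillary distance (assumed)
HUMAN_HEIGHT_M = 1.7    # reference height (assumed)

def stereo_baseline_for(surrogate_height_m):
    """Baseline (metres) for a surrogate body of the given height."""
    return HUMAN_IPD_M * (surrogate_height_m / HUMAN_HEIGHT_M)

giant_baseline = stereo_baseline_for(10.0)
human_baseline = stereo_baseline_for(1.7)
```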
 In some embodiments, the camera 100 may be a depth camera that detects the distance to the operation target 110 in real time. In this case, the image of the operation target 110 captured by the camera 100 gives the user a more realistic sense of depth and immersion.
 In some embodiments, the camera 100 may comprise a plurality of cameras each capturing a different field of view. The images of the plurality of fields of view captured by these cameras may be displayed simultaneously, or may be displayed separately with the user switching among them. In this embodiment, the user can view the operation target 110 from a plurality of viewpoints of the surrogate body 13.
[Sixth Embodiment]
 FIG. 6 is a flowchart showing the processing procedure of the operation target presentation method according to the sixth embodiment.
 Step S1 acquires information on the size of the operation target to be presented. The size information may be acquired in real time by a camera or the like, or may be acquired from data stored in advance in a database or the like. The size of the operation target is arbitrary; for example, a construction machine may measure tens of meters and a micromachine a few millimeters. Once the size information of the operation target has been acquired, the process moves to step S2.
 Step S2 draws a surrogate body on the image display unit according to the size of the operation target. This surrogate body virtually stands in for the user's body. It may be a concrete image resembling the user's body, or an abstract image such as a silhouette, a line drawing, or a translucent image. The size of the surrogate body is variable according to the size of the operation target: typically, the surrogate body is drawn as a larger body the larger the operation target is, and as a smaller body the smaller the operation target is. For example, the surrogate body may be drawn as being substantially the same size as the operation target. Once the surrogate body has been drawn on the image display unit, the process moves to step S3.
 Step S3 displays the image of the operation target on the image display unit as a subjective image of the surrogate body — that is, as an image seen from the eyes of the surrogate body drawn on the image display unit. The image of the operation target may be a real image of a remote operation target captured by a camera or the like, or may be a virtual image from a simulator, a game, or the like.
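Steps S1 to S3 above can be sketched as a single presentation function. The dictionary-based data structures and the "same size as the target" rule used in step S2 are illustrative assumptions:

```python
# Minimal sketch of the presentation method of FIG. 6 (steps S1-S3).

def present_target(target):
    # S1: acquire size information (e.g. from a camera or a database)
    size_m = target["size_m"]
    # S2: draw a surrogate body whose size depends on the target's size;
    # here it is drawn substantially the same size as the target
    surrogate = {"size_m": size_m}
    # S3: display the target as the subjective image seen from the eyes
    # of the drawn surrogate body
    return {"viewpoint": "surrogate", "surrogate": surrogate, "target": target}

frame = present_target({"name": "construction_machine", "size_m": 20.0})
```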
 According to this embodiment, the operation target can be presented to the user. Because the presented image is a subjective image seen from the virtual body, the user can operate it without incongruity.
[Seventh Embodiment]
 The seventh embodiment is an operation target presentation program. This program causes a computer to execute step S1 of displaying an image of an operation target operated by a user on an image display unit, step S2 of drawing a virtual surrogate body of the user on the image display unit, and step S3 of manipulating the image of the operation target in accordance with an operation signal from the user. The processing procedure of each step is the same as described in the sixth embodiment above, so detailed description is omitted.
 According to this embodiment, the operation target can be presented to the user using a computer. Because the presented image is a subjective image seen from the virtual body, the user can operate it without incongruity.
(Modifications)
 The present invention has been described above on the basis of embodiments. These embodiments are illustrative; those skilled in the art will understand that various modifications to their components and combinations of processing steps are possible, and that such modifications also fall within the scope of the present invention.
[Modification 1]
 The movement mechanism and rotation mechanism of the camera of the embodiments may be operated using a head-mounted display. In this case the head-mounted display transmits, to the movement mechanism and rotation mechanism, signals for operating them. These signals move the capture position and direction of the camera in conjunction with the motion of the user's head.
 According to this modification, the user can freely control the capture position and direction for the operation target by moving his or her head.
[Modification 2]
 The operation target operation device of the embodiments may include a microphone in the vicinity of the operation target. The sound collected by the microphone is transmitted to the user as the sound heard by the surrogate body. The user operates the operation target while viewing the image displayed on the image display unit and listening to the sound collected by the microphone.
 According to this modification, the user shares not only the viewpoint but also hearing with the surrogate body, and can therefore operate the operation target with a more natural sensation.
 Each modification provides the same operation and effects as the embodiments.
 Any combination of the embodiments and modifications described above is also useful as an embodiment of the present invention. A new embodiment produced by such a combination has the effects of each of the combined embodiments and modifications.
 The present invention has a wide range of applications, including remote operation of construction machinery, surgical support, piloting of humanoid robots, vehicle operation, simulators, and games. The technique according to the present invention can be used in all of these applications.
 1 ・・ image interface device
 2 ・・ image operation device
 3 ・・ operation target operation device
 4 ・・ operation target operation system
 10 ・・ image
 11 ・・ image display unit
 12 ・・ user
 13 ・・ surrogate body
 14 ・・ surrogate body drawing unit
 15 ・・ operation signal input unit
 20 ・・ operation unit
 30 ・・ operation signal output unit
 100 ・・ camera
 110 ・・ operation target
 S1 ・・ step of acquiring information on the size of the operation target
 S2 ・・ step of drawing a surrogate body according to the size of the operation target on the display unit
 S3 ・・ step of displaying the image of the operation target on the display unit as a subjective image of the surrogate body

Claims (31)

  1.  An image interface device comprising:
     an image display unit that displays an image of an operation object operated by a user;
     a surrogate body drawing unit that draws a virtual surrogate body of the user on the image display unit; and
     an operation signal input unit to which an operation signal for the operation object is input from the user,
     wherein the size of the surrogate body is variable according to the size of the operation object, and
     the image of the operation object displayed on the image display unit is a subjective image viewed from the surrogate body.
  2.  The image interface device according to claim 1, wherein the surrogate body has a plurality of sizes, and the plurality of sizes are dynamically switchable with respect to the operation object.
  3.  The image interface device according to claim 2, wherein the surrogate body has a plurality of positions with respect to the operation object, and the plurality of positions are dynamically switchable.
  4.  The image interface device according to any one of claims 1 to 3, wherein the position of the surrogate body is variable according to the position of the operation object.
  5.  The image interface device according to any one of claims 1 to 4, wherein the surrogate body moves on the image display unit together with an operation performed on the image of the operation object, and
     the movement speed of the surrogate body on the image display unit is variable according to the movement speed of the operation object.
  6.  The image interface device according to any one of claims 1 to 5, wherein the position or orientation of the surrogate body with respect to the operation object is variable.
  7.  The image interface device according to any one of claims 1 to 6, wherein the operation object includes a plurality of operation objects of different sizes, and
     when the plurality of operation objects are operated while being switched in succession, the size of the surrogate body is continuously variable.
  8.  The image interface device according to any one of claims 1 to 7, wherein the surrogate body includes two surrogate bodies of different sizes, and
     the image of the operation object displayed on the image display unit is a mixture of image components of the operation object viewed from the two surrogate bodies of different sizes.
  9.  The image interface device according to any one of claims 1 to 8, wherein the image of the operation object is an image captured by a camera installed in the vicinity of the operation object.
  10.  An image manipulation device comprising:
     an image display unit that displays an image of an operation object operated by a user;
     a surrogate body drawing unit that draws a virtual surrogate body of the user on the image display unit;
     an operation signal input unit to which an operation signal for the operation object is input from the user; and
     an operation unit that generates an operation signal when operated by the user,
     wherein the size of the surrogate body is variable according to the size of the operation object, and
     the image of the operation object displayed on the image display unit is a subjective image viewed from the surrogate body.
  11.  The image manipulation device according to claim 10, wherein the surrogate body has a plurality of sizes, and the plurality of sizes are dynamically switchable with respect to the operation object.
  12.  The image manipulation device according to claim 11, wherein the surrogate body has a plurality of positions with respect to the operation object, and the plurality of positions are dynamically switchable.
  13.  An object operation device comprising:
     an image display unit that displays an image of an operation object operated by a user;
     a surrogate body drawing unit that draws a virtual surrogate body of the user on the image display unit;
     an operation signal input unit to which an operation signal for the operation object is input from the user;
     an operation unit that generates an operation signal when operated by the user; and
     an operation signal output unit that outputs the user's operation signal to the operation object,
     wherein the size of the surrogate body is variable according to the size of the operation object, and
     the image of the operation object displayed on the image display unit is a subjective image viewed from the surrogate body.
  14.  The object operation device according to claim 13, wherein the surrogate body has a plurality of sizes, and the plurality of sizes are dynamically switchable with respect to the operation object.
  15.  The object operation device according to claim 14, wherein the surrogate body has a plurality of positions with respect to the operation object, and the plurality of positions are dynamically switchable.
  16.  An object operation system comprising:
     an image display unit that displays an image of an operation object operated by a user;
     a surrogate body drawing unit that draws a virtual surrogate body of the user on the image display unit;
     an operation signal input unit to which an operation signal for the operation object is input from the user;
     an operation unit that generates an operation signal when operated by the user;
     an operation signal output unit that outputs the user's operation signal to the operation object; and
     a camera that captures images of the operation object,
     wherein the size of the surrogate body is variable according to the size of the operation object, and
     the image of the operation object displayed on the image display unit is a subjective image viewed from the surrogate body.
  17.  The object operation system according to claim 16, wherein the surrogate body has a plurality of sizes, and the plurality of sizes are dynamically switchable with respect to the operation object.
  18.  The object operation system according to claim 17, wherein the surrogate body has a plurality of positions with respect to the operation object, and the plurality of positions are dynamically switchable.
  19.  The object operation system according to claim 17 or 18, wherein the camera includes a moving mechanism for changing its position.
  20.  The object operation system according to claim 19, wherein the camera includes a rotation mechanism for changing its shooting direction.
  21.  The object operation system according to claim 20, wherein the moving mechanism and the rotation mechanism move the camera in a direction opposite to the user's operation direction, and
     the image displayed on the image display unit is a left-right mirror image of the image captured by the camera.
  22.  The object operation system according to any one of claims 16 to 21, wherein the camera is a stereo camera comprising two cameras.
  23.  The object operation system according to claim 22, wherein the interocular distance of the stereo camera is variable according to the size of the surrogate body.
  24.  The object operation system according to claim 16, wherein the camera is a depth camera that detects the distance to the operation object in real time.
  25.  The object operation system according to claim 16, wherein the camera comprises a plurality of cameras that capture different fields of view.
  26.  An operation object presentation method comprising:
     a step of acquiring information on the size of an operation object;
     a step of drawing, on an image display unit, a surrogate body according to the size of the operation object; and
     a step of displaying an image of the operation object on the image display unit as a subjective image of the surrogate body,
     wherein the size of the surrogate body is variable according to the size of the operation object, and
     the image of the operation object is a subjective image viewed from the surrogate body.
  27.  The operation object presentation method according to claim 26, wherein the surrogate body has a plurality of sizes, and the plurality of sizes are dynamically switchable with respect to the operation object.
  28.  The operation object presentation method according to claim 27, wherein the surrogate body has a plurality of positions with respect to the operation object, and the plurality of positions are dynamically switchable.
  29.  An operation object presentation program causing a computer to execute:
     a step of acquiring information on the size of an operation object;
     a step of drawing, on an image display unit, a surrogate body according to the size of the operation object; and
     a step of displaying an image of the operation object on the image display unit as a subjective image of the surrogate body,
     wherein the size of the surrogate body is variable according to the size of the operation object, and
     the image of the operation object is a subjective image viewed from the surrogate body.
  30.  The operation object presentation program according to claim 29, wherein the surrogate body has a plurality of sizes, and the plurality of sizes are dynamically switchable with respect to the operation object.
  31.  The operation object presentation program according to claim 30, wherein the surrogate body has a plurality of positions with respect to the operation object, and the plurality of positions are dynamically switchable.
PCT/JP2020/019706 2019-05-20 2020-05-19 Image interface device, image manipulation device, manipulation-object manipulation device, manipulation-object manipulation system, manipulation-object presentation method, and manipulation-object presentation program WO2020235541A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2021520790A JP7536312B2 (en) 2019-05-20 2020-05-19 Image interface device, image operation device, operation object operation device, operation object operation system, operation object presentation method, and operation object presentation program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962850046P 2019-05-20 2019-05-20
US62/850,046 2019-05-20

Publications (1)

Publication Number Publication Date
WO2020235541A1 true WO2020235541A1 (en) 2020-11-26

Family

ID=73459415

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/019706 WO2020235541A1 (en) 2019-05-20 2020-05-19 Image interface device, image manipulation device, manipulation-object manipulation device, manipulation-object manipulation system, manipulation-object presentation method, and manipulation-object presentation program

Country Status (2)

Country Link
JP (1) JP7536312B2 (en)
WO (1) WO2020235541A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05189484A (en) * 1991-07-12 1993-07-30 Toshiba Corp Information retrieval device
JPH08257948A (en) * 1995-03-20 1996-10-08 Yaskawa Electric Corp Remote control device for robot
JP2011110620A (en) * 2009-11-24 2011-06-09 Toyota Industries Corp Method of controlling action of robot, and robot system
WO2018097223A1 (en) * 2016-11-24 2018-05-31 国立大学法人京都大学 Robot control system, machine control system, robot control method, machine control method, and recording medium
JP2018111165A (en) * 2017-01-12 2018-07-19 ファナック株式会社 Calibration device of visual sensor, method and program
JP2019038048A (en) * 2017-08-23 2019-03-14 株式会社日立製作所 Robot procurement device and robot procurement method

Also Published As

Publication number Publication date
JPWO2020235541A1 (en) 2020-11-26
JP7536312B2 (en) 2024-08-20

Similar Documents

Publication Publication Date Title
US12017351B2 (en) Remote control system, information processing method, and non-transitory computer-readable recording medium
JP4172816B2 (en) Remote operation method and system with a sense of reality
WO2018086224A1 (en) Method and apparatus for generating virtual reality scene, and virtual reality system
KR20130028878A (en) Combined stereo camera and stereo display interaction
JP7454818B2 (en) Animation production method
JP2022184958A (en) animation production system
JP2023116432A (en) animation production system
JP7104539B2 (en) Simulation system and program
WO2017191702A1 (en) Image processing device
JP7169130B2 (en) robot system
JP2019219702A (en) Method for controlling virtual camera in virtual space
JP2020031413A (en) Display device, mobile body, mobile body control system, manufacturing method for them, and image display method
JP6964302B2 (en) Animation production method
WO2020235541A1 (en) Image interface device, image manipulation device, manipulation-object manipulation device, manipulation-object manipulation system, manipulation-object presentation method, and manipulation-object presentation program
CN108014491A (en) A VR game system
US8307295B2 (en) Method for controlling a computer generated or physical character based on visual focus
JP4546953B2 (en) Wheel motion control input device for animation system
JP2000047563A (en) Holding action simulation device for object
JP7667639B2 (en) Animation Production System
CN115033105B (en) A large-space mobile platform that supports natural interaction of multiple tactile sensations with bare hands
JP7390542B2 (en) Animation production system
JP7546400B2 (en) Animation Production System
JP7218872B2 (en) animation production system
KR20100004438A (en) Method for handling 3d object
Casals Assisted Teleoperation Through the Merging of Real and Virtual Images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20808926

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021520790

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20808926

Country of ref document: EP

Kind code of ref document: A1