CN109697002B - Method, related equipment and system for editing object in virtual reality - Google Patents
- Publication number
- CN109697002B (application number CN201711005203.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- target object
- adsorption plane
- determining
- grid
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
- G06F2203/04806—Zoom, i.e. interaction techniques or interactors for controlling the zooming operation
- G06T2219/2016—Rotation, translation, scaling
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Architecture (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The embodiment of the invention discloses a method, related equipment and a system for editing an object in virtual reality. The method provided by the embodiment of the invention comprises the following steps: the processing equipment receives the operation of selecting an object detected by the input equipment; determines a target object to be edited according to the operation of selecting the object; determines a target adsorption plane of the target object in a space editing area; receives the operation of the moving object detected by the input equipment; determines a target position of the target object according to the operation of the moving object; and moves the target object to the target position and displays it through the display device. The embodiment of the invention also provides a processing device and a system, which are used for editing an object quickly and provide a simple object editing method.
Description
Technical Field
The invention relates to the field of computers, in particular to a method, related equipment and a system for editing an object in virtual reality.
Background
Virtual Reality (VR) technology is a computer simulation technique for creating and experiencing a virtual world. A computer is used to generate a simulated environment, a system simulation that fuses multi-source information into interactive three-dimensional dynamic views and physical behaviors, and the user is immersed in that environment.
In virtual reality, objects frequently need to be edited, for example moved, rotated, or zoomed. In the traditional method, object editing is implemented through the Unity3D engine. Unity3D is a multi-platform, fully integrated game engine for creating interactive content such as three-dimensional video games, architectural visualizations, and real-time three-dimensional animations. Unity3D in the prior art can edit objects under VR, but the operation is difficult. For example, editing an object through Unity3D requires the user to understand the Unity3D menus, view interface, and so on, to understand the coordinate system and input system in the scene, and to learn the basic elements of resource import: mesh, texture, map, animation, and the like.
In the traditional mode, therefore, editing objects in VR is difficult for ordinary users to master because the operation is complex.
Disclosure of Invention
The embodiment of the invention provides a method, related equipment and a system for editing an object in virtual reality, which are used for quickly editing the object and providing a simple method for editing the object.
In a first aspect, an embodiment of the present invention provides a method for editing an object in virtual reality, including:
receiving an operation of selecting an object;
determining a target object to be edited according to the operation of selecting the object;
determining a target adsorption plane of the target object in a space editing area;
receiving an operation of a moving object detected by the input device;
determining a target position of the target object according to the operation of the moving object;
and moving the target object to the target position and displaying the target object through a display device.
In a second aspect, an embodiment of the present invention provides a processing device, including:
the first receiving module is used for receiving the operation of selecting the object;
the object determining module is used for determining a target object to be edited according to the operation of the selected object received by the first receiving module;
the adsorption plane determining module is used for determining, in the space editing area, a target adsorption plane of the target object determined by the object determining module;
the second receiving module is used for receiving the operation of the moving object detected by the input equipment;
a position determining module, configured to determine a target position of the target object according to the operation of the moving object received by the second receiving module and the target adsorption plane determined by the adsorption plane determining module;
and the moving module is used for moving the target object to the target position determined by the position determining module and displaying the target object through display equipment.
In a third aspect, an embodiment of the present invention provides a processing device, including:
a memory for storing computer executable program code;
a transceiver, and
a processor coupled with the memory and the transceiver;
wherein the program code comprises instructions which, when executed by the processor, cause the processing device to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a virtual reality system, including:
the method comprises the following steps: the display device and the input device are connected with the processing device;
the input device detects an operation of selecting an object;
the processing device receives an operation of selecting an object detected by the input device;
the processing equipment determines a target object to be edited according to the operation of the selected object;
the processing equipment determines a target adsorption plane of the target object in a space editing area;
the processing device receives operation of a moving object detected by the input device;
the processing equipment generates rays according to the operation of the moving object and sends the data of the rays to the display equipment;
the display device displays the ray;
the processing equipment determines the target position of the target object according to the intersection point of the ray and the target adsorption plane;
the processing device moving the target object to the target location;
the display device displays that the target object moves to the target position.
According to the technical scheme, the embodiment of the invention has the following advantages:
the processing equipment receives the operation of selecting the object detected by the input equipment; then determines a target object to be edited according to the operation of selecting the object; determines a target adsorption plane of the target object in a space editing area; receives the operation of the moving object detected by the input equipment; determines a target position of the target object according to the operation of the moving object; and moves the target object to the target position and displays it through the display device. The embodiment of the invention enables objects to be edited quickly, provides a simple object editing method with a small amount of calculation, and can be easily mastered by an ordinary user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present invention, and that those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of a virtual reality system according to an embodiment of the invention;
FIG. 2 is a schematic view of an adsorption plane in an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating steps of a method for editing an object in virtual reality according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a scene displayed in the display device according to the embodiment of the present invention;
FIG. 5 is a diagram illustrating a spatial editing region according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a scenario in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a side view scenario for determining an intersection point in an embodiment of the present invention;
FIG. 8 is a schematic diagram of a side view scenario for determining an intersection point in an embodiment of the present invention;
FIG. 9 is a diagram illustrating a center position of a grid where a crossing point is located according to an embodiment of the present invention;
FIG. 10 is a diagram illustrating an offset vector according to an embodiment of the present invention;
FIG. 11 is a schematic side view of a target grid in an embodiment of the invention;
FIG. 12 is a schematic side view of an offset vector according to an embodiment of the present invention;
FIG. 13 is a schematic diagram of a scene in which a target object moves to a target position according to an embodiment of the invention;
FIG. 14 is a schematic side view of an embodiment of the present invention illustrating the movement of a target object to a target position;
FIG. 15 is a diagram illustrating a scenario for confirming placement of a target object according to an embodiment of the present invention;
FIG. 16 is a schematic side view of an embodiment of the present invention illustrating the movement of the target object to the target position;
FIG. 17 is a schematic view of the direction of translation according to an embodiment of the present invention;
FIG. 18 is a diagram illustrating a scenario of an edit mode in accordance with an embodiment of the present invention;
FIG. 19 is a schematic view of the direction of rotation of an object in an embodiment of the present invention;
FIG. 20 is a schematic diagram of an embodiment of a processing apparatus according to an embodiment of the invention;
FIG. 21 is a schematic structural diagram of another embodiment of a processing apparatus according to an embodiment of the present invention;
FIG. 22 is a schematic structural diagram of another embodiment of a processing apparatus according to an embodiment of the present invention;
FIG. 23 is a schematic structural diagram of another embodiment of a processing apparatus according to an embodiment of the present invention;
FIG. 24 is a schematic structural diagram of another embodiment of a processing apparatus according to an embodiment of the present invention;
FIG. 25 is a schematic structural diagram of another embodiment of a processing apparatus according to an embodiment of the present invention;
FIG. 26 is a schematic structural diagram of another embodiment of a processing apparatus according to an embodiment of the present invention;
fig. 27 is a schematic structural diagram of another embodiment of a processing apparatus according to an embodiment of the present invention.
Detailed Description
The embodiment of the invention provides a method, related equipment and a system for editing an object in virtual reality, which are used for quickly editing the object and providing a simple method for editing the object.
In order to make the technical solutions of the present invention better understood, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments that can be derived from the embodiments of the present invention by a person of ordinary skill in the art are intended to fall within the scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
An embodiment of the present invention provides a method for editing an object in virtual reality, which is applied to a virtual reality VR system. Please refer to fig. 1, which is a schematic diagram of the VR system. The VR system includes a display device 101, an input device 102, and a processing device 103, and both the input device 102 and the display device 101 are connected to the processing device 103. In one application scenario, the processing device 103 may be a computer, a mobile phone, a palm computer, or the like. The display device 101 may be a VR head-mounted display (virtual reality head-mounted display device), such as VR glasses or a VR helmet. The VR head-mounted display is a device which uses a head-mounted display to seal the user's vision and hearing from the outside and guide the user to feel present in a virtual environment. The input device 102 is a device capable of mapping real-world environment data to the virtual world and of inputting user commands to the VR system; it may be a sensing glove, a sensing handle, or the like.
The embodiment of the invention provides a method for editing a virtual object in a VR system. The input device detects an operation of selecting an object, for example, an object in the VR, which may be a bed, a box, a table, etc. in the VR, is selected by the input device; the input equipment detects the operation of a selection object input by a user to generate corresponding operation data, then the input equipment transmits the operation data to the processing equipment, and the processing equipment determines a target object to be edited according to the operation data; the processing equipment calculates the space editing area of the target object and determines a target adsorption plane of the target object in the space editing area; when the position of the input equipment in the actual environment changes, the input equipment detects the operation of the moving object, the input equipment transmits the data of the moving operation to the processing equipment, the processing equipment generates rays according to the received data of the operation of the moving object, the data of the rays are sent to the display equipment, and the display equipment displays the rays; the processing equipment determines the target position of the target object according to the intersection point of the ray and the target adsorption plane and moves the target object to the target position; the display device displays that the target object moves to the target position.
For convenience of understanding, words involved in the embodiments of the present invention are explained first.
Adsorption plane: the plane on which an object should be placed according to its attributes in a real scene; an adsorption plane can be preset according to the attributes of an object. For example, a bed or a table is placed on the ground, a wall painting is placed on the wall, and a ceiling lamp is attached to the ceiling.
In a virtual scene, X, Y and Z with plus or minus signs may be used to represent the six faces of an editable area, where the editable area may be understood as a rectangular solid region, and all six faces of the editable rectangular solid region may be used as adsorption planes. Please understand the adsorption planes in conjunction with fig. 2 and Table 1 below; fig. 2 is a schematic diagram of the adsorption planes:
TABLE 1
Object attribute            Example objects          Adsorption plane
Adsorbs to the floor        bed, table               (-Z) plane
Adsorbs to the ceiling      ceiling lamp             (+Z) plane
Adsorbs to a wall           mural (wall painting)    (+X), (-X), (+Y) or (-Y) plane
An object, which may also be understood as an object in the virtual space, may have N adsorption planes, where N is a positive integer greater than or equal to 1, and different objects may have different numbers of adsorption planes. As shown in Table 1 above, the attribute of the bed or the table is adsorbing to the floor, and the corresponding adsorption plane is the (-Z) plane; the attribute of the ceiling lamp is adsorbing to the ceiling, and the corresponding adsorption plane is the (+Z) plane; the attribute of the mural is adsorbing to a wall, and the corresponding adsorption plane may be one of the (+X), (-X), (+Y) and (-Y) planes. That is, the attribute of an object has a corresponding relationship with the adsorption plane, and once the attribute of the object is determined, the adsorption plane of the object can be determined according to the attribute and the corresponding relationship. As can be seen from the example in Table 1 above, the mural may correspond to 4 adsorption planes. When an object corresponds to more than one adsorption plane, a preset adsorption plane of the object may be set in advance, and when the user does not rotate the object, the adsorption plane of the object is the preset adsorption plane. For example, the preset adsorption plane corresponding to the mural may be set to be the (-X) plane, and if the mural is rotated by 90 degrees around the Y axis, the adsorption plane corresponding to the mural becomes the (+Y) plane.
It should be noted that, in table 1, the 6 adsorption planes are represented by the form of plus or minus signs X, Y, Z, which are only illustrative and do not limit the embodiments of the present invention. For example, A, B, C, D, E, F may be used to indicate different absorption planes, as long as different planes of the editing area can be distinguished, and the forms are various, which are not exhaustive here.
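As an illustration of the attribute-to-plane correspondence described above, the following sketch shows one way the lookup might be organised in code; the mapping table and function names (ATTRIBUTE_TO_PLANES, preset_plane) are hypothetical and not part of the patent.

```python
# Hypothetical sketch of the attribute -> adsorption plane correspondence in Table 1.
# Plane labels follow the +/-X, +/-Y, +/-Z notation used in the description.
ATTRIBUTE_TO_PLANES = {
    "adsorb_floor":   ["-Z"],                    # bed, table
    "adsorb_ceiling": ["+Z"],                    # ceiling lamp
    "adsorb_wall":    ["+X", "-X", "+Y", "-Y"],  # mural / wall painting
}

def preset_plane(attribute: str) -> str:
    """Return the preset adsorption plane: the first plane listed for the attribute."""
    return ATTRIBUTE_TO_PLANES[attribute][0]

if __name__ == "__main__":
    print(preset_plane("adsorb_wall"))  # "+X" before the user rotates the object
```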
Referring to fig. 3, an embodiment of the present invention provides an embodiment of a method for editing an object in virtual reality, where the embodiment is described with a processing device as an execution subject, and a specific flow of steps includes:
The input device detects an operation of selecting an object input by a user and transmits data of the operation to the processing device, and the processing device receives the operation of selecting the object detected by the input device.
As can be understood in conjunction with fig. 4, fig. 4 is a schematic view of a scene displayed in the display device. In one application scenario, a display device displays objects included in a menu, a user sees objects in the menu in the display device, for example, objects in the menu include beds, glasses, tables, wall paintings and the like, the user operates an input device (such as a sensing handle), a processing device receives an operation of a selection object detected by the input device, a ray is generated according to the operation of the selection object, and the display device displays the ray.
In the virtual environment, the ray selects a target object, e.g., the ray points to a "bed", the target object is a "bed", and when the processing device determines the target object, the VR system enters an editing mode for the target object.
After the processing device determines the target object, the processing device calculates a spatial editing region of the target object. Please refer to fig. 5 and fig. 6, in which fig. 5 is a schematic diagram of a spatial editing area, and fig. 6 is a schematic diagram of a scene. In an application scenario, a cuboid can be set as the space editing area, and then the length, the width and the height of the cuboid are equally divided to form a grid structure in a three-dimensional space, wherein the side length of each equally divided grid of the grid can be 0.5 m. It should be noted that the size of the grid in the spatial editing area is illustrated herein for convenience only and is not meant to limit the invention.
It should be noted that step 303 is an optional step, and is not performed every time the target object is edited, and this step is performed once in the initialization process, and may not be performed in the subsequent editing process after the initialization is completed, that is, after the spatial editing area of the target object has been calculated.
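A minimal sketch of this initialization, assuming a cuboid editing area divided into cubic cells of 0.5 m; the class and method names (EditRegion, cell_index, cell_center) are illustrative only.

```python
# Hypothetical sketch: divide a cuboid space editing area into a 3D grid of cells.
from dataclasses import dataclass

@dataclass
class EditRegion:
    size_x: float      # length of the cuboid along X, in metres
    size_y: float      # width along Y
    size_z: float      # height along Z
    cell: float = 0.5  # side length of each grid cell

    def cell_index(self, x: float, y: float, z: float) -> tuple:
        """Index (i, j, k) of the grid cell containing a point of the region."""
        return (int(x // self.cell), int(y // self.cell), int(z // self.cell))

    def cell_center(self, i: int, j: int, k: int) -> tuple:
        """Center of cell (i, j, k); used later as the snapped position on a plane."""
        half = self.cell / 2.0
        return (i * self.cell + half, j * self.cell + half, k * self.cell + half)

region = EditRegion(size_x=4.0, size_y=3.0, size_z=2.5)
print(region.cell_index(1.3, 0.2, 0.0), region.cell_center(2, 0, 0))
```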
Moving the target object and placing it at a proper position requires determining the target adsorption plane. It can be understood that a "bed" needs to be placed on the floor and not on the ceiling, and a "ceiling lamp" needs to be placed on the ceiling and not on a wall; the target adsorption plane of the target object in the space editing area therefore needs to be determined.
The processing equipment determines the attribute of the target object, and then determines the target adsorption plane of the target object according to the corresponding relation between the attribute and the adsorption plane.
The target adsorption plane is a preset adsorption plane, and it can be understood that the number of the adsorption planes corresponding to the target object is at least one.
In one case, the target object has only one adsorption plane. For example, when the target object is a "bed", the "bed" has only the (-Z) adsorption plane, which is the preset adsorption plane.
In another case, the target object has at least two adsorption planes. For example, when the target object is a "mural", since the "mural" has 4 adsorption planes, which may be the (+X), (-X), (+Y) or (-Y) plane, the preset adsorption plane of the "mural" is preset to be the (+X) plane. That is, after the target object is selected, if the user does not rotate the "mural", the target adsorption plane of the target object is the preset adsorption plane.
In another case, the number of the adsorption planes corresponding to the target object is at least two, the target adsorption plane is an adsorption plane after the target object rotates, and an angle of each rotation of the target object may be set to 90 degrees. Firstly, the processing device determines a preset adsorption plane according to the attributes of the target object and the corresponding relation between the attributes and the adsorption plane, if the processing device determines that the preset adsorption plane of the wall painting is a (+ X) plane, the processing device receives the operation of the rotating object detected by the input device, and then the processing device determines the target adsorption plane corresponding to the target object after the target object rotates on the basis of the preset adsorption plane according to the operation of the rotating object. For example, the input device detects an operation of rotating a "mural" input by a user, the "mural" is correspondingly rotated by 90 degrees around the Y-axis, and after the "mural" is rotated, the corresponding target adsorption plane is a (-Y) plane.
The user performs an operation of moving the object, for example, the user holds an input device (e.g., a handle), changes the position of the handle in the actual environment, the input device detects the operation of moving the object, generates data of the operation of moving the object, and then transmits the data of the operation to the processing device, and the processing device receives the operation of moving the object detected by the input device.
The processing device generates a ray according to the operation of the moving object and then displays the ray through the display device, that is, the display device displays the current virtual scene while displaying the ray, and the user sees one ray through the display device.
In one application scenario, a user is in a real space, e.g., a room, wears a VR headset and holds a sensing handle. When the user points the sensing handle at the floor of the real room, the ray in the VR headset points to the floor of the virtual space; when the user points the sensing handle at the ceiling of the real room, the ray in the VR headset points to the ceiling of the virtual space.
Step 307: the processing device determines the target position of the target object according to the intersection point of the ray and the target adsorption plane.
The target position is a position where the user wishes to place the target object in the virtual space displayed by the display device.
For the operation situation of the user, there may be two cases:
1. As will be understood in conjunction with fig. 7, which is a schematic side view of a scene from which the intersection point is determined: the user operates correctly and points at the target adsorption plane, so the ray has an intersection point with the target adsorption plane. For example, the target adsorption plane of the "bed" is the (-Z) plane; the user operates the input device so that the ray points to the ground, i.e. the (-Z) plane, and the intersection point of ray a and the (-Z) plane is o.
2. As will be understood in conjunction with fig. 8, which is a schematic diagram of a side view scene for determining the intersection point: the user operation is not accurate and does not point at the target adsorption plane, so the ray does not intersect the target adsorption plane; for example, the ray points to the ceiling, the (+Z) plane. In this case the ray can be mirrored: as shown in fig. 8, the mirrored ray of ray b is c, and the intersection point of mirrored ray c with the target adsorption plane is d.
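The two cases above amount to a ray-plane intersection with an optional mirroring step. Below is a minimal sketch under the assumption of axis-aligned adsorption planes; the function name intersect_plane is illustrative, not from the patent.

```python
# Hypothetical sketch: intersect a ray with an axis-aligned adsorption plane.
# The plane is given by an axis index (0 = X, 1 = Y, 2 = Z) and a constant value on that axis.

def intersect_plane(origin, direction, axis, value):
    """Return the intersection point, mirroring the ray if it points away from the plane."""
    d = list(direction)
    if d[axis] * (value - origin[axis]) <= 0:
        d[axis] = -d[axis]  # case 2: mirror the ray so it faces the target plane
    if d[axis] == 0:
        return None         # parallel to the plane, still no intersection
    t = (value - origin[axis]) / d[axis]  # parametric distance along the (mirrored) ray
    return tuple(o + t * di for o, di in zip(origin, d))

# A handle held 1.5 m above the floor, aimed slightly upward: the direct ray misses the
# floor plane (-Z, here z = 0), so the mirrored ray is intersected instead.
print(intersect_plane(origin=(0.0, 0.0, 1.5), direction=(1.0, 0.0, 0.25), axis=2, value=0.0))
# -> (6.0, 0.0, 0.0)
```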
In a first possible implementation manner, the processing device determines an intersection point of the ray and the target adsorption plane, and the processing device determines the intersection point as a target position, where the target object is placed.
In a second possible implementation manner, the processing device determines the intersection point of the ray and the target adsorption plane; in this embodiment, the case shown in fig. 7 is taken as an example. Then, the processing device calculates the center position of the grid where the intersection point is located; please understand with reference to fig. 9, which is a schematic diagram of the center position of the grid where the intersection point is located. The grid where the intersection point o is located is denoted by g, and the center position of grid g on the target adsorption plane is denoted by f, i.e., the center position of the grid where the intersection point is located is the point f.
Then, the processing device calculates a target position of the target object according to the center position of the grid and the offset vector, wherein the target position of the target object is calculated by the following formula 1:
P_Center = P_Exp + V_Offset, where P_Center is the target position, V_Offset is the offset vector, and P_Exp is the center position of grid g.
The offset vector is a vector from a central point on a side surface corresponding to the target adsorption plane on the target grid to a preset point on the target object, and the target grid is a minimum grid area of a bounding box for accommodating the target object. Please refer to fig. 10 for understanding the offset vector in the embodiment of the present invention, wherein fig. 10 is a schematic diagram of the offset vector.
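Equation 1 is a single vector addition. Below is a minimal sketch, assuming the grid-cell center and the precomputed offset vector are already available; the variable names mirror the formula rather than any actual implementation.

```python
# Hypothetical sketch of Equation 1: P_Center = P_Exp + V_Offset.
def target_position(p_exp, v_offset):
    """Target position: center of the grid cell on the target adsorption plane
    plus the offset vector of the object."""
    return tuple(c + o for c, o in zip(p_exp, v_offset))

p_exp = (1.25, 0.75, 0.0)    # center f of grid cell g on the (-Z) floor plane
v_offset = (0.0, 0.0, 0.45)  # from the bottom-face center of the target grid to the bed's center
print(target_position(p_exp, v_offset))  # (1.25, 0.75, 0.45)
```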
The offset vector can be calculated in two ways:
1. the pre-calculation mode is as follows: before this step 307, the method for specifically calculating the offset vector includes:
step i: the processing device determines N adsorption planes for each object according to the attributes of each object, where N is a positive integer greater than or equal to 1. For example, in one scenario, the item menu includes a plurality of objects, such as beds, tables, murals, ceiling lamps, etc., and the processing device determines the N adsorption planes of each object according to its attributes: the adsorption plane of a bed is the (-Z) plane, the adsorption plane of a table is the (-Z) plane, the adsorption plane of a mural can be the (+X), (-X), (+Y) or (-Y) plane, and the adsorption plane of a ceiling lamp is the (+Z) plane. It should be noted that the plurality of objects in the present embodiment are examples for convenience of description, and do not limit the present invention.
Step j: the processing device calculates a bounding box for each of a plurality of objects to be selected.
Please be understood in conjunction with fig. 10, fig. 10 is a schematic diagram of a bounding box. The bounding box 1002 is the smallest rectangular parallelepiped spatial region that houses the object 1001.
The processing equipment calculates the bounding box of the bed, the bounding box of the wall painting, the bounding box of the desk and the bounding box of the ceiling lamp.
Step k: the processing device calculates a target grid from the bounding box, the target grid 1101 being the smallest grid area that accommodates the bounding box 1002.
Please refer to fig. 11, wherein fig. 11 is a schematic side view of the target grid. The processing device calculates a target mesh for each of a plurality of objects.
Step l: the processing device determines a side of the target mesh corresponding to each of the N adsorption planes of each object.
As will be understood from fig. 11, the "bed" has one adsorption plane, the adsorption plane of the "bed" is a (-Z) plane, and in the target grid 1101, a side surface 11011 corresponding to the (-Z) plane is a bottom surface.
In this embodiment, a "bed" among the plurality of objects is described as an example; other objects can be understood with reference to the "bed".
Step m: the processing device calculates an offset vector, which is a vector from the center point of the side of the target mesh to a preset point on the object.
Please refer to fig. 12, which is a side view of the offset vector in fig. 12. The offset vector is a vector 1203 from a center point 1201 of a side surface (e.g., bottom surface) of the target grid to a preset point 1202 on the object when the target surface of the bounding box of the object is aligned with the side surface of the target grid.
In this embodiment of the present invention, the preset point may be any preset point on the object, and the preset point may be a central point of the object.
Offset vectors corresponding to a plurality of objects are calculated in advance, and then, an offset vector corresponding to a target object is selected from the offset vectors corresponding to the plurality of objects.
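Steps i through m above can be sketched roughly as follows, assuming axis-aligned bounding boxes, the 0.5 m grid from the initialization step, and a (-Z) floor plane; the helper names and the choice of the bounding-box bottom center as the side center are assumptions for illustration.

```python
# Hypothetical sketch of the offset-vector pre-calculation (steps i to m), for a (-Z) plane.
import math

CELL = 0.5  # grid cell side length of the space editing area

def target_grid_cells(bbox_min, bbox_max):
    """Step k: smallest whole number of grid cells containing the bounding box, per axis."""
    return tuple(math.ceil((hi - lo) / CELL) for lo, hi in zip(bbox_min, bbox_max))

def offset_vector(bbox_min, bbox_max, preset_point):
    """Steps l and m: vector from the center of the target-grid side facing the (-Z)
    adsorption plane to the preset point on the object. The object is assumed to be
    centered in the target grid, so that side center is the bottom-face center of the box."""
    side_center = (
        (bbox_min[0] + bbox_max[0]) / 2.0,
        (bbox_min[1] + bbox_max[1]) / 2.0,
        bbox_min[2],  # bottom face, matching the (-Z) adsorption plane
    )
    return tuple(p - s for p, s in zip(preset_point, side_center))

# A bed with a 2.0 x 1.5 x 0.9 m bounding box and its preset point at the box center:
bb_min, bb_max = (0.0, 0.0, 0.0), (2.0, 1.5, 0.9)
print(target_grid_cells(bb_min, bb_max))                 # (4, 3, 2) cells
print(offset_vector(bb_min, bb_max, (1.0, 0.75, 0.45)))  # (0.0, 0.0, 0.45)
```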
2. The real-time calculation mode is as follows:
in a second possible implementation manner, the offset vectors of all objects do not need to be calculated before step 307, but in this step, the offset vector of the target object is directly calculated, and the specific method is as follows:
step o: the processing device determines a target adsorption plane of the target object based on the properties of the target object, e.g., in one scenario, the target object is a bed, and the processing device determines the adsorption plane of the "bed" to be a (-Z) plane.
Step p: the processing device calculates a bounding box for each of the target objects.
Step q: the processing device calculates a target mesh from the bounding box of the target object.
Step r: the processing device determines a side of the target grid corresponding to the target adsorption plane of the target object.
As will be understood from fig. 11, the "bed" has one adsorption plane, the adsorption plane of the "bed" is a (-Z) plane, and in the target grid 1101, a side surface 11011 corresponding to the (-Z) plane is a bottom surface.
Step m: the processing device calculates an offset vector corresponding to the target object, wherein the offset vector is a vector from a central point of the side surface of the target grid to a preset point on the object.
Step 308: the processing device moves the target object to the target position and displays the target object through the display device.
In a first implementation manner, please understand with reference to fig. 13, fig. 13 is a schematic view of a scene in which a target object moves to a target position.
The processing device calculates a bounding box 1301 of the target object 1306, determines a target side 1303 corresponding to the target adsorption plane 1302, and overlaps the target side 1303 with the target adsorption plane so that the center point 1304 of the target side 1303 coincides with the intersection point 1305 of the ray and the target adsorption plane.
In a second implementation manner, please refer to fig. 14 and 15, in which fig. 14 is a schematic side view of the target object moving to the target position, and fig. 15 is a schematic view of a scene for confirming placement of the target object. The processing device moves a preset point on the target object (e.g. its center point) to the target position 1501.
It should be noted that, in this implementation, the grid in the spatial editing region provides an alignment standard. Calculating the offset vector requires determining the coordinates of two points, a start point and an end point: the end point is the preset point on the object, and the start point is the central point of a side of the target grid. The target grid includes at least one cube grid, and the central point of the side of the target grid is the central point of a side of the cube. Therefore, in the scene corresponding to fig. 9, the center position f of the grid where the intersection point o is located needs to be calculated from the intersection point o.
It should be noted that there is a possibility that the central position calculated according to the above formula 1 is illegal, that is, the target grid for accommodating the object is located outside the editing area, and at this time, the position of the object needs to be moved to the nearest legal position, that is, the target grid for accommodating the target object is completely located inside the editing area, please understand with reference to fig. 16, and fig. 16 is a schematic side view of the target object moving to the target position.
Finally, the intersection of the target object with other placed objects is determined. If there is an intersection (i.e., the object model intersects, and/or the target grid to which the target object belongs intersects with the grid to which the placed object belongs), then the target object is displayed in a red bounding box and the player is not allowed to confirm the placement. If there is no intersection, the object is displayed in a green bounding box, allowing confirmation of placement.
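These last two checks, moving the target grid to the nearest legal position and testing for intersection with already placed objects, can be sketched with simple axis-aligned interval tests; the function names and the "red"/"green" return values are illustrative only.

```python
# Hypothetical sketch: clamp the target grid into the editing area and test for overlap.

def clamp_into_region(grid_min, grid_max, region_min, region_max):
    """Shift the target grid to the nearest position that lies fully inside the editing area."""
    shift = []
    for lo, hi, rlo, rhi in zip(grid_min, grid_max, region_min, region_max):
        if lo < rlo:
            shift.append(rlo - lo)
        elif hi > rhi:
            shift.append(rhi - hi)
        else:
            shift.append(0.0)
    return (tuple(lo + s for lo, s in zip(grid_min, shift)),
            tuple(hi + s for hi, s in zip(grid_max, shift)))

def overlaps(a_min, a_max, b_min, b_max):
    """Axis-aligned overlap test between two grids or bounding boxes."""
    return all(alo < bhi and blo < ahi
               for alo, ahi, blo, bhi in zip(a_min, a_max, b_min, b_max))

# Placement is confirmed (green box) only if the clamped grid overlaps no placed object.
placed = [((0.0, 0.0, 0.0), (1.0, 1.0, 1.0))]
tgt_min, tgt_max = clamp_into_region((-0.5, 2.0, 0.0), (1.5, 3.5, 1.0),
                                     (0.0, 0.0, 0.0), (4.0, 4.0, 2.5))
print("red" if any(overlaps(tgt_min, tgt_max, *p) for p in placed) else "green")
```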
In the embodiment of the invention, the user can edit an object quickly, the amount of calculation is small, the object editing method is simple and easily mastered by an ordinary user, and the operation mode matches the user's intuition.
Further, the above embodiment is a method for moving and placing an object, and on the basis of the above embodiment, another embodiment is provided in the embodiments of the present invention, and in this embodiment, methods for finely editing a target object, such as translation, rotation, and zooming, are provided.
1. Translation:
this embodiment may be a Gizmo-based fine editing mode. In this fine editing mode, the user is free to adjust the position, rotation, and zoom of the object. Please refer to fig. 17 and 18 for understanding, fig. 17 is a schematic view of a panning direction, and fig. 18 is a schematic view of a scene of an editing mode. The object can be translated in both positive and negative directions in each of the three axes X, Y, Z, so there are a total of 6 directions of movement. The 6 moving directions are + X direction, -X direction, + Y direction, -Y direction, + Z direction, -Z direction, respectively, and 6 arrows pointing to different axial directions are provided as the translation Gizmo1801 for translating the target object in the embodiment of the present invention.
The input device detects an operation of selecting a translation object input by the user, generates operation data according to the operation of selecting the translation object, and sends the operation data to the processing device. The processing device then generates a ray according to the operation data; when the ray points to the arrow in the upward (+Z) direction, the processing device determines that the translation direction of the target object is the +Z direction and generates a first position coordinate according to the operation data, where the first position coordinate is the coordinate of the translation Gizmo (the coordinate of the upward arrow, denoted "P_Gizmo"). For example, if the user selects the up arrow, the processing device determines that the target object (e.g., a checkerboard) translates in the positive direction of the Z axis, while the X and Y coordinates do not change.
The processing device calculates a second position coordinate (denoted "P_Center") of a preset point on the target object, which may be the center point of the target object.
When the user moves the input device up and down in the real environment, for example translating the input device upward, the input device detects the operation of translating the object, generates operation data and sends it to the processing device. The processing device generates a ray for translating the target object according to the operation data and calculates a third position coordinate from the operation data. The third position coordinate lies on the coordinate axis (such as the X axis) corresponding to the translation operation; regarding that coordinate axis and the ray each as a spatial straight line, there is a shortest distance between them, and the third position coordinate (denoted "P_Near") is the point on the axis whose distance to the corresponding target point on the ray equals this shortest distance between the two spatial straight lines. The third position coordinate is thereby determined.
The processing device calculates a fourth position coordinate (denoted "P_New") of the preset point after the target object is translated, according to the first position coordinate, the second position coordinate and the third position coordinate. The specific formula 2 for calculating the fourth position coordinate is as follows:
P_New = P_Center + P_Near - P_Gizmo;
and the processing device moves the preset point of the target object to the fourth position coordinate along the translation direction and displays the result through the display device.
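Equation 2 can be exercised with a short sketch. P_Near is assumed to have been computed already as the point on the selected axis closest to the new ray, so only the final position update is shown; names mirror the formula and are not from any actual code.

```python
# Hypothetical sketch of Equation 2: P_New = P_Center + P_Near - P_Gizmo,
# applied only along the selected translation axis (here +Z, axis index 2).

def translate(p_center, p_near, p_gizmo, axis=2):
    """Move the preset point of the object along one axis by the handle's displacement."""
    new = list(p_center)
    new[axis] += p_near[axis] - p_gizmo[axis]  # the other axes stay unchanged
    return tuple(new)

p_center = (1.0, 0.5, 0.5)   # current preset point (center) of the target object
p_gizmo  = (1.0, 0.5, 1.25)  # position of the +Z translation arrow when it was selected
p_near   = (1.0, 0.5, 1.75)  # point on the Z axis line closest to the new ray
print(translate(p_center, p_near, p_gizmo))  # (1.0, 0.5, 1.0)
```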
2. Rotating:
the object can rotate around three axes of X, Y and Z, as can be understood with reference to fig. 19, fig. 19 is a schematic view of the rotation direction of the object. One rotation Gizmo (shown as a corner-shaped Gizmo1802 in fig. 18) for rotating the object is arranged outside the four edges of the bounding box corresponding to each axis, wherein the rotation Gizmo1901 corresponding to the X axis, the rotation Gizmo1902 corresponding to the Y axis, the rotation Gizmo1903 corresponding to the Z axis, and the rotation Gizmo are 12 in total.
This way, the player can edit a certain attribute of the object separately.
The processing equipment receives an operation of selecting a rotating object detected by the input equipment; the player can make targeted edits by ray-selecting any of the 12 spins Gizmo and then turning the handle.
The processing device determines the rotation direction of the target object according to the operation of selecting the rotation object, and records the current first rotation component (denoted "R") of the target object and the current first Euler angle of the input device. The first Euler angle includes a Pitch value and a Yaw value: the Pitch value is the initial angle of rotation of the input device around the X axis, denoted "R_Pitch", and the Yaw value is the initial angle of rotation of the input device around the Y axis, denoted "R_Yaw";
the processing device detects an operation of the rotating object detected by the input device. For example, when the user selects the rotation Gizmo corresponding to the Y-axis and rotates the input device (e.g., the handle) up and down, the object rotates around the Y-axis, and the rotation angle on the other axes remains unchanged.
The processing device calculates a second Euler angle of the input device after rotation according to the operation of rotating the object. The second Euler angle includes the Pitch and Yaw values after the input device is rotated; the rotated Pitch value is denoted "R_NewPitch" and the rotated Yaw value is denoted "R_NewYaw";
the processing device calculates a second rotation component after the target object is rotated according to the first rotation component, the first euler angle and the second euler angle, and a calculation formula 3 for calculating the second rotation component is as follows:
R_New = R + k × (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw), where k is a sensitivity coefficient that the user can customize, and k × (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw) is the rotation value.
The processing device rotates the target object by the second rotation component on the basis of the first rotation component and displays it by the display device.
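A minimal numeric sketch of Equation 3, assuming Euler angles in degrees and a user-defined sensitivity coefficient k; the function name is illustrative only.

```python
# Hypothetical sketch of Equation 3: R_New = R + k * (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw).

def rotate_component(r, r_pitch, r_yaw, r_new_pitch, r_new_yaw, k=1.0):
    """New rotation component (degrees) around the selected axis after turning the handle."""
    return r + k * (r_new_pitch + r_new_yaw - r_pitch - r_yaw)

# Handle turned from (pitch=10, yaw=5) to (pitch=25, yaw=5) with sensitivity k = 2:
print(rotate_component(r=90.0, r_pitch=10.0, r_yaw=5.0,
                       r_new_pitch=25.0, r_new_yaw=5.0, k=2.0))  # 120.0
```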
3. Zoom
Zooming is similar in principle to rotation, and only one zoom Gizmo may be provided (the cube Gizmo 1803 on the left in fig. 18).
The processing device receives an operation of selecting a zoom object detected by the input device, for example, in an application scenario, a user selects zoom Gizmo, which can also be understood as that the user inputs a zoom instruction.
The processing device records the current first space size (denoted "M") of the target object according to the operation of selecting the zoom object, where the first space size is the volume of the target object before it is zoomed, and records the current third Euler angle of the input device. The third Euler angle includes a Pitch value, the initial angle of rotation of the input device around the X axis, denoted "R_Pitch", and a Yaw value, the initial angle of rotation of the input device around the Y axis, denoted "R_Yaw".
The processing device receives an operation of zooming the object detected by the input device, for example, a user rotates the input device, and the input device detects an operation of zooming the target object.
The processing device calculates a fourth Euler angle of the input device after rotation according to the operation of zooming the object. The fourth Euler angle includes the Pitch and Yaw values after the input device is rotated; the rotated Pitch value is denoted "R_NewPitch" and the rotated Yaw value is denoted "R_NewYaw";
the processing device calculates a second space size of the target object after scaling according to the first space size, the third euler angle and the fourth euler angle, and a calculation formula 4 for calculating the second space size is as follows:
M_New = M × k × (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw), where k is a sensitivity coefficient that the user can customize, and k × (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw) is the zoom value calculated from the user turning the input device.
The processing device zooms the target object to a second space size based on the first space size and displays the target object through the display device.
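A corresponding sketch of Equation 4; note that, as written in the description, the scaled size is proportional to the rotation delta of the handle. Symbol names mirror the formula and are not from any actual code.

```python
# Hypothetical sketch of Equation 4: M_New = M * k * (R_NewPitch + R_NewYaw - R_Pitch - R_Yaw).

def zoom_size(m, r_pitch, r_yaw, r_new_pitch, r_new_yaw, k=0.25):
    """New space size of the object after the handle is turned; k is the sensitivity."""
    return m * k * (r_new_pitch + r_new_yaw - r_pitch - r_yaw)

# Handle turned by a combined 8 degrees with k = 0.25 scales the object to twice its size:
print(zoom_size(m=1.0, r_pitch=0.0, r_yaw=0.0, r_new_pitch=8.0, r_new_yaw=0.0))  # 2.0
```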
According to the embodiment of the invention, the convenience of editing objects under VR is greatly improved. The editing method provided by the embodiment of the invention enables an ordinary player to place an object quickly and conveniently, and also to finely adjust the pose and zoom attributes of the object, so that the player can freely edit objects in VR.
Referring to fig. 20, an embodiment of the present invention provides a processing device 2000, which is applied to a virtual reality VR system, where the VR system includes a display device, an input device, and a processing device, and the processing device includes:
a first receiving module 2001 for receiving an operation of selecting an object detected by an input device;
an object determining module 2002 for determining a target object to be edited according to the operation of selecting the object received by the first receiving module 2001;
an absorption plane determining module 2003, configured to determine a target absorption plane of the target object in the space editing region determined by the object determining module 2002;
a second receiving module 2004 for receiving an operation of moving an object detected by the input device;
a first generating module 2005 for generating rays according to the operation of the moving object received by the second receiving module 2004 and displaying through the display device;
a position determination module 2006, configured to determine a target position of the target object according to an intersection point of the ray generated by the first generation module 2005 and the target adsorption plane determined by the adsorption plane determination module 2003;
a moving module 2007, configured to move the target object determined by the object determining module 2002 to the target position determined by the position determining module 2006 and display the target object through the display device.
Optionally, the space editing area includes a plurality of space grids, and the position determining module 2006 is further configured to determine an intersection point of the ray and the target adsorption plane;
calculating the central position of the grid in which the intersection point is positioned;
and calculating the target position of the target object according to the central position of the grid and the offset vector, wherein the offset vector is a vector from a central point on the side surface corresponding to the target adsorption plane on the target grid to a preset point on the target object, and the target grid is a minimum grid area of a bounding box for accommodating the target object.
Referring to fig. 21, on the basis of the embodiment shown in fig. 20, another embodiment of a processing device 2100 according to the present invention includes:
the location determination module 2006 further comprises an intersection determination unit 20061 and a location determination unit 20062; an intersection point determining unit 20061 for determining an intersection point of the ray and the target adsorption plane;
a position determination unit 20062 for determining the intersection point determined by the intersection point determination unit 20061 as the target position.
Optionally, the moving module 2007 is further configured to:
calculating a bounding box of the target object;
determining a target side corresponding to the target adsorption plane;
and overlapping the target side surface and the target adsorption plane, wherein the central point of the target side surface corresponds to the intersection point.
Referring to fig. 22, on the basis of the embodiment corresponding to fig. 20, another embodiment of a processing device 2200 is further provided, where the another embodiment further includes a first computing module 2008; the first calculating module 2008 is configured to calculate a spatial editing region according to the target object, where the spatial editing region is divided into a plurality of grid regions.
Optionally, the number of adsorption planes corresponding to the target object is at least one, and the adsorption plane determination module 2003 is further configured to determine a preset adsorption plane according to the attribute of the target object and the corresponding relationship between the attribute and the adsorption plane;
receiving an operation of rotating an object detected by an input device;
and determining a target adsorption plane corresponding to the target object after the target object rotates on the basis of a preset adsorption plane according to the operation of the rotating object.
Referring to fig. 23, based on the embodiment shown in fig. 22, another embodiment of a processing apparatus 2300 is provided in the embodiment of the present invention:
the processing apparatus further includes: a first determination module 2009, a second calculation module 2010, a third calculation module 2011, a fourth calculation module 2012, and a second determination module 2013;
a first determining module 2009, configured to determine, according to the attribute of each object, N adsorption planes of each object, where N is a positive integer greater than or equal to 1;
a second calculation module 2010 for calculating bounding boxes for each of a plurality of objects to be selected;
a third calculation module 2011, configured to calculate a target grid according to the bounding box calculated by the second calculation module 2010, where the target grid is a minimum grid area for accommodating the bounding box;
a second determining module 2013, configured to determine a side of the target grid calculated by the third calculating module 2011 corresponding to each adsorption plane in the N adsorption planes of each object determined by the first determining module 2009;
a fourth calculating module 2012, configured to calculate an offset vector, where the offset vector is a vector from the central point of the side surface of the target grid determined by the second determining module 2013 to a preset point on the target object;
the position determining module 2006 is further configured to determine a target position of the target object according to an intersection point of the ray generated by the first generating module 2005 and the target adsorption plane determined by the adsorption plane determining module 2003 and the offset vector calculated by the fourth calculating module 2012.
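The offset-vector pre-computation could look like the following sketch, assuming axis-aligned geometry and a uniform cell size (the function and parameter names are illustrative):

```python
import numpy as np

def offset_vector_for_plane(box_min, box_max, cell_size, plane_normal, preset_point):
    """Offset vector from the center of the target-grid side facing the adsorption
    plane to a preset point (e.g., the pivot) on the target object. The target grid
    is the smallest whole number of cells that encloses the bounding box."""
    extents = box_max - box_min
    grid_extents = np.ceil(extents / cell_size) * cell_size   # snap size up to whole cells
    grid_min = box_min - (grid_extents - extents) / 2.0       # center the grid on the box
    grid_center = grid_min + grid_extents / 2.0
    # Center of the grid side whose outward direction matches the adsorption plane normal.
    side_center = grid_center - plane_normal * np.dot(np.abs(plane_normal), grid_extents) / 2.0
    return preset_point - side_center
```

One offset vector can be cached per adsorption plane of each candidate object, so that placement during dragging only needs the grid-cell center plus a vector addition.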
Referring to fig. 24, on the basis of the embodiment shown in fig. 20, another embodiment of a processing device 2400 is provided in the embodiment of the present invention: the device further comprises a third receiving module 2014, a translation direction determining module 2015, a second generating module 2016, a fifth calculating module 2017, a fourth receiving module 2018, a sixth calculating module 2019, a seventh calculating module 2020 and a translation module 2021;
a third receiving module 2014, configured to receive an operation of selecting a translation object detected by the input device;
a translation direction determining module 2015, configured to determine a translation direction for the target object according to the operation of selecting a translation object received by the third receiving module 2014;
a second generating module 2016, configured to generate a first position coordinate according to the operation of selecting the translation object received by the third receiving module 2014;
a fifth calculating module 2017, configured to calculate a second position coordinate of the preset point on the target object;
a fourth receiving module 2018, configured to receive an operation of translating an object detected by an input device;
a sixth calculating module 2019, configured to calculate a third position coordinate according to the operation of the translation object received by the fourth receiving module 2018;
a seventh calculating module 2020, configured to calculate a fourth position coordinate of the preset point after the translation of the target object according to the first position coordinate generated by the second generating module 2016, the second position coordinate calculated by the fifth calculating module 2017, and the third position coordinate calculated by the sixth calculating module 2019;
a translation module 2021, configured to move the preset point of the target object in the translation direction determined by the translation direction determining module 2015 to the fourth position coordinate calculated by the seventh calculating module 2020 and display the preset point on the display device.
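One plausible reading of this computation, sketched below, is that the controller displacement (third position coordinate minus first) is projected onto the selected translation direction and applied to the object's preset point (second position coordinate); this is an assumption for illustration, not the claimed formula:

```python
import numpy as np

def translated_preset_point(first_pos, second_pos, third_pos, direction):
    """Fourth position coordinate of the preset point after translation: project the
    controller displacement onto the chosen translation direction and add it to the
    object's preset point, so the object only moves along that direction."""
    d = direction / np.linalg.norm(direction)
    displacement = np.dot(third_pos - first_pos, d)
    return second_pos + displacement * d
```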
Referring to fig. 25, on the basis of the embodiment corresponding to fig. 20, another embodiment of a processing apparatus 2500 is further provided in the embodiment of the present invention: also included are a fifth receiving module 2022, a rotation direction determining module 2023, a first recording module 2024, a sixth receiving module 2025, an eighth calculating module 2026, a rotation component calculating module 2027, and a rotation module 2028;
a fifth receiving module 2022, configured to receive an operation of selecting a rotating object detected by the input device;
a rotation direction determining module 2023, configured to determine a rotation direction of the target object according to the operation of selecting the rotation object received by the fifth receiving module 2022;
a first recording module 2024, configured to record a current first rotation component of the target object and a current first Euler angle of the input device according to the operation of selecting the rotation object received by the fifth receiving module 2022;
a sixth receiving module 2025, configured to receive an operation of rotating an object detected by the input device;
an eighth calculating module 2026, configured to calculate a second Euler angle after the input device is rotated, according to the operation of the rotating object received by the sixth receiving module 2025;
a rotation component calculation module 2027, configured to calculate a second rotation component after the target object rotates, based on the first rotation component and the first Euler angle recorded by the first recording module 2024 and the second Euler angle calculated by the eighth calculating module 2026;
a rotation module 2028, configured to rotate the target object by the second rotation component based on the first rotation component and display the target object by the display device.
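A minimal sketch of this update, assuming for illustration that the rotation component is represented as an Euler-angle triple and that the change in the controller's Euler angles is applied axis by axis (the sensitivity parameter is an added assumption):

```python
def rotated_component(first_rotation, first_euler, second_euler, sensitivity=1.0):
    """Second rotation component after the object rotates: add the change in the
    controller's Euler angles, scaled by a sensitivity factor, to the recorded
    first rotation component of the target object, axis by axis."""
    return tuple(r + sensitivity * (e2 - e1)
                 for r, e1, e2 in zip(first_rotation, first_euler, second_euler))
```

For example, if the controller yaw changes from 10 to 35 degrees between the select and rotate operations, the object's yaw component increases by 25 degrees on top of its recorded value.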
Referring to fig. 26, on the basis of the embodiment corresponding to fig. 20, another embodiment of a processing device 2600 according to the present invention is further provided: further comprising:
a seventh receiving module 2029, configured to receive an operation of selecting a zoom object detected by the input device;
a second recording module 2030, configured to record the current first space size of the target object and record the current third euler angle of the input device according to the operation of selecting the zoom object received by the seventh receiving module 2029;
an eighth receiving module 2031, configured to receive an operation of zooming an object detected by the input device;
a ninth calculating module 2033 for calculating a fourth euler angle after the input device is rotated, according to the operation of the zoom object;
a tenth calculating module 2034, configured to calculate a second space size after the target object is scaled according to the first space size, the third euler angle recorded by the second recording module 2030 and the fourth euler angle calculated by the ninth calculating module 2033;
a zooming module 2035, configured to zoom the target object to the second space size calculated by the tenth calculating module 2034 based on the first space size recorded by the second recording module 2030 and display the target object through the display device.
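A sketch of this scaling rule under the assumption that the change in one Euler angle of the input device (for instance a wrist twist) is mapped linearly to a uniform scale factor; the axis choice, sensitivity, and clamping value are illustrative:

```python
def scaled_size(first_size, third_euler, fourth_euler, axis=1, sensitivity=0.01):
    """Second spatial size after scaling: map the change in one Euler angle of the
    input device to a uniform scale factor applied to the recorded first size."""
    angle_delta = fourth_euler[axis] - third_euler[axis]
    factor = max(0.1, 1.0 + sensitivity * angle_delta)   # clamp to avoid collapsing the object
    return tuple(s * factor for s in first_size)
```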
Further, the processing devices in fig. 20 to 26 are presented in the form of functional modules. A "module" as used herein may refer to an application-specific integrated circuit (ASIC), an electronic circuit, a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other devices that provide the described functionality. In a simple embodiment, the processing apparatus of fig. 20 to 26 may take the form shown in fig. 27, and the modules may be implemented by the processor, transceiver, and memory of fig. 27.
As shown in fig. 27, for convenience of description, only the parts related to the embodiment of the present invention are shown; for details of the specific technology that are not disclosed here, please refer to the method part of the embodiment of the present invention. The processing device may be a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a computer, or the like.
Fig. 27 is a block diagram showing a partial structure of a processing device related to a terminal provided in an embodiment of the present invention. Referring to fig. 27, the processing apparatus includes: a transceiver 2710, a memory 2720, an input unit 2730, a display unit 2740, an audio circuit 2760, a wireless fidelity (WiFi) module 2770, a processor 2780, and a power supply 2790. Those skilled in the art will appreciate that the processing device configuration shown in fig. 27 does not constitute a limitation of the processing device and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The respective constituent components of the processing apparatus will be specifically described below with reference to fig. 27:
the transceiver 2710 may be configured to receive and transmit signals during information transmission and reception or during a call; in particular, downlink information from a base station is received and then delivered to the processor 2780 for processing, and data designed for the uplink is transmitted to the base station. Generally, the transceiver 2710 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the transceiver 2710 can also communicate with a network and other devices through wireless communication.
The memory 2720 may be used to store software programs and modules, and the processor 2780 executes various functional applications and data processing of the processing device by operating the software programs and modules stored in the memory 2720. The memory 2720 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, and the like), and the like; the data storage area may store data (such as audio data, a phonebook, etc.) created according to the use of the processing device, and the like. Further, the memory 2720 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The input unit 2730 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the processing apparatus. Specifically, the input unit 2730 may include a touch panel 2731 and other input devices 2732. The touch panel 2731, also referred to as a touch screen, may collect touch operations of a user on or near the touch panel 2731 (e.g., operations of a user on or near the touch panel 2731 using any suitable object or accessory such as a finger, a stylus, etc.) and drive the corresponding connection device according to a preset program. Alternatively, the touch panel 2731 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the touch point coordinates to the processor 2780, and can receive and execute commands sent by the processor 2780. In addition, the touch panel 2731 may be implemented by using various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 2731, the input unit 2730 may include other input devices 2732. In particular, other input devices 2732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 2740 may be used to display information input by a user or information provided to the user and various menus of the processing device. The Display unit 2740 may include a Display panel 2741, and optionally, the Display panel 2741 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 2731 can overlay the display panel 2741, and when the touch panel 2731 detects a touch operation thereon or nearby, the touch panel 2731 can transmit the touch operation to the processor 2780 to determine the type of the touch event, and then the processor 2780 can provide a corresponding visual output on the display panel 2741 according to the type of the touch event. Although in fig. 27, the touch panel 2731 and the display panel 2741 are implemented as two separate components to implement the input and output functions of the processing device, in some embodiments, the touch panel 2731 and the display panel 2741 may be integrated to implement the input and output functions of the processing device.
The processor 2780 is the control center of the processing device; it connects various parts of the entire processing device using various interfaces and lines, and performs various functions of the processing device and processes data by running or executing software programs and/or modules stored in the memory 2720 and calling data stored in the memory 2720, thereby monitoring the processing device as a whole. Optionally, processor 2780 may include one or more processing units; preferably, the processor 2780 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor described above may not be integrated into processor 2780.
WiFi is a short-range wireless transmission technology; the processing device may receive data sent by an input device or send data to a display device through the WiFi module 2770.
The Bluetooth module 2750 is likewise based on a short-range wireless transmission technology; the processing device may also receive data sent by the input device or send data to the display device through the Bluetooth module 2750.
The processing device also includes a power supply 2790 (e.g., a battery) that provides power to the various components; preferably, the power supply is logically coupled to the processor 2780 via a power management system, so that charging, discharging, and power consumption are managed through the power management system.
In an embodiment of the invention, the processor 2780 included in the processing device is further configured to cause the processing device to perform the method steps in the method embodiment.
Embodiments of the present invention further provide a computer-readable storage medium, in which instructions are stored, and when the instructions are executed on a computer, the computer is enabled to execute the method described in the above method embodiments.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (10)
1. A method of object editing in virtual reality, comprising:
receiving an operation of selecting an object;
determining a target object to be edited according to the operation of selecting the object;
calculating a spatial editing region according to the target object, wherein the spatial editing region is divided into a plurality of grid regions;
determining a target adsorption plane of the target object in the space editing area, wherein the target adsorption plane is determined according to the corresponding relation between the attribute of the target object and the adsorption plane;
receiving an operation of a moving object detected by an input device;
determining a target position of the target object according to the operation of the moving object;
moving the target object to the target position and displaying the target object through a display device;
wherein the determining the target position of the target object according to the operation of the moving object comprises:
generating rays according to the operation of the moving object;
determining the target position of the target object according to the intersection point of the ray and the target adsorption plane;
wherein the determining the target position of the target object according to the intersection point of the ray and the target adsorption plane comprises: determining the intersection point of the ray and the target adsorption plane;
calculating the central position of the grid in which the intersection point is positioned;
and calculating the target position of the target object according to the central position of the grid and an offset vector, wherein the offset vector is a vector from a central point of the side surface, corresponding to the target adsorption plane, of a target grid to a preset point on the target object, and the target grid is the minimum grid area for accommodating the bounding box of the target object.
2. The method of claim 1, wherein moving the target object to the target location comprises:
calculating a bounding box of the target object;
determining a target side corresponding to the target adsorption plane;
and overlapping the target side surface and the target adsorption plane, wherein the central point of the target side surface corresponds to the intersection point.
3. The method according to claim 1, wherein the number of the adsorption planes corresponding to the target object is at least one, and the determining the target adsorption plane of the target object according to the attribute of the target object and the corresponding relationship between the attribute and the adsorption plane comprises: determining a preset adsorption plane according to the attribute of the target object and the corresponding relation between the attribute and the adsorption plane;
receiving an operation of rotating an object detected by the input device;
and determining a target adsorption plane corresponding to the target object after the target object rotates on the basis of the preset adsorption plane according to the operation of the rotating object.
4. The method of claim 3, wherein prior to the operation of receiving the selection object, the method further comprises: determining N adsorption planes of each object according to the attribute of each object, wherein N is a positive integer greater than or equal to 1;
calculating a bounding box for each of a plurality of objects to be selected;
calculating a target grid according to the bounding box, wherein the target grid is a minimum grid area for accommodating the bounding box;
determining the side surface of the target grid corresponding to each adsorption plane in the N adsorption planes of each object;
calculating an offset vector from a central point of the side surface of the target grid to a preset point on the target object;
calculating the target position of the target object according to the intersection point of the ray and the target adsorption plane comprises:
and calculating the target position of the target object according to the intersection point of the ray and the target adsorption plane and the offset vector.
5. The method of any one of claims 1 to 4, wherein after moving the target object to the target location and displaying it via a display device, the method further comprises:
receiving an operation of selecting a translation object detected by the input device;
determining the translation direction of the target object according to the operation of selecting the translation object and generating a first position coordinate according to the operation of selecting the translation object;
calculating a second position coordinate of a preset point on the target object;
receiving an operation of translating an object detected by the input device;
calculating a third position coordinate according to the operation of the translation object; calculating a fourth position coordinate of the preset point after the target object is translated according to the first position coordinate, the second position coordinate and the third position coordinate;
and moving the preset point of the target object to the fourth position coordinate in the translation direction and displaying the preset point through the display equipment.
6. The method of any one of claims 1 to 4, wherein after moving the target object to the target location and displaying it via the display device, the method further comprises:
receiving an operation of selecting a rotating object detected by the input device;
determining the rotating direction of the target object according to the operation of selecting the rotating object, and recording a current first rotation component of the target object and a current first Euler angle of the input device according to the operation of selecting the rotating object;
receiving an operation of rotating an object detected by the input device;
calculating a second Euler angle after the input device is rotated according to the operation of the rotating object;
calculating a second rotation component after the target object rotates according to the first rotation component, the first Euler angle and the second Euler angle;
rotating the target object by a second rotational component based on the first rotational component and displaying through the display device.
7. The method of any one of claims 1 to 4, wherein after moving the target object to the target location and displaying it via the display device, the method further comprises:
receiving an operation of selecting a zoom object detected by the input device;
recording the current first space size of the target object and recording the current third Euler angle of the input device according to the operation of selecting the zooming object;
receiving an operation of zooming an object detected by the input device;
calculating a fourth Euler angle after the input device is rotated according to the operation of the zoom object;
calculating a second spatial size of the target object after scaling according to the first spatial size, the third Euler angle and the fourth Euler angle;
and zooming the target object to a second space size on the basis of the first space size and displaying the target object through the display device.
8. An apparatus for editing an object in virtual reality, comprising:
the first receiving module is used for receiving the operation of selecting the object;
the object determining module is used for determining a target object to be edited according to the operation of the selected object received by the first receiving module;
the adsorption plane determining module is used for calculating a space editing area according to the target object determined by the object determining module, the space editing area is divided into a plurality of grid areas, and is also used for determining a target adsorption plane of the target object in the space editing area, and the target adsorption plane is determined according to the corresponding relation between the attribute of the target object and the adsorption plane;
the second receiving module is used for receiving the operation of the moving object detected by the input equipment;
the first generation module is used for generating rays according to the operation of the moving object and displaying the rays through display equipment;
a position determining module, configured to determine a target position of the target object according to an intersection point of the ray generated by the first generation module and the target adsorption plane determined by the adsorption plane determining module;
the position determining module is further used for determining an intersection point of the ray and the target adsorption plane;
calculating the central position of the grid in which the intersection point is positioned; and calculating the target position of the target object according to the central position of the grid and an offset vector, wherein the offset vector is a vector from a central point of the side surface, corresponding to the target adsorption plane, of a target grid to a preset point on the target object, and the target grid is the minimum grid area for accommodating the bounding box of the target object;
and the moving module is used for moving the target object to the target position determined by the position determining module and displaying the target object through display equipment.
9. A processing device, comprising:
a memory for storing computer executable program code;
a transceiver, and
a processor coupled with the memory and the transceiver;
wherein the program code comprises instructions which, when executed by the processor, cause the processing device to carry out the method of any one of claims 1 to 7.
10. A virtual reality system, comprising: a processing device, a display device, and an input device, wherein the display device and the input device are connected with the processing device;
the input device detects an operation of selecting an object;
the processing device receives an operation of selecting an object detected by the input device;
the processing equipment determines a target object to be edited according to the operation of the selected object;
the processing device calculates a spatial editing region according to the target object, wherein the spatial editing region is divided into a plurality of grid regions;
the processing equipment determines a target adsorption plane of the target object in the space editing area, wherein the target adsorption plane is determined according to the corresponding relation between the attribute of the target object and the adsorption plane;
the processing device receives operation of a moving object detected by the input device;
the processing equipment generates rays according to the operation of the moving object and sends the data of the rays to the display equipment;
the display device displays the ray;
the processing equipment determines the target position of the target object according to the intersection point of the ray and the target adsorption plane;
the processing device moving the target object to the target location;
the display device displays that the target object moves to the target position;
wherein the determining the target position of the target object according to the intersection point of the ray and the target adsorption plane comprises: determining the intersection point of the ray and the target adsorption plane;
calculating the central position of the grid in which the intersection point is positioned;
and calculating the target position of the target object according to the central position of the grid and an offset vector, wherein the offset vector is a vector from a central point of the side surface, corresponding to the target adsorption plane, of a target grid to a preset point on the target object, and the target grid is the minimum grid area for accommodating the bounding box of the target object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711005203.XA CN109697002B (en) | 2017-10-23 | 2017-10-23 | Method, related equipment and system for editing object in virtual reality |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109697002A CN109697002A (en) | 2019-04-30 |
CN109697002B true CN109697002B (en) | 2021-07-16 |
Family
ID=66229367
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711005203.XA Active CN109697002B (en) | 2017-10-23 | 2017-10-23 | Method, related equipment and system for editing object in virtual reality |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109697002B (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110322571B (en) * | 2019-05-30 | 2023-08-11 | 腾讯科技(上海)有限公司 | Page processing method, device and medium |
CN110381111A (en) * | 2019-06-03 | 2019-10-25 | 华为技术有限公司 | A kind of display methods, location determining method and device |
CN111429580A (en) * | 2020-02-17 | 2020-07-17 | 浙江工业大学 | Space omnidirectional simulation system and method based on virtual reality technology |
CN113554724A (en) * | 2020-04-24 | 2021-10-26 | 西安诺瓦星云科技股份有限公司 | Method and device for zooming and adsorbing graph |
CN111757081B (en) * | 2020-05-27 | 2022-07-08 | 海南车智易通信息技术有限公司 | Movement limiting method for virtual scene, client, server and computing equipment |
CN111782053B (en) * | 2020-08-10 | 2023-04-28 | Oppo广东移动通信有限公司 | Model editing method, device, equipment and storage medium |
CN112203076B (en) * | 2020-09-16 | 2022-07-29 | 青岛小鸟看看科技有限公司 | Alignment method and system for exposure center points of multiple cameras in VR system |
CN114404989B (en) * | 2021-12-13 | 2024-12-24 | 杭州闪电玩网络科技有限公司 | Game design interactive interface display method, system, device and medium |
CN114332294A (en) * | 2021-12-28 | 2022-04-12 | 杭州安恒信息技术股份有限公司 | Method and system for generating graph adsorption line, electronic equipment and storage medium |
CN114327063B (en) * | 2021-12-28 | 2024-08-16 | 亮风台(上海)信息科技有限公司 | Interaction method and device of target virtual object, electronic equipment and storage medium |
US20240193884A1 (en) * | 2022-12-12 | 2024-06-13 | Microsoft Technology Licensing, Llc | Empty Space Matrix Condensation |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105264461A (en) * | 2013-05-13 | 2016-01-20 | 微软技术许可有限责任公司 | Interactions of virtual objects with surfaces |
CN105912110A (en) * | 2016-04-06 | 2016-08-31 | 北京锤子数码科技有限公司 | Method, device and system for performing target selection in virtual reality space |
CN106055090A (en) * | 2015-02-10 | 2016-10-26 | 李方炜 | Virtual reality and augmented reality control with mobile devices |
CN106575153A (en) * | 2014-07-25 | 2017-04-19 | 微软技术许可有限责任公司 | Gaze-based object placement within a virtual reality environment |
CN107111979A (en) * | 2014-12-19 | 2017-08-29 | 微软技术许可有限责任公司 | The object of band auxiliary in three-dimension visible sysem is placed |
CN107229393A (en) * | 2017-06-02 | 2017-10-03 | 三星电子(中国)研发中心 | Real-time edition method, device, system and the client of virtual reality scenario |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10338688B2 (en) * | 2015-12-24 | 2019-07-02 | Samsung Electronics Co., Ltd. | Electronic device and method of controlling the same |
WO2017139509A1 (en) * | 2016-02-12 | 2017-08-17 | Purdue Research Foundation | Manipulating 3d virtual objects using hand-held controllers |
2017-10-23: CN201711005203.XA filed (granted as CN109697002B, en; legal status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN109697002A (en) | 2019-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109697002B (en) | Method, related equipment and system for editing object in virtual reality | |
CN102830795B (en) | Utilize the long-range control of motion sensor means | |
KR102249577B1 (en) | Hud object design and method | |
WO2020114271A1 (en) | Image rendering method and apparatus, and storage medium | |
JP5752715B2 (en) | Projector and depth camera for deviceless augmented reality and interaction | |
US20150193979A1 (en) | Multi-user virtual reality interaction environment | |
CN112313605A (en) | Object placement and manipulation in augmented reality environments | |
US20120208639A1 (en) | Remote control with motion sensitive devices | |
US20190050132A1 (en) | Visual cue system | |
CN114138106B (en) | Transition between states in a hybrid virtual reality desktop computing environment | |
CN103324400A (en) | Method and device for displaying menus in 3D model | |
CN108920053A (en) | A kind of alignment schemes and mobile terminal | |
JP2014521174A (en) | Method and apparatus for generating dynamic wallpaper | |
CN113826060A (en) | Build and use virtual assets on tangible objects in Augmented Reality (AR) and Virtual Reality (VR) | |
CN111445563A (en) | Image generation method and related device | |
CN108089713A (en) | A kind of interior decoration method based on virtual reality technology | |
US20180165877A1 (en) | Method and apparatus for virtual reality animation | |
Sun et al. | Enabling participatory design of 3D virtual scenes on mobile devices | |
CN110717993A (en) | Interaction method, system and medium of split type AR glasses system | |
Muhammad Nizam et al. | A Scoping Review on Tangible and Spatial Awareness Interaction Technique in Mobile Augmented Reality‐Authoring Tool in Kitchen | |
JP5767371B1 (en) | Game program for controlling display of objects placed on a virtual space plane | |
GB2533777A (en) | Coherent touchless interaction with steroscopic 3D images | |
CN116301494A (en) | Toolbar interaction method and device, computer readable storage medium and electronic equipment | |
Caruso et al. | Interactive augmented reality system for product design review | |
CN114327174A (en) | Virtual reality scene display method and cursor three-dimensional display method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |