CN105787971B - Information processing method and electronic equipment - Google Patents
- Publication number
- CN105787971B (application CN201610170355.4A)
- Authority
- CN
- China
- Prior art keywords
- display object
- target display
- target
- dimensional
- dimensional display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/001—Model-based coding, e.g. wire frame
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The embodiments of the present application provide an information processing method and an electronic device for converting the dimensionality of a display object according to user requirements. The method includes the following steps: detecting a first operation of an operation body, the first operation being used to control at least one display object; determining, from the at least one display object, a first target display object corresponding to the first operation, the first target display object being an M-dimensional display object; and converting the first target display object from an M-dimensional display object to an N-dimensional display object based on the first operation, where M and N are both positive integers and M ≠ N.
Description
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to an information processing method and an electronic device.
Background
VR (Virtual Reality) and AR (Augmented Reality) are visual technologies that have emerged in recent years. VR virtualizes an entire scene for the user, including both the display objects and the space; AR is based on real space, in which only the display objects are virtualized.
However, regardless of whether a display object is virtualized by VR or AR, its number of dimensions is usually fixed by default, for example one-dimensional, two-dimensional, or three-dimensional, and the dimensionality of the display object cannot be converted based on a user's operation.
Disclosure of Invention
The embodiments of the present application provide an information processing method and an electronic device for converting the dimensionality of a display object according to user requirements.
In a first aspect, the present application provides an information processing method, including:
detecting a first operation of an operation body, the first operation being used to control at least one display object;
determining, from the at least one display object, a first target display object corresponding to the first operation, the first target display object being an M-dimensional display object; and
converting the first target display object from an M-dimensional display object to an N-dimensional display object based on the first operation, where M and N are both positive integers and M ≠ N.
Optionally, detecting the first operation of the operation body includes:
detecting motion trajectories of at least two sub-operation bodies included in the operation body; and
when the motion trajectories of the at least two sub-operation bodies satisfy a first preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where N > M.
Optionally, detecting the first operation of the operation body includes:
detecting motion trajectories of at least two sub-operation bodies included in the operation body; and
when the motion trajectories satisfy a second preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where M > N.
Optionally, the method further includes:
controlling the dimensionality of the display objects other than the first target display object in the at least one display object to remain unchanged.
Optionally, the method further includes:
obtaining a position parameter of a viewer;
judging whether a third preset condition is satisfied between the position parameter and a second target display object in the at least one display object, the second target display object being a K-dimensional display object, where K is a positive integer and K ≠ N; and
when the third preset condition is satisfied between the position parameter and the second target display object, converting the second target display object from a K-dimensional display object to an N-dimensional display object based on the position parameter.
In a second aspect, the present application provides an electronic device comprising:
a detection unit configured to detect a first operation of an operation body, the first operation being used to control at least one display object;
a determining unit configured to determine, from the at least one display object, a first target display object corresponding to the first operation, the first target display object being an M-dimensional display object; and
a conversion unit configured to convert the first target display object from an M-dimensional display object to an N-dimensional display object based on the first operation, where M and N are both positive integers and M ≠ N.
Optionally, the detection unit is configured to detect motion trajectories of at least two sub-operation bodies included in the operation body, and, when the motion trajectories of the at least two sub-operation bodies satisfy a first preset condition, to determine that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where N > M.
Optionally, the detection unit is configured to detect motion trajectories of at least two sub-operation bodies included in the operation body, and, when the motion trajectories satisfy a second preset condition, to determine that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where M > N.
Optionally, the electronic device further includes:
a control unit configured to control the dimensionality of the display objects other than the first target display object in the at least one display object to remain unchanged.
Optionally, the electronic device further includes:
an obtaining unit configured to obtain a position parameter of a viewer; and
a judging unit configured to judge whether a third preset condition is satisfied between the position parameter and a second target display object in the at least one display object, the second target display object being a K-dimensional display object, where K is a positive integer and K ≠ N;
the conversion unit being further configured to convert the second target display object from a K-dimensional display object to an N-dimensional display object based on the position parameter when the third preset condition is satisfied between the position parameter and the second target display object.
In a third aspect, the present application provides an electronic device, comprising:
a display device configured to display at least one display object;
a detection device configured to detect a first operation of an operation body, the first operation being used to control the at least one display object; and
a processor configured to determine, from the at least one display object, a first target display object corresponding to the first operation, the first target display object being an M-dimensional display object, and to convert the first target display object from an M-dimensional display object to an N-dimensional display object based on the first operation, where M and N are both positive integers and M ≠ N.
Optionally, the detection device is configured to detect motion trajectories of at least two sub-operation bodies included in the operation body;
and the processor is configured to determine, when the motion trajectories of the at least two sub-operation bodies satisfy a first preset condition, that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where N > M.
Optionally, the detection device is configured to detect motion trajectories of at least two sub-operation bodies included in the operation body;
and the processor is configured to determine, when the motion trajectories satisfy a second preset condition, that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where M > N.
Optionally, the processor is further configured to control the dimensionality of the display objects other than the first target display object in the at least one display object to remain unchanged.
Optionally, the processor is further configured to obtain a position parameter of the viewer; to judge whether a third preset condition is satisfied between the position parameter and a second target display object in the at least one display object, the second target display object being a K-dimensional display object, where K is a positive integer and K ≠ N; and, when the third preset condition is satisfied between the position parameter and the second target display object, to convert the second target display object from a K-dimensional display object to an N-dimensional display object based on the position parameter.
One or more of the technical solutions in the embodiments of the present application achieve at least the following technical effects:
In the technical solutions of the embodiments of the present application, a first operation of an operation body is detected, the first target display object corresponding to the first operation is determined from the at least one display object, and the dimensionality of the first target display object is then converted based on the first operation. The dimensionality of a display object can thus be converted according to user requirements.
Drawings
FIG. 1 is a flow chart of an information processing method in an embodiment of the present application;
FIG. 2 is a first operational schematic diagram of an embodiment of the present application;
FIG. 3 is another first operational schematic diagram of an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating positions of a user and a second target display object in a display coordinate system according to an embodiment of the present application;
FIG. 5 is a diagram of an electronic device according to an embodiment of the present application;
FIG. 6 is a schematic external view of another electronic device in an embodiment of the present application;
FIG. 7 is a schematic structural diagram of the electronic device shown in FIG. 6.
Detailed Description
The embodiments of the present application provide an information processing method and an electronic device for converting the dimensionality of a display object according to user requirements.
In order to solve the above technical problem, the general idea of the technical solutions provided by the present application is as follows:
In the technical solutions of the embodiments of the present application, a first operation of an operation body is detected, the first target display object corresponding to the first operation is determined from the at least one display object, and the dimensionality of the first target display object is then converted based on the first operation. The dimensionality of a display object can thus be converted according to user requirements.
The technical solutions of the present invention are described in detail below with reference to the drawings and specific embodiments. It should be understood that the specific features described in the embodiments and examples illustrate, rather than limit, the technical solutions of the present application, and that the technical features in the embodiments and examples may be combined with one another provided there is no conflict.
The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
A first aspect of the present application provides an information processing method applied to an electronic device. In the embodiments of the present application, the electronic device is specifically an AR device, a VR device, or another electronic device capable of displaying objects; this is not limited by the application. The electronic device may display at least one display object, for example on a display unit of the electronic device, or by projecting the at least one display object onto a projection surface through a projection unit.
Referring to fig. 1, an information processing method in an embodiment of the present application includes:
S101: detecting a first operation of an operation body;
S102: determining, from the at least one display object, a first target display object corresponding to the first operation;
S103: converting the first target display object from an M-dimensional display object to an N-dimensional display object based on the first operation.
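The three steps above can be illustrated in code. This is a hypothetical sketch only: the class, function names, and event format below are illustrative assumptions, not part of the patent.

```python
# Hypothetical sketch of S101-S103. All names and structures here are
# illustrative assumptions, not taken from the patent.
from dataclasses import dataclass


@dataclass
class DisplayObject:
    name: str
    dims: int  # current dimensionality M of this display object


def detect_first_operation(raw_event):
    """S101: interpret a raw input event as a dimension-conversion operation."""
    # raw_event is assumed to already carry a target name and the requested
    # dimensionality; a real system would classify the gesture here.
    return raw_event


def determine_target(objects, operation):
    """S102: pick the display object the operation refers to."""
    return next(o for o in objects if o.name == operation["target"])


def convert_dimension(obj, operation):
    """S103: convert the target from M dimensions to N dimensions (M != N)."""
    n = operation["new_dims"]
    assert n > 0 and n != obj.dims, "N must be a positive integer with N != M"
    obj.dims = n
    return obj


objects = [DisplayObject("photo", 2), DisplayObject("model", 3)]
op = detect_first_operation({"target": "photo", "new_dims": 3})
target = determine_target(objects, op)
convert_dimension(target, op)
print(target.dims)  # 3
```

Note that only the targeted object changes: `objects[1]` keeps its original dimensionality, matching the optional step of holding the other display objects unchanged.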
Specifically, M and N are both positive integers and M ≠ N. In S101, the electronic device detects a first operation of the operation body. In the embodiments of the present application, the operation body may be a finger or an arm of the user, or may be a stylus, a motion-sensing device, or the like; this is not limited by the application. The first operation is used to control the at least one display object, for example by moving it, enlarging it, or adjusting its dimensionality. The first operation is, for example, double-clicking in the spatial or planar region corresponding to one of the display objects, or, as shown in fig. 2, moving both hands apart from contact, or, as shown in fig. 3, moving both hands together from separation.
Next, after the first operation is detected, in S102, a first target display object corresponding to the first operation needs to be determined from at least one display object.
Specifically, in the embodiments of the present application, the first target display object is an M-dimensional display object, where M is a positive integer. For example, if M is 2, the first target display object is a two-dimensional display object; if M is 3, it is a three-dimensional display object. Because the electronic device displays at least one display object, the first target display object corresponding to the first operation must be determined so that exactly the display object the user intends to adjust is adjusted.
Of course, in a specific implementation process, the first target display object may be one display object or multiple display objects. When the first target display object comprises multiple display objects, each is processed in a similar manner, so the embodiments of the present application explain one of them.
In the embodiments of the present application, the first target display object may be determined in various ways.
For example, the first operation shown in fig. 2 or fig. 3 is detected with an image acquisition unit. When the left and right hands are in contact, the left thumb and index finger together with the right thumb and index finger enclose an approximately rectangular region. Therefore, in the display coordinate system, the display object located within the approximately rectangular region enclosed by the four straight lines on which the left thumb and index finger and the right thumb and index finger lie is determined to be the first target display object.
Alternatively, assume that the first operation is an action in which the user pushes the right hand out from the body and then clenches the fist and retracts it, similar to grabbing something in front of the body. The image acquisition unit detects this first operation of the user, and the display object corresponding to the position of the clenched right fist in the display coordinate system is determined to be the first target display object.
In a specific implementation, the manner of determining the first target display object includes, but is not limited to, the above two examples. A person of ordinary skill in the art to which the present application pertains may choose an implementation according to practice, and the present application is not particularly limited.
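The first example above, selecting the display object framed between both hands, can be sketched as follows. This is only an illustration under stated assumptions: the patent does not specify how the rectangular region is computed, and here it is approximated by the axis-aligned bounding box of the four fingertip positions in the display plane.

```python
# Illustrative sketch: approximate the roughly rectangular region framed by
# the two hands as the axis-aligned bounding box of the four fingertip
# positions (left thumb/index, right thumb/index) in display coordinates.
# This approximation and the data layout are assumptions, not patent text.
def bounding_box(points):
    """Return (min_x, min_y, max_x, max_y) of a set of 2-D points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return min(xs), min(ys), max(xs), max(ys)


def select_target(display_objects, fingertip_points):
    """Return the display objects whose centers fall inside the framed region."""
    x0, y0, x1, y1 = bounding_box(fingertip_points)
    return [o for o in display_objects
            if x0 <= o["center"][0] <= x1 and y0 <= o["center"][1] <= y1]


# Fingertips of left thumb/index and right thumb/index in display coordinates.
fingertips = [(1.0, 1.0), (1.0, 4.0), (5.0, 1.0), (5.0, 4.0)]
scene = [{"name": "a", "center": (3.0, 2.0)},
         {"name": "b", "center": (9.0, 2.0)}]
print([o["name"] for o in select_target(scene, fingertips)])  # ['a']
```

Object "a" lies inside the framed region and becomes the first target display object, while "b" outside the region is left alone.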
After the first target display object is determined, in S103 the first target display object is converted from an M-dimensional display object to an N-dimensional display object based on the first operation, where N is also a positive integer and M ≠ N. Specifically, in the embodiments of the present application, N may be greater than or smaller than M. When N > M, the dimensionality of the first target display object is increased based on the first operation; conversely, when N < M, the dimensionality of the first target display object is reduced based on the first operation. In a specific implementation process, a person of ordinary skill in the art may choose according to practice, and the present application is not particularly limited.
For example, as shown in fig. 2, assume the first operation is one in which the left and right hands move apart from contact. The electronic device determines the display object in the approximately rectangular region enclosed by the straight lines of the left index finger and thumb and the right index finger and thumb to be the first target display object, and converts it from a two-dimensional display object to a three-dimensional display object based on the first operation. The user therefore observes that, as the left and right hands separate, the display object within that region is converted from two dimensions to three dimensions.
Alternatively, as shown in fig. 3, assume the first operation is one in which the separated left and right hands move together. The electronic device determines the display object in the approximately rectangular region enclosed by the straight lines of the left index finger and thumb and the right index finger and thumb to be the first target display object, and converts it from a three-dimensional display object to a two-dimensional display object based on the first operation. The user therefore observes that, as the left and right hands approach each other, the display object within that region is converted from three dimensions to two dimensions.
Alternatively, assume that the first operation is an action in which the user pushes the right hand out from the body and then clenches the fist and retracts it, similar to grabbing something in front of the body. The electronic device determines that the display object corresponding to the position of the clenched right fist in the display coordinate system is the first target display object. When the user clenches the fist and draws the hand back, the electronic device converts the first target display object from two dimensions to three dimensions. The user therefore observes that one of the at least one display object is, as if grasped by the right hand, converted from a planar display object to a stereoscopic one.
Alternatively, assume that the first operation is an action in which the user pushes the right fist out from the body, then releases the fist, opens the palm, and continues to push forward, similar to pushing against an object. The electronic device determines that the display object corresponding to the position of the clenched right fist in the display coordinate system is the first target display object. When the user opens the palm and pushes forward, the electronic device converts the first target display object from three dimensions to two dimensions. The user therefore observes that one of the at least one display object is pushed flat into a two-dimensional display object.
In a specific implementation process, S101 may be implemented in various ways. Two of them are described in detail below; specific implementations include, but are not limited to, these two.
The first implementation:
In the first implementation, detecting the first operation of the operation body includes the following process:
detecting motion trajectories of at least two sub-operation bodies included in the operation body; and
when the motion trajectories of the at least two sub-operation bodies satisfy a first preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where N > M.
Specifically, the operation body performs the first operation through the movement of at least two sub-operation bodies. A sub-operation body is, for example, a hand. Taking the operation body as the user's two hands as an example, the at least two sub-operation bodies are the user's left hand and right hand.
The motion trajectories of the user's left hand and right hand are detected respectively, thereby obtaining the motion trajectories of both hands. In the embodiments of the present application, the motion trajectory of a sub-operation body may be detected by a motion-sensing device, an image acquisition unit, or another device.
Specifically, if detection is performed by a motion-sensing device, the user holds the two parts of the motion-sensing device in the left and right hands respectively while moving. During the motion, the first part of the motion-sensing device detects the motion trajectory of the left hand and sends it to the electronic device, and the second part detects the motion trajectory of the right hand and sends it to the electronic device. Finally, the electronic device combines the two to obtain the motion trajectories of the left and right hands.
Alternatively, as shown in fig. 2, while using the electronic device the user can extend the left and right hands in front of the body to perform the actions shown in fig. 2. An image acquisition unit arranged on the electronic device captures images of the movements of the user's left and right hands and sends them to the electronic device. The electronic device stores in advance the conversion relationships among the image acquisition unit coordinate system, the user viewing-angle coordinate system, and the display coordinate system; the left-hand and right-hand coordinates are recognized from the images transmitted by the image acquisition unit, and what the user sees during the movement, as well as the coordinates of the movement in the display coordinate system, can then be determined through these conversion relationships.
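The mapping between coordinate systems described above can be sketched with a stored homogeneous transform. The matrix values and function name below are made-up illustrations; the patent does not specify the form of the conversion relationship.

```python
# Minimal sketch of mapping a hand position from the image-acquisition
# (camera) coordinate system into the display coordinate system via a
# pre-stored 4x4 homogeneous transform. The matrix values are invented
# for illustration; real devices would obtain them by calibration.
def apply_transform(matrix, point):
    """Apply a 4x4 homogeneous matrix to a 3-D point."""
    x, y, z = point
    vec = (x, y, z, 1.0)
    out = [sum(matrix[r][c] * vec[c] for c in range(4)) for r in range(4)]
    w = out[3]
    return tuple(v / w for v in out[:3])


# Example camera->display transform: a pure translation by (1, 2, 0).
CAM_TO_DISPLAY = [
    [1.0, 0.0, 0.0, 1.0],
    [0.0, 1.0, 0.0, 2.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
]

# A hand position recognized in camera coordinates, mapped into display space.
print(apply_transform(CAM_TO_DISPLAY, (0.5, 0.5, 0.0)))  # (1.5, 2.5, 0.0)
```

Chaining such transforms (camera to viewing angle, viewing angle to display) is one conventional way to realize the pre-stored conversion relationships the text refers to.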
Then, whether the motion trajectories of the at least two sub-operation bodies satisfy a first preset condition is judged. In the embodiments of the present application, the first preset condition is a condition indicating that the dimensionality of the first target display object is to be increased. In a specific implementation, the first preset condition is, for example, that the trajectories indicate the left hand and right hand moving apart, or that they indicate the hands moving together; the present application is not particularly limited.
When the motion trajectories of the at least two sub-operation bodies satisfy the first preset condition, it is determined that the first operation is an operation for adjusting the first target display object from an M-dimensional display object to an N-dimensional display object. In the first implementation, N > M.
Therefore, in the first implementation, the user can increase the dimensionality of a display object by performing an operation that satisfies the first preset condition, which is convenient for the user.
The second implementation:
In the second implementation, detecting the first operation of the operation body includes the following process:
detecting motion trajectories of at least two sub-operation bodies included in the operation body; and
when the motion trajectories satisfy a second preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where M > N.
In the second implementation, detecting the motion trajectories of the at least two sub-operation bodies included in the operation body is similar to the first implementation, so the similar parts are not repeated here.
After the motion trajectories of the at least two sub-operation bodies are detected, whether they satisfy a second preset condition is judged. In the embodiments of the present application, the second preset condition differs from the first preset condition and indicates that the dimensionality of the display object is to be reduced. In a specific implementation, the second preset condition may be set opposite to the first: for example, if the first preset condition is that the trajectories indicate the hands moving apart, the second is that they indicate the hands moving together, or vice versa. A person of ordinary skill in the art may choose according to practice, and the present application is not limited to this.
When the motion trajectories of the at least two sub-operation bodies satisfy the second preset condition, it is determined that the first operation is an operation for adjusting the first target display object from an M-dimensional display object to an N-dimensional display object. In the second implementation, M > N.
Therefore, in the second implementation, the user can reduce the dimensionality of a display object by performing an operation that satisfies the second preset condition, which is convenient for the user.
In a specific implementation process, S101 may adopt the first implementation, so that the user can increase the dimensionality of a display object as needed, or the second implementation, so that the user can reduce it as needed. The two implementations may also be combined, so that the user performs an operation whose trajectory satisfies the first preset condition to increase dimensionality, and one whose trajectory satisfies the second preset condition to decrease it. A person of ordinary skill in the art may choose according to practice, and the present application is not particularly limited.
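The combined version of the two preset conditions can be sketched as a small trajectory classifier. This is a hedged illustration: the concrete conditions (hands apart means increase, hands together means decrease), the trajectory format, and the threshold are assumptions layered on the patent's general description.

```python
# Hedged sketch of the combined first/second preset conditions: if the left-
# and right-hand trajectories end farther apart than they started, classify
# the gesture as "increase dimensionality" (first preset condition, N > M);
# if they end closer together, classify it as "decrease" (second preset
# condition, M > N). Trajectory format and threshold are assumptions.
import math


def _dist(p, q):
    """Euclidean distance between two 2-D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])


def classify_gesture(left_track, right_track, threshold=0.1):
    """Classify paired hand trajectories into increase/decrease/none."""
    start = _dist(left_track[0], right_track[0])
    end = _dist(left_track[-1], right_track[-1])
    if end - start > threshold:
        return "increase"   # hands moved apart, as in fig. 2
    if start - end > threshold:
        return "decrease"   # hands moved together, as in fig. 3
    return "none"


# Hands start in contact and move apart.
left = [(0.0, 0.0), (-1.0, 0.0), (-2.0, 0.0)]
right = [(0.1, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(classify_gesture(left, right))  # increase
```

Running the same trajectories in reverse (hands moving together) yields `"decrease"`, covering the second preset condition with the same code.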
Further, in another embodiment of the present application, the method further includes:
controlling the dimensionality of the display objects other than the first target display object in the at least one display object to remain unchanged.
Specifically, in the embodiments of the present application, while the dimensionality of the first target display object is increased or decreased, the dimensionality of the other display objects in the at least one display object is controlled to remain unchanged.
In this way, the user observes that only the display objects targeted by his own operation change dimensionality, while the other display objects do not change arbitrarily, which improves the user experience.
Of course, in a specific implementation process, the dimensionality of display objects other than the first target display object may instead be controlled to increase or decrease along with that of the first target display object; the present application is not particularly limited.
Further, in another embodiment of the present application, the method further includes:
obtaining a position parameter of the user;
judging whether a third preset condition is satisfied between the position parameter and a second target display object in the at least one display object, the second target display object being a K-dimensional display object, where K is a positive integer and K ≠ N; and
when the third preset condition is satisfied between the position parameter and the second target display object, converting the second target display object from a K-dimensional display object to an N-dimensional display object based on the position parameter.
Specifically, in the embodiments of the present application, the electronic device obtains the position parameter of the user, for example through an image acquisition unit or a gravity sensor. Alternatively, the electronic device may receive a position parameter of the user acquired and sent by another device; the present application is not particularly limited. The position parameter indicates the position of the user relative to the display objects.
Next, based on the position parameter, it is judged whether a third preset condition is satisfied between the position parameter and a second target display object in the at least one display object. Specifically, the second target display object is any one of the at least one display object, and its dimensionality is K. K is a positive integer that may be the same as or different from M; however, K differs from N, i.e., K ≠ N.
In this embodiment of the present application, the third preset condition is a condition indicating that the user cannot observe the second target display object from the current position. For example, a two-dimensional display object can be viewed from the front or from the back, but not from the side (edge-on).
Therefore, when the third preset condition is satisfied between the position parameter and the second target display object, the user cannot observe the second target display object from the current position. Then, to make the second target display object viewable, it is converted from the K-dimensional display object to an N-dimensional display object based on the position parameter.
Specifically, Fig. 4 is a schematic diagram of the positions of the user and the second target display object in the display coordinate system. The circle represents the user's position, and the black horizontal line represents the second target display object, specifically the two-dimensional character "a". Since the user stands at the side of the character "a", the character cannot be seen, so the position parameter and the second target display object satisfy the third preset condition. The electronic device therefore converts the two-dimensional character "a" into a three-dimensional character "a". Since at least one surface of a three-dimensional display object is visible from any angle, the user can now view the character "a".
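The side-view scenario of Fig. 4 can be sketched in code. The following is a minimal illustration, not part of the patent: it assumes each display object records a position, a plane normal, and a dimension count, and models the third preset condition as the viewing direction lying (nearly) in the object's plane. All field names and the 10-degree threshold are illustrative assumptions.

```python
import math

def can_view_2d(viewer_pos, obj_pos, obj_normal, min_angle_deg=10.0):
    """Return True if a viewer can see a flat (2D) display object.

    A 2D object is visible from the front or back but not edge-on, so
    we test the angle between the viewing direction and the object's
    plane. (The 10-degree threshold is an illustrative assumption.)
    """
    view = [o - v for v, o in zip(viewer_pos, obj_pos)]
    norm = math.sqrt(sum(c * c for c in view))
    if norm == 0:
        return True
    # |cos| of the angle between the view direction and the plane normal
    cos_a = abs(sum(v * n for v, n in zip(view, obj_normal))) / norm
    # Angle measured from the plane itself = 90 deg - angle from the normal
    return math.degrees(math.asin(min(1.0, cos_a))) >= min_angle_deg

def maybe_promote(obj, viewer_pos):
    """Convert a 2D object to 3D (e.g. by extrusion) when the third
    preset condition is met, i.e. the viewer cannot observe it."""
    if obj["dims"] == 2 and not can_view_2d(viewer_pos, obj["pos"], obj["normal"]):
        obj["dims"] = 3        # K-dimensional -> N-dimensional
        obj["depth"] = 0.1     # illustrative extrusion depth
    return obj
```

With the viewer directly in front of the object the test passes and nothing changes; with the viewer at the side (as in Fig. 4), the object is promoted to three dimensions.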
As can be seen from the above description, when the third preset condition is satisfied between the position parameter and the second target display object, that is, when the user cannot observe the second target display object from the current position, the second target display object is converted into an N-dimensional display object, so that the user can observe it from the current position.
Further, when a plurality of electronic devices share the same at least one display object at the same time, this prevents a newly joined user from being unable to observe some of the display objects, which improves the user experience.
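The shared-scene behaviour described above can be sketched as follows. This is only an illustrative assumption of how a device might rescan the scene when a new user joins; the `visible` rule, the field names, and the target dimension N = 3 are all hypothetical:

```python
def on_user_join(objects, user_pos, convert):
    """Sketch: when a new user joins a scene shared across devices,
    scan all display objects and convert any object the newcomer
    cannot observe. The visibility rule is a simple edge-on test for
    flat objects: a 2D object is invisible when the view direction
    lies in its plane (dot product with the plane normal is ~0).
    """
    def visible(obj):
        if obj["dims"] != 2:
            return True        # a 3D object shows a face at any angle
        view = tuple(o - u for u, o in zip(user_pos, obj["pos"]))
        dot = sum(v * n for v, n in zip(view, obj["normal"]))
        return abs(dot) > 1e-9  # edge-on => dot ~ 0
    for obj in objects:
        if not visible(obj):
            convert(obj, 3)    # K-dimensional -> N-dimensional (here N = 3)
    return objects
```

Passing the conversion routine in as a callback keeps the rescan logic independent of how a particular device implements the K-to-N conversion.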
Based on the same inventive concept as the information processing method in the foregoing embodiment, a second aspect of the present application further provides an electronic device, as shown in fig. 5, including:
a detection unit 501 configured to detect a first operation of an operation body, the first operation being used to control at least one display object;
a determining unit 502, configured to determine a first target display object corresponding to the first operation from the at least one display object; the first target display object is an M-dimensional display object;
a conversion unit 503, configured to convert the first target display object from the M-dimensional display object to an N-dimensional display object based on the first operation; m, N are all positive integers, and M ≠ N.
Specifically, the detection unit 501 is configured to detect the motion trajectories of at least two sub-operation bodies included in the operation body; when the motion trajectories of the at least two sub-operation bodies satisfy a first preset condition, it is determined that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where N is greater than M.
Alternatively, the detection unit 501 is configured to detect the motion trajectories of at least two sub-operation bodies included in the operation body; when the motion trajectories satisfy a second preset condition, it is determined that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where M is greater than N.
Further, the electronic device further includes:
and the control unit is used for controlling the dimensionality of other display objects except the first target display object in the at least one display object to be kept unchanged.
Still further, the electronic device further includes:
an obtaining unit configured to obtain a position parameter of a viewer;
the judging unit is used for judging whether a third preset condition is met between the position parameter and a second target display object in the at least one display object; the second target display object is a K-dimensional display object in the other display objects, K is a positive integer, and K is not equal to N;
the conversion unit is further configured to convert the second target display object from the K-dimensional display object to an N-dimensional display object based on the position parameter when the third preset condition is satisfied between the position parameter and the second target display object.
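The unit structure listed above (detection, determining, conversion, and control units) can be mirrored as a minimal object model. This is a hedged sketch with illustrative names, not the claimed implementation; a real device would wire these methods to sensors and a renderer:

```python
class InfoProcessingDevice:
    """Sketch of the unit structure of Fig. 5 (names are illustrative)."""

    def __init__(self, display_objects):
        # Each display object is modeled as a dict with a "dims" key.
        self.display_objects = display_objects

    def detect(self, raw_event):            # detection unit 501
        return raw_event.get("operation")

    def determine_target(self, operation):  # determining unit 502
        return self.display_objects[operation["target_index"]]

    def convert(self, obj, n_dims):         # conversion unit 503
        assert obj["dims"] != n_dims        # M != N, per the claims
        obj["dims"] = n_dims
        return obj

    def hold_others(self, target):          # control unit: others unchanged
        return [o for o in self.display_objects if o is not target]
```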
Various modifications and specific examples of the information processing method in the foregoing embodiments of Figs. 1 to 4 also apply to the electronic device of this embodiment. Through the foregoing detailed description of the information processing method, those skilled in the art can clearly understand how the electronic device of this embodiment is implemented, so the details are not repeated here for brevity.
Based on the same inventive concept as the information processing method in the foregoing embodiment, the third aspect of the present application further provides an electronic device, as shown in fig. 6 and 7, including:
a display device 701 for displaying at least one display object;
a detecting device 702, configured to detect a first operation of an operation body, where the first operation is used to control the at least one display object;
a processor 703 configured to determine a first target display object corresponding to the first operation from the at least one display object; the first target display object is an M-dimensional display object; converting the first target display object from the M-dimensional display object to an N-dimensional display object based on the first operation; m, N are all positive integers, and M ≠ N.
Specifically, the detecting device 702 is configured to detect the motion trajectories of at least two sub-operation bodies included in the operation body;
the processor 703 is configured to determine that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object when the motion trajectories of the at least two sub-operation bodies satisfy a first preset condition, where N is greater than M.
Alternatively, the detecting device 702 is configured to detect the motion trajectories of at least two sub-operation bodies included in the operation body;
further, the processor 703 is configured to determine that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object when the motion trajectories satisfy a second preset condition, where M is greater than N.
Further, the processor 703 is further configured to control dimensions of other display objects than the first target display object in the at least one display object to remain unchanged.
Further, the processor 703 is also configured to obtain a position parameter of the viewer; judging whether a third preset condition is met between the position parameter and a second target display object in the at least one display object; the second target display object is a K-dimensional display object in the at least one display object, K is a positive integer, and K is not equal to N; and when the third preset condition is met between the position parameter and the second target display object, converting the second target display object from the K-dimensional display object to an N-dimensional display object based on the position parameter.
Various modifications and specific examples of the information processing method in the foregoing embodiments of Figs. 1 to 4 also apply to the electronic device of this embodiment. Through the foregoing detailed description of the information processing method, those skilled in the art can clearly understand how the electronic device of this embodiment is implemented, so the details are not repeated here for brevity.
One or more technical solutions in the embodiments of the present application have at least one or more of the following technical effects:
as can be seen from the above description, in the technical solution of the embodiments of the present application, a first operation of an operation body is first detected, the first target display object corresponding to the first operation is then determined from the at least one display object, and the dimension of the first target display object is converted based on the first operation. Dimension conversion of a display object according to the user's requirement is thereby achieved.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Specifically, the computer program instructions corresponding to the information processing method in the embodiments of the present application may be stored on a storage medium such as an optical disc, a hard disk, or a USB disk. When the computer program instructions in the storage medium corresponding to the information processing method are read and executed by an electronic device, the following steps are included:
detecting a first operation of an operation body, wherein the first operation is used for controlling at least one display object;
determining a first target display object corresponding to the first operation from the at least one display object; the first target display object is an M-dimensional display object;
converting the first target display object from the M-dimensional display object to an N-dimensional display object based on the first operation; m, N are all positive integers, and M ≠ N.
Optionally, the computer instructions stored in the storage medium corresponding to the step of detecting a first operation of an operation body, when executed, specifically include the following steps:
detecting the motion tracks of at least two sub operation bodies included by the operation body;
when the motion tracks of the at least two sub-operation bodies meet a first preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where N is greater than M.
Optionally, the computer instructions stored in the storage medium corresponding to the step of detecting a first operation of an operation body, when executed, specifically include the following steps:
detecting the motion tracks of at least two sub operation bodies included by the operation body;
when the motion track meets a second preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object, where M is greater than N.
Optionally, the storage medium further stores other computer instructions, and the computer instructions when executed include the following steps:
and controlling the dimension of other display objects except the first target display object in the at least one display object to be kept unchanged.
Optionally, the storage medium further stores other computer instructions, and the computer instructions when executed include the following steps:
obtaining a position parameter of a viewer;
judging whether a third preset condition is met between the position parameter and a second target display object in the at least one display object; the second target display object is a K-dimensional display object in the at least one display object, K is a positive integer, and K is not equal to N;
and when the third preset condition is met between the position parameter and the second target display object, converting the second target display object from the K-dimensional display object to an N-dimensional display object based on the position parameter.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (9)
1. An information processing method is applied to an electronic device, the electronic device is an AR device or a VR device, and the method comprises the following steps:
detecting a first operation of an operation body, wherein the first operation is used for controlling at least one display object, and the first operation is an operation in a space or a plane area corresponding to the display object;
determining a first target display object corresponding to the first operation from the at least one display object; the first target display object is an M-dimensional display object;
converting the first target display object from the M-dimensional display object to an N-dimensional display object based on the first operation; m, N are all positive integers, and M is not equal to N;
the method further comprises the following steps:
obtaining a position parameter of a viewer;
judging whether a third preset condition is met between the position parameter and a second target display object in the at least one display object; the second target display object is a K-dimensional display object in the at least one display object, K is a positive integer, and K is not equal to N;
when the third preset condition is met between the position parameter and the second target display object, converting the second target display object from the K-dimensional display object to an N-dimensional display object based on the position parameter; the third preset condition is a condition that the user cannot observe the second target display object at the current position.
2. The method of claim 1, wherein detecting a first operation of an operator comprises:
detecting the motion tracks of at least two sub operation bodies included by the operation body;
when the motion tracks of the at least two sub operation bodies meet a first preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object; n is more than M.
3. The method of claim 1, wherein detecting a first operation of an operator comprises:
detecting the motion tracks of at least two sub operation bodies included by the operation body;
when the motion track meets a second preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object; m is more than N.
4. The method of claim 2 or 3, wherein the method further comprises:
and controlling the dimension of other display objects except the first target display object in the at least one display object to be kept unchanged.
5. An electronic device, the electronic device being an AR device or a VR device, comprising:
a detection unit configured to detect a first operation of an operation body, the first operation being used to control at least one display object, and the first operation being an operation within a space or a planar area corresponding to the display object;
a determining unit, configured to determine a first target display object corresponding to the first operation from the at least one display object; the first target display object is an M-dimensional display object;
a conversion unit configured to convert the first target display object from the M-dimensional display object to an N-dimensional display object based on the first operation; M, N are all positive integers, and M is not equal to N; the detection unit is configured to: obtain a position parameter of a viewer; the determination unit is configured to determine whether a third preset condition is satisfied between the position parameter and a second target display object of the at least one display object; the second target display object is a K-dimensional display object in the at least one display object, K is a positive integer, and K is not equal to N;
when the third preset condition is met between the position parameter and the second target display object, converting the second target display object from the K-dimensional display object to an N-dimensional display object based on the position parameter; the third preset condition is a condition that the user cannot observe the second target display object at the current position.
6. The electronic device according to claim 5, wherein the detection unit is configured to detect motion trajectories of at least two sub-operators included in the operator; when the motion tracks of the at least two sub operation bodies meet a first preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object; n is more than M.
7. The electronic device according to claim 5, wherein the detection unit is configured to detect motion trajectories of at least two sub-operators included in the operator; when the motion track meets a second preset condition, determining that the first operation is an operation for converting the first target display object from an M-dimensional display object to an N-dimensional display object; m is more than N.
8. The electronic device of claim 6 or 7, wherein the electronic device further comprises:
and the control unit is used for controlling the dimensionality of other display objects except the first target display object in the at least one display object to be kept unchanged.
9. An electronic device, the electronic device being an AR device or a VR device, comprising:
display means for displaying at least one display object;
detecting means for detecting a first operation of an operation body, the first operation being for controlling the at least one display object, and the first operation being an operation within a space or a planar area corresponding to the display object;
a processor configured to determine a first target display object corresponding to the first operation from the at least one display object; the first target display object is an M-dimensional display object; converting the first target display object from the M-dimensional display object to an N-dimensional display object based on the first operation; M, N are all positive integers, and M is not equal to N; the detecting means is further configured to obtain a position parameter of a viewer; the processor is configured to determine whether a third preset condition is met between the position parameter and a second target display object in the at least one display object; the second target display object is a K-dimensional display object in the at least one display object, K is a positive integer, and K is not equal to N; when the third preset condition is met between the position parameter and the second target display object, converting the second target display object from the K-dimensional display object to an N-dimensional display object based on the position parameter; the third preset condition is a condition that the user cannot observe the second target display object at the current position.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610170355.4A CN105787971B (en) | 2016-03-23 | 2016-03-23 | Information processing method and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105787971A CN105787971A (en) | 2016-07-20 |
CN105787971B true CN105787971B (en) | 2019-12-24 |
Family
ID=56390666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610170355.4A Active CN105787971B (en) | 2016-03-23 | 2016-03-23 | Information processing method and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105787971B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111104821A (en) * | 2018-10-25 | 2020-05-05 | 北京微播视界科技有限公司 | Image generation method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101952818A (en) * | 2007-09-14 | 2011-01-19 | 智慧投资控股67有限责任公司 | Processing based on the user interactions of attitude |
CN102053771A (en) * | 2009-11-06 | 2011-05-11 | 神达电脑股份有限公司 | Method for adjusting information presented on handheld electronic device |
CN102298493A (en) * | 2010-06-28 | 2011-12-28 | 株式会社泛泰 | Apparatus for processing interactive three-dimensional object |
CN102541442A (en) * | 2010-12-31 | 2012-07-04 | Lg电子株式会社 | Mobile terminal and hologram controlling method thereof |
CN103955275A (en) * | 2014-04-21 | 2014-07-30 | 小米科技有限责任公司 | Application control method and device |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5902346B2 (en) * | 2012-03-29 | 2016-04-13 | インテル コーポレイション | Creating 3D graphics using gestures |
CN103246351B (en) * | 2013-05-23 | 2016-08-24 | 刘广松 | A kind of user interactive system and method |
JP6096634B2 (en) * | 2013-10-17 | 2017-03-15 | 株式会社ジオ技術研究所 | 3D map display system using virtual reality |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11157725B2 (en) | Gesture-based casting and manipulation of virtual content in artificial-reality environments | |
EP2907004B1 (en) | Touchless input for a user interface | |
US9939914B2 (en) | System and method for combining three-dimensional tracking with a three-dimensional display for a user interface | |
EP3639120B1 (en) | Displacement oriented interaction in computer-mediated reality | |
CN102662577B (en) | A kind of cursor operating method based on three dimensional display and mobile terminal | |
JP6057396B2 (en) | 3D user interface device and 3D operation processing method | |
KR101171660B1 (en) | Pointing device of augmented reality | |
US20140015831A1 (en) | Apparatus and method for processing manipulation of 3d virtual object | |
US20170140552A1 (en) | Apparatus and method for estimating hand position utilizing head mounted color depth camera, and bare hand interaction system using same | |
EP2558924B1 (en) | Apparatus, method and computer program for user input using a camera | |
JP6524589B2 (en) | Click operation detection device, method and program | |
Debarba et al. | Disambiguation canvas: A precise selection technique for virtual environments | |
CN105630155B (en) | Computing device and method for providing three-dimensional (3D) interaction | |
JP6632681B2 (en) | Control device, control method, and program | |
KR20190059727A (en) | Interactive system for controlling complexed object of virtual reality environment | |
JP5863984B2 (en) | User interface device and user interface method | |
CN105787971B (en) | Information processing method and electronic equipment | |
Lee et al. | Tunnelslice: Freehand subspace acquisition using an egocentric tunnel for wearable augmented reality | |
Caputo et al. | Single-Handed vs. Two Handed Manipulation in Virtual Reality: A Novel Metaphor and Experimental Comparisons. | |
CN111068309A (en) | Display control method, device, equipment, system and medium for virtual reality game | |
EP3702008A1 (en) | Displaying a viewport of a virtual space | |
Halim et al. | Designing ray-pointing using real hand and touch-based in handheld augmented reality for object selection | |
EP3599538B1 (en) | Method and apparatus for adding interactive objects to a virtual reality environment | |
CN113034701A (en) | Method for modifying the rendering of a region of a 3D scene in an immersive environment | |
WO2014014461A1 (en) | System and method for controlling an external system using a remote device with a depth sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||