Notification method for guiding a user's visual attention to a specific interaction target using facial tactile feedback
Technical Field
The invention relates to the field of human-computer interaction, in particular to a notification method for guiding visual attention of a user to a specific interaction target by using facial tactile feedback.
Background
In usage scenarios of head-mounted virtual, augmented, or mixed reality devices (VR/AR/MR, collectively XR), some tasks require a user to quickly shift their line of sight to an interaction target located at the periphery of, or outside, the field of view in a complex visual environment. These interaction targets include, but are not limited to, people, items, windows, and icons in a virtual or real scene, in task scenarios such as three-dimensional spatial navigation, finding items in a virtual or real environment, perceiving social objects, and identifying virtual devices or windows. Compared with the audiovisual channels, haptic sensation has the potential to provide more accurate spatial orientation information than auditory perception while avoiding excessive visual sensory burden.
The prior art has provided haptic feedback to the head and face through the wearable device itself (e.g. a head-mounted display, HMD) for spatial guidance. Related studies have demonstrated that specific haptic feedback to the head and face can be intuitively mapped to the user's related body movements and eye movements, helping people shift attention to both in-view and out-of-view targets. FacePush is a device that generates normal force on the left or right side of the user's face to direct attention to off-screen objects, mapping feedback intensity to rotation angle. Masque adds six actuators to the facial area in contact with the HMD to provide lateral skin stretching; its designers devised several patterns varying both stretch direction and stretch point to guide the user to look or move in a particular direction in the room. Tseng et al. proposed a skin-stroking haptic device mounted inside an HMD that produces stroking feedback around the eyes and can serve as an off-screen indicator. In general, these devices use tactile stimuli to indicate the basic direction of body movement, constrained by the number and position of the actuators. Some studies have explored tactile stimuli for navigating to both in-view and out-of-view targets. Virtual Whiskers mounts two robotic arms on either side of the face and stimulates a target point calculated from the relative position of the face, covering 180° of the azimuth plane and 90° of the elevation plane. de Jesus Oliveira et al. deployed five vibration actuators on the forehead and two near the temples; the vibrotactile HMD points in the direction of an object in the azimuth plane (202.5° field of view), and the vibration frequency peaks at the correct elevation angle (from −22.5° to 45°). In addition to devices attached to the HMD, tabletop ultrasound devices can also provide haptic feedback to the face.
Whiskers applies a set of tactile stimuli spaced 3.5 mm apart over the user's cheeks, forehead centre, and eyebrows; the direction of motion of the haptic stimulus directs the user's line of sight within a 60° field of view.
These studies provide basic bearing cues toward the interaction target, but lack any indication of the interaction target's spatial distance from the user. As a result, when multiple interaction targets lie in the same direction but at different distances from the user, with some degree of mutual occlusion, the user cannot precisely shift visual attention to a particular one of them. In addition, the foregoing approaches do not use an array of feedback units and therefore provide only low haptic resolution. Finally, they lack the ability to deliver a variety of haptic stimuli such as pressure, vibration, and temperature, and thus fail to provide a rich haptic feedback effect.
Disclosure of Invention
The present invention provides a notification method that uses facial haptic feedback to direct the visual attention of a user to a particular interaction target. The invention delivers tactile stimuli carrying directional (azimuth) information to the skin surface around the user's eyes through haptic feedback units that simultaneously provide force, vibration, and temperature feedback, such as shape memory alloy (SMA) haptic feedback devices, thereby supporting visual-tactile fused human-computer interaction. Several haptic feedback units are integrated into an existing head-mounted display to provide haptic feedback to the user's forehead, cheeks, temples, and the facial skin around the eyes, helping the user locate interaction targets at specific spatial positions and distances in virtual or augmented reality scenarios. When the system identifies an object to be indicated, it calculates the object's position and distance relative to the center of the user's visual field, helping the user's line of sight locate the target quickly and reducing visual search time. In addition, by controlling haptic stimulus parameters such as the intensity, amplitude, and spatiotemporal pattern of the haptic feedback units, the invention can present multidimensional notifications about the source interaction target, such as urgency, event type, progress, emotional state, and movement.
Technical proposal
A notification method for guiding a user's visual attention to a specific interaction target using facial tactile feedback uses tactile feedback generated by a facial tactile array to indicate an interaction target's position and distance relative to the center of the user's visual field, and presents multidimensional notification information through changes in tactile parameters.
The invention provides a notification method for guiding a user's visual attention to a specific interaction target using facial tactile feedback, comprising the following steps:
(1) Identifying the target object and, along the line connecting it to the user's eyebrow center, judging the distance between the object and the user and the azimuth angle relative to the user's current facing direction;
(2) When the user starts to move toward the target object, continuously updating the feedback in real time according to the dynamic change of direction between the user and the target, for example from left to right;
(3) When the user starts to move toward the target object, continuously updating the feedback in real time according to the dynamic change of distance between the user and the target, for example the feedback becoming stronger as the user gets closer;
(4) Presenting multidimensional notifications about the source interaction target, such as urgency, information type, progress, and emotional state, by controlling design parameters of the haptic feedback units such as intensity, amplitude, and spatiotemporal pattern;
(5) Judging, from the eye-movement focus, whether the user's line of sight has located the target object, and if so stopping the tactile feedback.
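By way of illustration only (not part of the claimed method), step (1) can be sketched as a small geometric computation. The coordinate convention below is an assumption for the sketch: the user's central sight line is the X-axis, Y is up, positive Z is to the user's right, and head orientation is reduced to a yaw angle.

```python
import math

def target_azimuth_and_distance(target_xyz, eyebrow_xyz, facing_yaw_deg):
    """Illustrative sketch of step (1): distance along the line from the
    user's eyebrow center to the target, and azimuth relative to the
    user's current facing direction. Names and conventions are assumptions."""
    dx = target_xyz[0] - eyebrow_xyz[0]
    dz = target_xyz[2] - eyebrow_xyz[2]
    distance = math.dist(target_xyz, eyebrow_xyz)
    # Azimuth of the eyebrow-to-target line in the horizontal (X-Z) plane,
    # expressed relative to the facing direction and wrapped to (-180, 180].
    azimuth = math.degrees(math.atan2(dz, dx)) - facing_yaw_deg
    azimuth = (azimuth + 180.0) % 360.0 - 180.0
    return azimuth, distance
```

A target one metre ahead and one metre to the right thus yields an azimuth of 45° and a distance of √2 m.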
Preferably, in the step (2), the tactile stimulus on the left side of the face directs the user to look to the left, and the tactile stimulus on the right side of the face directs the user to look to the right.
Preferably, step (3) adopts a 'near-large, far-small' principle analogous to vision: the closer the target is to the user, the larger the area of the tactile stimulus, which reduces the user's learning cost.
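As an illustrative sketch (not a claimed parameterization), the 'near-large, far-small' principle can be realized as a linear mapping from target distance to stimulated area; the distance bounds and radius range below are assumed values.

```python
def stimulus_radius_mm(distance_m, d_min=0.5, d_max=10.0,
                       r_min=3.0, r_max=20.0):
    """Illustrative 'near-large, far-small' mapping for step (3): the
    closer the target, the larger the stimulated facial area. All bounds
    are assumptions, not claimed values."""
    d = min(max(distance_m, d_min), d_max)
    t = (d - d_min) / (d_max - d_min)  # 0 at the nearest bound, 1 at the farthest
    return r_max - t * (r_max - r_min)
```

With these assumed bounds, a target at 0.5 m drives a 20 mm stimulus radius and a target at 10 m drives a 3 mm radius.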
Preferably, in step (4), the multidimensional notification about the source interaction target, including but not limited to urgency, information type, progress, emotional state, and movement, is presented by controlling design parameters of the haptic feedback units such as intensity, amplitude, and spatiotemporal pattern; for example, a stronger intensity represents a more urgent event, and two consecutive tactile stimuli represent a social-software message.
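The encoding in step (4) can be sketched as a simple table from notification semantics to haptic parameters; the intensity range and the "social" type label below are illustrative assumptions, not part of the claims.

```python
def encode_notification(urgency, info_type):
    """Illustrative encoding for step (4): higher urgency maps to stronger
    intensity, and a social-software message is rendered as two
    consecutive pulses. Ranges and labels are assumptions."""
    # Clamp urgency to [0, 1], then map to an assumed 0.3..1.0 intensity band.
    intensity = 0.3 + 0.7 * max(0.0, min(1.0, urgency))
    pulses = 2 if info_type == "social" else 1
    return {"intensity": intensity, "pulses": pulses}
```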
The beneficial effects of the invention are as follows:
1. The invention provides real-time, dynamic cues of relative direction and distance, enabling more accurate localization of the information-source interaction target in complex scenes.
2. The invention fuses multimodal haptic feedback such as force, vibration, and heat, providing a rich haptic experience.
3. The invention presents multidimensional notifications about source interaction targets, including but not limited to urgency, information type, progress, emotional state, and movement, by controlling design parameters of the haptic feedback units such as force, amplitude, and spatiotemporal pattern.
4. The array of haptic feedback units of the present invention provides higher-resolution haptic stimuli.
Drawings
In order to more clearly illustrate the technical solutions of the present disclosure, the drawings used in some embodiments of the present disclosure are briefly described below. It is apparent that the drawings in the following description relate only to some embodiments of the present disclosure, and that other drawings may be obtained from them by those of ordinary skill in the art. Furthermore, the drawings in the following description should be regarded as schematic diagrams that do not limit the actual size of the products, the actual flow of the methods, the actual timing of the signals, etc. according to the embodiments of the present disclosure.
FIG. 1 is a reference schematic diagram of facial positioning;
FIG. 2 is a schematic view of a user usage scenario 1;
FIG. 3 is a schematic view of a user usage scenario 2;
FIG. 4 is a schematic view of a user usage scenario 3.
Detailed Description
In order to make the technical solution of the present application better understood by those skilled in the art, the technical solution of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, shall fall within the scope of the present application.
The present invention will be described in further detail by way of examples.
Example 1
As shown in FIG. 1, a notification method for guiding a user's visual attention to a specific interaction target using facial tactile feedback comprises the following steps:
(1) Identifying the target object and, along the line connecting it to the user's eyebrow center, judging the distance between the object and the user and the azimuth angle relative to the user's current facing direction;
(2) Guiding the user to look in the corresponding direction by driving the feedback unit at the corresponding position in real time (e.g., a tactile stimulus on the left side of the face guides the user to look to the left);
(3) Indicating the distance of the target object through the number, force, amplitude, duration, and drive quantity of the tactile feedback units (for example, the closer the target is to the user, the larger the tactile stimulus area, following a 'near-large, far-small' principle analogous to vision, which reduces the user's learning cost);
(4) Presenting multidimensional notifications about the source interaction target, including but not limited to urgency, information type, progress, and emotional state, by controlling design parameters of the haptic feedback units such as force, amplitude, and spatiotemporal pattern. For example, a stronger force represents a more urgent event, and two consecutive tactile stimuli represent a social-software message. When the user starts to move toward the target object, the feedback is continuously updated in real time according to the dynamic changes of the user's distance and direction, for example becoming stronger as the user gets closer, or shifting from left to right.
(5) Judging, from the eye-movement focus, whether the user's line of sight has located the target object, and if so stopping the tactile feedback.
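The termination condition in step (5) can be sketched as an angular test between the eye-tracking gaze ray and the direction to the target; the 5° tolerance below is an illustrative assumption.

```python
import math

def gaze_on_target(gaze_dir, target_dir, threshold_deg=5.0):
    """Illustrative sketch of step (5): stop haptic feedback once the gaze
    ray points at the target. Both arguments are unit 3-vectors; the
    angular tolerance is an assumed value."""
    dot = sum(g * t for g, t in zip(gaze_dir, target_dir))
    # Clamp to guard against floating-point drift outside [-1, 1].
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= threshold_deg
```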
Example 2
As shown in FIG. 2, user usage scenario 1: social object recognition in an XR scene
In an XR scenario, a user needs to perceive the locations of virtual and real social objects at the same time. When the user is immersed in a virtual reality scene, the relative position and distance of social objects in the real scene can be perceived through haptic feedback without disrupting the sense of immersion. The user may also perceive the social state and intent of the social object through haptic feedback. When the user exits virtual reality and sees the real social object, the haptic feedback stops.
First, a camera identifies the people in the room. The spatial coordinates of real objects and virtual objects are converted into coordinate positions centered at the user's eyebrow, with the central line of sight as the X-axis. From these coordinate positions the system outputs the distance to the user and the position relative to the center of the user's visual field, and derives the actuation parameters of the haptic feedback units (such as spatiotemporal pattern, duration, pressure, and temperature). The haptic feedback units for that position then actuate according to the returned parameters and unit selection. When the user's eye-movement data indicate that the user is gazing at the social object, actuation stops.
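The coordinate conversion in this pipeline can be sketched as a rigid transform into the eyebrow-centred frame with the central sight line as the X-axis. For brevity the sketch assumes a pure yaw rotation (no head pitch or roll) and a Y-up, right-handed world frame; these conventions are assumptions, not claimed details.

```python
import math

def to_eyebrow_frame(world_xyz, eyebrow_xyz, yaw_deg):
    """Illustrative conversion of a world-space point into the
    eyebrow-centred frame, central sight line along +X. Assumes the
    head pose reduces to a yaw angle about the Y axis."""
    dx = world_xyz[0] - eyebrow_xyz[0]
    dy = world_xyz[1] - eyebrow_xyz[1]
    dz = world_xyz[2] - eyebrow_xyz[2]
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    # Rotate about Y so that the user's facing direction becomes +X.
    return (c * dx + s * dz, dy, -s * dx + c * dz)
```

For example, under these assumptions an object on the world +Z axis lands on the local +X axis when the user's yaw is 90°, i.e. it lies straight ahead.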
Example 3
As shown in FIG. 3, user usage scenario 2: message tracing across multiple virtual windows
In a home XR scenario, a user may place multiple windows at different spatial locations simultaneously. For example, the kitchen has a timer window and the living room has a television window. If the kitchen timer pops up a notification while the user is watching the television window, the user can, through tactile feedback, accurately perceive that the notification comes from the direction of the distant kitchen, and thereby precisely locate its spatial source. The cooking progress can also be judged from the intensity of the tactile stimulus.
The specific steps are as follows. First, a chat window issues an information notification, and the related information of the notification (such as urgency and information type) is obtained. The spatial coordinates of the virtual window are output according to the notification and converted into coordinate positions centered at the user's eyebrow, with the central line of sight as the X-axis. From these coordinate positions the system outputs the distance to the user and the position relative to the center of the user's visual field, and derives the actuation parameters of the haptic feedback units (such as the drive quantity). The haptic feedback units for that position then actuate according to the returned parameters and unit selection. When the user's eye-movement data indicate that the user is gazing at the window, actuation stops.
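Selecting which facial feedback unit to drive for a given notification direction can be sketched as binning azimuth and elevation onto a small actuator grid; the 3×3 grid, the ±15° bins, and the unit names below are illustrative assumptions, not a claimed layout.

```python
def select_actuator(azimuth_deg, elevation_deg):
    """Illustrative actuator selection: map a target direction onto an
    assumed 3x3 grid of facial feedback units. Bin edges and names
    are assumptions."""
    col = 0 if azimuth_deg < -15 else (2 if azimuth_deg > 15 else 1)
    row = 0 if elevation_deg > 15 else (2 if elevation_deg < -15 else 1)
    names = [["upper-left", "forehead", "upper-right"],
             ["left-cheek", "eyebrow-centre", "right-cheek"],
             ["lower-left", "lower-centre", "lower-right"]]
    return names[row][col]
```

A notification well to the left at eye level would thus drive the left-cheek unit, while one high and straight ahead would drive the forehead unit.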
Example 4
As shown in FIG. 4, user usage scenario 3: finding objects in a real scene
In some situations, such as visiting a museum, a user may want to view a particular painting but not know its specific location, and therefore need to search for it. The position of the target painting relative to the center of the user's visual field can be judged from position information combining the camera with indoor positioning, and tactile feedback cues can help the user find the corresponding painting. As the user approaches the target object, the haptic feedback gradually strengthens, and navigation instructions are given through dynamic changes of direction across the haptic array, such as left to right or bottom to top, until the target object appears within the user's field of view.
First, cameras in several directions recognize the painting, and the spatial information of the museum together with the user's current position yields the spatial coordinates of the real object. These coordinates are converted into coordinate positions centered at the user's eyebrow, with the central line of sight as the X-axis. From these coordinate positions the system outputs the distance to the user and the position relative to the center of the user's visual field, and derives the actuation parameters of the haptic feedback units (such as the spatiotemporal pattern). The haptic feedback units then actuate according to the returned parameters and unit selection. When the user's eye-movement data indicate that the user is gazing at the painting, actuation stops.
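The continuous navigation cue of this example can be sketched as a per-frame update that strengthens intensity as the user approaches and chooses a sweep direction of the actuator array from the sign of the azimuth; the 10 m range and the sweep labels are illustrative assumptions.

```python
def navigation_feedback(distance_m, azimuth_deg, max_dist=10.0):
    """Illustrative navigation cue for Example 4: intensity grows as the
    user approaches the target, and the azimuth sign picks the sweep
    direction across the facial array. Ranges are assumptions."""
    # Clamp intensity to [0, 1]: full strength at the target, zero at max_dist.
    intensity = max(0.0, min(1.0, 1.0 - distance_m / max_dist))
    sweep = "left-to-right" if azimuth_deg > 0 else "right-to-left"
    return intensity, sweep
```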
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.
While the fundamental principles and main features of the present invention and advantages of the present invention have been described above and illustrated, it will be apparent to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments, but may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description is organized by embodiments, not every embodiment contains only a single independent technical solution; this manner of description is adopted for clarity only. The description should be taken as a whole, and the technical solutions in the embodiments may be combined as appropriate to form other embodiments understandable to those skilled in the art.