
CN111352506A - Image processing method, device, equipment and computer readable storage medium


Info

Publication number
CN111352506A
CN111352506A (application CN202010082758.XA)
Authority
CN
China
Prior art keywords
data
imu
displayed
image
attitude data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010082758.XA
Other languages
Chinese (zh)
Inventor
杨东清
张振飞
范锡睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN202010082758.XA priority Critical patent/CN111352506A/en
Publication of CN111352506A publication Critical patent/CN111352506A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing apparatus, an image processing device and a computer-readable storage medium, wherein the method comprises the following steps: acquiring attitude data of an electronic device within a preset time period, wherein the attitude data comprises an IMU deviation and IMU data; determining predicted attitude data of the electronic device at a time to be displayed according to the IMU deviation and the IMU data within the preset time period; acquiring an image to be displayed according to the predicted attitude data; and displaying the image to be displayed when the time to be displayed arrives.

Description

Image processing method, device, equipment and computer readable storage medium
Technical Field
The embodiment of the application relates to the technical field of augmented reality, and relates to but is not limited to an image processing method, an image processing device, image processing equipment and a computer-readable storage medium.
Background
For an electronic device with image capture and display functionality, display delay may occur because there is some latency between the actual movement of the device and the detection of that movement, and additional time is required to present the captured images on the display. There are many sources of delay, such as sensing delay, processing delay, data smoothing, transmission delay, rendering delay, and frame-rate delay. Without sufficiently low delay, a good user experience is impossible; that is, the brain will not regard the picture seen by the eyes as real. Therefore, a method for reducing the display delay of an electronic device in a motion state is needed.
Disclosure of Invention
In view of this, embodiments of the present application provide an image processing method, an apparatus, a device, and a computer-readable storage medium.
The technical scheme of the embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring attitude data of the electronic equipment in a preset time period, wherein the attitude data comprises IMU deviation and IMU data;
determining predicted attitude data of the electronic equipment at the moment to be displayed according to the IMU deviation and the IMU data in the preset time period;
acquiring an image to be displayed according to the predicted attitude data;
and when the time to be displayed arrives, displaying the image to be displayed.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including:
a first acquisition module, configured to acquire attitude data of the electronic device within a preset time period, wherein the attitude data comprises an IMU deviation and IMU data;
the determining module is used for determining predicted attitude data of the electronic equipment at the moment to be displayed according to the IMU deviation and the IMU data in the preset time period;
the second acquisition module is used for acquiring an image to be displayed according to the predicted attitude data;
and the display module is used for displaying the image to be displayed when the moment to be displayed comes.
In a third aspect, an embodiment of the present application provides an image processing apparatus, including at least: a processor and a storage medium configured to store executable instructions, wherein: the processor is configured to execute stored executable instructions;
the executable instructions are configured to perform the image processing method described above.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium having stored therein computer-executable instructions configured to perform the image processing method described above.
According to the image processing method, apparatus, device and computer-readable storage medium provided by the embodiments of the application, the predicted attitude data of the electronic device at the time to be displayed is determined according to the IMU deviation and the IMU data within the preset time period, and the image to be displayed is then obtained according to the predicted attitude data. In this way, the predicted attitude data for the time to be displayed can be predicted accurately from the IMU deviation and the IMU data, so that an accurate image to be displayed can be drawn from the accurate predicted attitude data; and when the image to be displayed is displayed at the time to be displayed, the error between it and the actual image is small, so the display delay can be greatly reduced.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may describe similar components in different views. Like reference numerals having different letter suffixes may represent different examples of similar components. The drawings illustrate generally, by way of example, but not by way of limitation, various embodiments discussed herein.
Fig. 1A is an alternative schematic flow chart of an image processing method provided in an embodiment of the present application;
fig. 1B is a schematic view of an application scenario of an image processing method provided in an embodiment of the present application;
FIG. 2 is a schematic flow chart of an alternative image processing method provided by the embodiment of the present application;
FIG. 3 is a schematic flow chart of an alternative image processing method provided by the embodiment of the present application;
FIG. 4 is a schematic flow chart diagram illustrating a method for training a predictive model of pose data according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of an alternative image processing method provided by the embodiment of the present application;
FIG. 6 is a schematic flow chart of an alternative image processing method provided by the embodiment of the present application;
FIG. 7 is a schematic flow chart of an alternative image processing method provided by the embodiment of the present application;
FIG. 8 is a schematic flow chart of an alternative image processing method provided by the embodiment of the present application;
fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, specific technical solutions of the present application will be described in further detail below with reference to the accompanying drawings in the embodiments of the present application. The following examples are intended to illustrate the present application but are not intended to limit the scope of the present application.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are used only for convenience of description and have no specific meaning by themselves. Thus, "module", "component" and "unit" may be used interchangeably.
An embodiment of the present application provides an image processing method. The image processing device implementing the method may be any of various types of electronic devices such as Augmented Reality (AR) glasses, a notebook computer, a tablet computer, a desktop computer, or a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device), and may also be implemented as a server. For convenience of description, AR glasses are taken as the example electronic device in the embodiments of the present application.
Since the electronic device (e.g., AR glasses) may move within a certain range, for example in linear motion or rotational motion, there is some delay between the actual movement and the detection of that movement, and additional time is required to present the image of that movement on the display of the electronic device. There are many sources of delay, such as sensing delay, processing delay, data smoothing, transmission delay, rendering delay, and frame-rate delay.
Without sufficiently low delay, a good user experience is impossible; that is, the brain will not regard the virtual picture seen by the eyes as real. The term "real" here does not mean that the eyes cannot tell whether the picture is virtual; it means that, when the head, eyes or body move, the image is not perceived as behaving differently from a real object. The key is that the virtual object must always remain in the correct position as the electronic device moves. If it takes too long from when the user turns his head to when the picture is drawn at the new position, the picture will be offset noticeably, causing perceived jitter or smearing in the display.
In view of the foregoing problems, embodiments of the present application provide an image processing method, which can reduce display delay of an electronic device in a motion state.
Fig. 1A is a schematic flow chart of an alternative image processing method provided in an embodiment of the present application, where the image processing method provided in the embodiment of the present application may be applied to an electronic device, as shown in fig. 1A, and the method includes the following steps:
step S101, acquiring attitude data of the electronic equipment in a preset time period, wherein the attitude data comprises IMU deviation and IMU data.
Here, the electronic device acquires its own posture data within a preset time period. The attitude data comprises an Inertial Measurement Unit (IMU) deviation and IMU data. The IMU deviation comprises the IMU zero deviation, which refers to the data measured by the IMU on the electronic device while the device is in a static state; the IMU data comprises data such as the angular velocity and acceleration of the electronic device in three-dimensional space within the preset time period.
In the embodiment of the application, the IMU is installed on the electronic equipment, and the electronic equipment obtains attitude data of the electronic equipment in a preset time period through IMU measurement.
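For illustration, the IMU zero deviation described above can be estimated by averaging readings taken while the device is known to be static. The following is a minimal Python sketch of that idea; the function name and the NumPy representation are our own assumptions, not part of this disclosure:

```python
import numpy as np

def estimate_imu_zero_bias(static_samples: np.ndarray) -> np.ndarray:
    """Estimate the IMU zero deviation from readings taken while static.

    static_samples has shape (N, 6): N measurements of
    [gyro_x, gyro_y, gyro_z, acc_x, acc_y, acc_z].
    Note: in practice gravity must be subtracted from the accelerometer
    channels before the residual can be treated as zero bias.
    """
    return static_samples.mean(axis=0)
```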
And S102, determining the predicted attitude data of the electronic equipment at the moment to be displayed according to the IMU deviation and the IMU data in the preset time period.
Here, the acquired IMU deviation and IMU data are used as the basic data for prediction, and the predicted attitude data for the time to be displayed is predicted from them. The time to be displayed may be the next display time or any time after the current display time.
The predicted attitude data includes data such as a predicted position and a predicted angle, and is used to represent the position and angle of the electronic device at the time to be displayed.
And step S103, acquiring an image to be displayed according to the predicted attitude data.
Here, according to the predicted attitude data, the electronic device acquires the image to be displayed that corresponds to the predicted position and angle of the device.
In some embodiments, the image capturing unit of the electronic device may capture in advance images corresponding to different positions and angles of the electronic device, or the electronic device may pre-store such images. In this way, once the predicted attitude data is obtained, the image to be displayed can be retrieved directly according to the predicted attitude data.
In other embodiments, the electronic device may instead draw the image to be displayed from the predicted attitude data based on the images acquired within the preset time period. Because the movement speed of the electronic device is limited, the difference between the image at the time to be displayed and the images within the preset time period will not be very large; and because both the movement and the image acquisition are continuous, the image at the time to be displayed can be predicted and drawn from the predicted attitude data of the electronic device at the time to be displayed and the continuous images acquired within the preset time period.
And step S104, when the time to be displayed arrives, displaying the image to be displayed.
Here, since the image to be displayed at the time to be displayed has been predicted, when the time to be displayed arrives, the image to be displayed can be directly displayed without acquiring an image in real time and displaying the acquired image, so that the display delay can be reduced.
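To make the flow of steps S101-S104 concrete, here is a minimal sketch of the display loop in Python. Every method name on `device` and `predictor` is a hypothetical placeholder for the operations described above, not an API defined by this disclosure:

```python
def display_loop(device, predictor, window_s: float = 0.1):
    while device.running:
        # Step S101: attitude data (IMU deviation + IMU data) over a preset period.
        imu_bias, imu_data = device.collect_pose_data(window_s)
        # Step S102: predict the attitude at the upcoming display time.
        t_display = device.next_display_time()
        predicted_pose = predictor.predict(imu_bias, imu_data, t_display)
        # Step S103: obtain (draw or look up) the image for that attitude.
        image = device.render_for_pose(predicted_pose)
        # Step S104: the image is ready before t_display arrives, so it can
        # be shown immediately, which is what reduces the display delay.
        device.wait_until(t_display)
        device.show(image)
```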
Fig. 1B is a schematic view of an application scenario of the image processing method provided in the embodiment of the present application. As shown in fig. 1B, a user wears AR glasses 101 (i.e., the electronic device), and the user's head motion drives the AR glasses 101 from a first posture (left image in fig. 1B) to a second posture (right image in fig. 1B), where the angular direction 102 of the first posture differs from the angular direction 103 of the second posture. To reduce the display delay of the AR glasses 101 on the acquired image, the method provided by the embodiment of the present application may be adopted: first, obtain the pose data of the AR glasses 101 within a preset time period, the pose data including an IMU deviation and IMU data; then, determine the predicted pose data of the AR glasses 101 at the time to be displayed according to the IMU deviation and IMU data within the preset time period; next, acquire the image to be displayed according to the predicted pose data; finally, when the time to be displayed arrives, the AR glasses 101 display the image to be displayed.
According to the image processing method provided by the embodiment of the application, the predicted attitude data of the electronic device at the time to be displayed is determined according to the IMU deviation and the IMU data within the preset time period, and the image to be displayed is then obtained according to the predicted attitude data. In this way, the predicted attitude data for the time to be displayed can be predicted accurately from the IMU deviation and the IMU data, so that an accurate image to be displayed can be drawn from the accurate predicted attitude data; and when the image to be displayed is displayed at the time to be displayed, the error between it and the actual image is small, so the display delay can be greatly reduced.
In some embodiments, the image processing method may also be applied to a server, and fig. 2 is an optional flowchart of the image processing method provided in the embodiments of the present application, and as shown in fig. 2, the method includes the following steps:
step S201, the electronic device collects posture data of itself within a preset time period.
Step S202, the electronic equipment sends the acquired attitude data to a server.
Step S203, the server determines the predicted attitude data of the electronic equipment at the time to be displayed according to the IMU deviation and the IMU data in the preset time period.
And step S204, the server acquires an image to be displayed according to the predicted attitude data.
In step S205, the server sends the image to be displayed to the electronic device.
And step S206, when the time to be displayed arrives, the electronic equipment displays the image to be displayed.
In the embodiment of the application, the electronic device acquires the attitude data and sends the attitude data to the server, so that the server predicts the predicted attitude data, obtains the image to be displayed according to the predicted attitude data, and finally displays the image to be displayed on the electronic device.
Based on fig. 1A, fig. 3 is an optional flowchart of the image processing method provided in the embodiment of the present application. As shown in fig. 3, step S102 may be implemented by the following steps:
step S301, determining IMU error of the electronic equipment according to IMU deviation in a preset time period.
Here, the IMU error of the electronic device may be obtained by calculating the average of the IMU deviations within the preset time period, or by fitting the IMU deviations within the preset time period.
Step S302, an IMU estimation value of the electronic equipment is determined according to IMU data in a preset time period.
Here, a change rule of the angular velocity and the acceleration of the electronic device within a preset time period is determined according to the angular velocity and the acceleration of the electronic device within the preset time period, and then the angular velocity and the acceleration (i.e., IMU estimated value) of the electronic device at the next time or the time to be displayed are determined according to the change rule.
Step S303, determining the predicted attitude data of the electronic equipment at the time to be displayed according to the IMU error and the IMU estimated value.
Here, the position and angle of the electronic device at the time to be displayed are predicted according to the IMU error and the angular velocity and acceleration at the time to be displayed, to obtain the predicted attitude data.
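As a one-axis numerical illustration of steps S301-S303, the following sketch assumes the IMU error is the mean of the bias samples and that the "change rule" is a simple linear fit; the real method may use richer models, such as the cubic splines described later in this disclosure:

```python
import numpy as np

def predict_angle(times, gyro_z, biases, t_pred, theta_now):
    """One-axis sketch of steps S301-S303 (a full 3D version would use quaternions)."""
    # Step S301: IMU error as the average of the zero-deviation samples.
    imu_error = np.mean(biases)
    # Step S302: fit the change rule of the angular velocity over the
    # preset period and evaluate it at the time to be displayed.
    slope, intercept = np.polyfit(times, gyro_z, 1)
    omega_pred = slope * t_pred + intercept          # IMU estimated value
    # Step S303: integrate the bias-corrected rate from the last sample
    # to t_pred to obtain the predicted angle (predicted attitude, 1 axis).
    dt = t_pred - times[-1]
    return theta_now + (omega_pred - imu_error) * dt
```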
In some embodiments, a trained attitude data prediction model may be used to implement the step of determining the predicted attitude data; that is, the attitude data prediction model performs prediction on the IMU zero offset and the IMU data within the preset time period to obtain the predicted attitude data of the electronic device at the display sending time. The attitude data prediction model comprises an error estimation model, a motion model and a motion prediction model, wherein the error estimation model is used for determining the IMU sample error, the motion model is used for determining the IMU sample estimated value, and the motion prediction model predicts, from the IMU sample error and the IMU sample estimated value, the prediction data used for representing the predicted attitude.
The display sending time refers to the time at which the image to be displayed is sent out and displayed. Ideally, the time at which the image is sent and the time at which it is actually displayed coincide, so that no display delay exists and the user experience is improved.
Here, a method for training a posture data prediction model is provided, as shown in fig. 4, which is a schematic flow chart of a method for training a posture data prediction model provided in an embodiment of the present application, and the method includes:
step S401, inputting the deviation sample data in a specific time period into an error estimation model to obtain an IMU sample error.
In the embodiment of the application, the sample data of the training posture data prediction model comprises deviation sample data in a specific time period and IMU sample data in the specific time period.
Here, the error estimation model is used to perform error estimation on the offset sample data within a certain time period to obtain an IMU sample error of the sample data.
Step S402, inputting the IMU sample data in the specific time period into a motion model to obtain an IMU sample estimation value.
The motion model is used for determining a change rule of IMU sample data in a certain time period and estimating to obtain an IMU sample estimation value according to the change rule.
And S403, inputting the IMU sample error and the IMU sample estimation value into a motion prediction model to obtain prediction data.
Here, the motion prediction model is used to integrate the IMU sample error and IMU sample estimate to predict prediction data such as position and angle.
And S404, inputting the prediction data into a preset loss model to obtain a loss result.
Here, the preset loss model is used for comparing the prediction data with preset attitude data to obtain the loss result. The preset loss model comprises a loss function, the distance between the prediction data and the preset attitude data can be calculated through the loss function, namely, the difference between the prediction data and the preset attitude data is calculated, and the difference is determined as the loss result.
Step S405, according to the loss result, correcting the error estimation model, the motion model and the motion prediction model to obtain the attitude data prediction model.
Here, when the difference is greater than a preset loss threshold, the loss result indicates that the error estimation model in the current attitude data prediction model cannot perform error estimation accurately, or that the motion model cannot estimate the IMU sample estimated value accurately, or that the motion prediction model cannot predict prediction data such as position and angle accurately. In that case, the current attitude data prediction model needs to be corrected: the error estimation model, the motion model and the motion prediction model are corrected according to the difference, and this is repeated until the prediction data output by the attitude data prediction model and the preset attitude data satisfy the preset condition, at which point the corresponding model is taken as the trained attitude data prediction model.
According to this training method for the attitude data prediction model, the deviation sample data within a specific time period is input into the error estimation model to obtain the IMU sample error; the IMU sample data within the specific time period is input into the motion model to obtain the IMU sample estimated value; and the IMU sample error and the IMU sample estimated value are input into the motion prediction model to obtain the prediction data. The prediction data is compared with the preset attitude data through the loss function, so that the error estimation model, the motion model and the motion prediction model can be corrected according to the loss result. The resulting attitude data prediction model can accurately determine the predicted attitude data for the time to be displayed, and an accurate image to be displayed can be drawn from the accurate predicted attitude data, so that when the image is displayed at the time to be displayed, the error relative to the actual image is small and the display delay can be greatly reduced.
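A hedged sketch of this training loop, written here with PyTorch; the module architectures, tensor shapes and the use of mean-squared error as the "preset loss model" are illustrative assumptions, since the disclosure does not fix them:

```python
import torch
import torch.nn as nn

class PosePredictionModel(nn.Module):
    """Composite of the three sub-models of steps S401-S403."""
    def __init__(self, error_model: nn.Module, motion_model: nn.Module,
                 motion_prediction_model: nn.Module):
        super().__init__()
        self.error_model = error_model          # bias samples -> IMU sample error
        self.motion_model = motion_model        # IMU samples  -> IMU sample estimate
        self.motion_prediction_model = motion_prediction_model

    def forward(self, bias_samples, imu_samples):
        err = self.error_model(bias_samples)    # step S401
        est = self.motion_model(imu_samples)    # step S402
        return self.motion_prediction_model(torch.cat([err, est], dim=-1))  # step S403

def train_step(model, optimizer, bias_samples, imu_samples, target_pose):
    pred = model(bias_samples, imu_samples)
    loss = nn.functional.mse_loss(pred, target_pose)  # step S404: preset loss model
    optimizer.zero_grad()
    loss.backward()                                   # step S405: correct all three
    optimizer.step()                                  # sub-models jointly
    return loss.item()
```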
In some embodiments, after the predicted attitude data is obtained through prediction, if a difference exists between the predicted attitude data and the actual attitude data at the time to be displayed, the accuracy of the predicted attitude data can be checked according to the actual attitude data. Fig. 5 is an alternative flowchart of an image processing method according to an embodiment of the present application, and as shown in fig. 5, the method includes the following steps:
step S501, acquiring attitude data of the electronic equipment in a preset time period, wherein the attitude data comprises IMU deviation and IMU data.
Step S502, according to the IMU deviation and IMU data in the preset time period, determining the predicted attitude data of the electronic equipment at the time to be displayed.
Step S503, acquiring actual posture data of the electronic device at the time to be displayed.
Here, the actual pose data of the electronic device may be acquired when the time to be displayed arrives.
Step S504, the predicted attitude data is verified according to the actual attitude data, and whether a deviation exists between the predicted attitude data and the actual attitude data is judged.
If the judgment result is yes, executing step S505; if the judgment result is no, step S507 is executed.
And step S505, correcting the predicted attitude data according to the verification result.
Here, since the verification result indicates that there is a deviation between the predicted attitude data and the actual attitude data, a deviation value between the predicted attitude data and the actual attitude data is acquired, and then the predicted attitude data is corrected according to the deviation value.
And step S506, acquiring an image to be displayed according to the corrected predicted attitude data.
Here, the corrected posture data can accurately reflect the position posture of the electronic device, and therefore, the image to be displayed of the electronic device is acquired based on the accurate position posture.
And step S507, when the time to be displayed arrives, displaying the image to be displayed.
According to the image processing method provided by the embodiment of the application, the predicted attitude data is corrected according to the deviation between the actual attitude data and the predicted attitude data, so that data accurately reflecting the position and attitude of the electronic device is obtained, and an accurate image to be displayed is obtained from the corrected predicted attitude data. Since the predicted attitude data deviates only slightly from the actual attitude data, only the slightly deviating part needs to be corrected; the predicted attitude data can therefore be corrected quickly into accurate attitude data, the image to be displayed can be determined quickly, and the display delay is reduced.
In some embodiments, after obtaining the predicted pose data by prediction and obtaining the image to be displayed according to the predicted pose data, the accuracy of the obtained image to be displayed may be checked according to an actual target display image, and fig. 6 is an optional schematic flow chart of the image processing method provided in the embodiment of the present application, and as shown in fig. 6, the method includes the following steps:
step S601, obtaining attitude data of the electronic equipment in a preset time period, wherein the attitude data comprises IMU deviation and IMU data.
Step S602, according to the IMU deviation and IMU data in the preset time period, determining the predicted attitude data of the electronic equipment at the time to be displayed.
And step S603, acquiring an image to be displayed according to the predicted attitude data.
Step S604, acquiring actual posture data of the electronic device at the time to be displayed.
And step S605, acquiring a target display image acquired by the electronic equipment based on the actual attitude data.
Here, the target display image is the image to be actually displayed, and may be the same as or different from the image to be displayed. However, since the movement speed of the electronic device is limited (the movement of a person's head is not particularly fast), the deviation between the predicted posture data and the actual posture data is not particularly large; therefore, the deviation between the image to be displayed obtained from the predicted posture data and the target display image to be actually displayed is also not particularly large, and amounts to a local detail deviation.
Step S606, the predicted attitude data is verified according to the actual attitude data, and whether a pixel difference position exists between the image to be displayed and the target display image is judged.
If the judgment result is yes, executing step S607; if the judgment result is no, step S609 is executed.
And step S607, correcting the image to be displayed according to the checking result.
Here, when the image to be displayed is corrected, the pixel difference position between the image to be displayed and the target display image may be acquired, and then the correction of the image to be displayed may be performed based on the pixel data of the pixel difference position. In some embodiments, step S607 may be implemented by:
step S6071, replacing the pixel data of the image to be displayed at the pixel difference position with the pixel data of the target display image at the pixel difference position, so as to correct the image to be displayed, and obtain a corrected image to be displayed.
In the embodiment of the application, when the image to be displayed is corrected, only the pixel data at the pixel difference position needs to be corrected, and the pixel data at other positions do not need to be corrected because the pixel data and the target display image have no difference.
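A minimal NumPy sketch of step S6071; the tolerance parameter is our own addition for illustration, while the disclosure itself only requires replacing pixel data at the difference positions:

```python
import numpy as np

def correct_image(to_display: np.ndarray, target: np.ndarray,
                  tol: int = 0) -> np.ndarray:
    """Replace only the differing pixels of the predicted image.

    Both arguments are HxWx3 uint8 arrays of the same size.
    """
    # Pixel difference positions: any channel differs by more than tol.
    diff = np.abs(to_display.astype(np.int16) - target.astype(np.int16))
    diff_mask = np.any(diff > tol, axis=-1)
    corrected = to_display.copy()
    corrected[diff_mask] = target[diff_mask]  # step S6071: targeted replacement
    return corrected
```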
And step S608, when the time to be displayed arrives, displaying the corrected image to be displayed.
And step S609, when the time to be displayed arrives, displaying the image to be displayed.
According to the image processing method provided by the embodiment of the application, after the image to be displayed is obtained, it is corrected according to the pixel difference positions between the actual image at the time to be displayed and the image to be displayed. During correction, only the pixel data located at the pixel difference positions needs to be corrected; the remaining pixel data is identical to the target display image and needs no correction. In this way, the image to be displayed is acquired and loaded before the time to be displayed, and only a limited amount of pixel data needs to be corrected when the time to be displayed arrives, so that the loading speed of the image to be displayed can be greatly increased and the display delay reduced.
In some embodiments, a display delay time of the electronic device may also be determined, and the pose data prediction model is optimized according to the determined display delay time, fig. 7 is an optional flowchart of the image processing method provided in the embodiment of the present application, and as shown in fig. 7, the method includes the following steps:
step S701, acquiring attitude data of the electronic equipment in a preset time period, wherein the attitude data comprises IMU deviation and IMU data.
Step S702, according to the IMU deviation and the IMU data in the preset time period, determining the predicted attitude data of the electronic equipment at the time to be displayed.
Step S703, acquiring actual posture data of the electronic device at the time to be displayed.
Step S704, determining a display delay time of the electronic device according to the predicted posture data and the actual posture data.
The display delay time is used for representing the time difference between the actual display time and the time to be displayed.
Step S705, when determining the predicted pose data at any time after the time to be displayed, determining the predicted pose data of the electronic device at any time after the time to be displayed according to the IMU zero offset, the IMU data, and the display delay time.
Here, after the current display delay time is determined, it may be used as reference data in subsequent predictions, and the attitude data prediction model may be further optimized according to it. By cyclically optimizing the model according to the display delay time, an attitude data prediction model with extremely low display delay can eventually be obtained, more accurate predicted attitude data can be predicted, and the display delay is further greatly reduced.
Fig. 8 is an optional flowchart of the image processing method according to the embodiment of the present application, and as shown in fig. 8, step S704 may be implemented by:
step S801, acquiring an actual operating speed of the electronic device.
Here, the actual operation speed includes an actual operation speed and an angular speed. In some embodiments, when the actual operation is not a uniform motion, the acceleration and the angular acceleration of the electronic device may also be acquired.
Step S802, determining the distance between the predicted position and the actual position of the electronic equipment according to the predicted attitude data and the actual attitude data.
Here, the distance between the predicted position and the actual position includes a straight-line distance and an angle difference.
And step S803, determining the display delay time of the electronic equipment according to the distance and the actual running speed.
In the embodiment of the application, according to the distance and the actual movement speed, the display delay time between the actual display time of the electronic equipment and the time to be displayed can be calculated, so that the attitude data prediction model can be further optimized based on the display delay time.
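A small sketch of steps S801-S803 follows. How the linear and angular components are combined is not specified in the disclosure, so taking the larger of the two times is an assumption:

```python
import math

def display_delay_time(pred_pos, actual_pos, pred_angle, actual_angle,
                       linear_speed, angular_speed):
    """Delay estimate: pose gap divided by the actual running speed."""
    straight_line = math.dist(pred_pos, actual_pos)  # distance component (S802)
    angle_gap = abs(pred_angle - actual_angle)       # angle component (S802)
    t_linear = straight_line / linear_speed if linear_speed else 0.0
    t_angular = angle_gap / angular_speed if angular_speed else 0.0
    # Time the device needed, at its measured speeds, to cover the gap
    # between the predicted and the actual pose (S803).
    return max(t_linear, t_angular)
```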
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
The embodiment of the application provides a predictive tracking method (namely, the image processing method above) based on AR glasses. Through accurate motion model calculation, error estimation model calculation and motion prediction model calculation, the tracking method can accurately calculate the position and posture (namely, the predicted posture data) of the picture that needs to be sent for display. At the real display time (namely, the time to be displayed), the actual posture and position are acquired, and the picture can be displayed after only a slight adjustment, which greatly reduces the display delay and makes the display effect fit the actual motion of the person more closely.
In the embodiment of the present application, optical tracking and the IMU are combined by sensor fusion in an intuitive manner: the IMU provides a low-latency state, while optical tracking is used to correct the IMU deviation.
According to the method provided by the embodiment of the application, when the posture of the AR glasses is tracked, the deviation (namely, the IMU error) at the time that needs to be displayed and the IMU estimated value can be calculated in real time by establishing the error estimation model and the motion model, and the predicted attitude data is then determined based on the deviation and the IMU estimated value.
The application program draws the three-dimensional scene picture based on the obtained predicted attitude data. At this stage, the display delay time is calculated accurately, and the attitude, position and other properties of the image to be rendered are calculated through the established motion prediction model, based on the deviation data, acceleration and angular velocity estimated by the models and the attitude data calculated by the algorithm.
Before the image is scanned out, the real data is acquired for verification and appropriate correction, and then transmitted to the display screen.
According to the method, the error estimation model, the motion model and the motion prediction model are established simultaneously; accurate prediction is performed with the error model and the motion model, and accurate predictive tracking is achieved by accurately calculating the delay time. Moreover, by acquiring the data of the nearest-neighbor frame for verification, appropriate adjustment can be performed, which guarantees prediction accuracy and realizes stable output of the displayed picture.
The following are the steps that the image processing method goes through:
in step S11, the pose, that is, the position and orientation in the real world, of the head-mounted display (e.g., AR glasses) is tracked while an error estimation model, a motion model, and a motion prediction model are built.
The error estimation model takes IMU zero-offset data as input, establishes a cubic-spline estimation model from the IMU zero-offset data within a certain time window, and outputs the IMU zero-offset error to be predicted. The motion model takes the raw IMU data as input, establishes a cubic-spline motion-curve model from the raw IMU data within a certain time window, and outputs the IMU data to be predicted. The motion prediction model takes as input the predicted IMU zero-offset error, the predicted IMU data, the current position and posture, and the prediction time step; it computes on these inputs to obtain predicted attitude data such as the predicted position and posture, and outputs that predicted attitude data.
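The cubic-spline extrapolation shared by the error estimation model and the motion model can be sketched with SciPy as follows; this is a sketch under the assumption that plain spline extrapolation over the time window is acceptable, as the disclosure does not name a library:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def extrapolate_channel(times: np.ndarray, values: np.ndarray,
                        t_pred: float) -> float:
    """Fit a cubic spline over a time window and evaluate it at t_pred.

    Error estimation model: values = IMU zero-offset samples,
    output = predicted zero-offset error at the display time.
    Motion model: values = raw IMU samples (e.g. angular velocity),
    output = predicted IMU data at the display time.
    """
    spline = CubicSpline(times, values, extrapolate=True)
    return float(spline(t_pred))
```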
And step S12, drawing a stereoscopic scene picture by the application program based on the obtained attitude data, accurately calculating the display delay time, establishing a motion prediction model, and accurately predicting and tracking.
Step S13: obtain the final accurate pose data through nearest-neighbor frame verification and adjustment; the graphics hardware then transmits the drawn scene picture to the screen of the head-mounted display for scan-out.
In step S14, each pixel of the screen is displayed based on the received pixel data.
Based on the foregoing embodiments, the present application provides an image processing apparatus. The modules included in the apparatus, and the components included in those modules, can be implemented by a processor in an image processing device; of course, they may also be implemented by logic circuitry. In the implementation process, the processor may be a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA), or the like.
Fig. 9 is a schematic diagram of a composition structure of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 9, the image processing apparatus 900 includes:
a first obtaining module 901, configured to obtain pose data of an electronic device in a preset time period, where the pose data includes an IMU deviation and IMU data;
a determining module 902, configured to determine predicted pose data of the electronic device at a time to be displayed according to the IMU deviation and IMU data in the preset time period;
a second obtaining module 903, configured to obtain an image to be displayed according to the predicted pose data;
a display module 904, configured to display the image to be displayed when the time to be displayed arrives.
In some embodiments, the determining module is further configured to: determining an IMU error of the electronic equipment according to the IMU deviation in the preset time period; determining an IMU estimation value of the electronic equipment according to the IMU data in the preset time period; and determining the predicted attitude data of the electronic equipment at the moment to be displayed according to the IMU error and the IMU estimated value.
In some embodiments, the apparatus further comprises: the processing module is used for predicting IMU zero offset and IMU data in the preset time period by adopting a posture data prediction model to obtain predicted posture data of the electronic equipment at the display sending time; the attitude data prediction model is obtained by training through the following steps: inputting the deviation sample data in a specific time period into an error estimation model to obtain an IMU sample error; inputting IMU sample data in the specific time period into a motion model to obtain an IMU sample estimation value; inputting the IMU sample error and the IMU sample estimation value into a motion prediction model to obtain prediction data; inputting the prediction data into a preset loss model to obtain a loss result; and correcting the error estimation model, the motion model and the motion prediction model according to the loss result to obtain the attitude data prediction model.
In some embodiments, the apparatus further comprises: the third acquisition module is used for acquiring actual attitude data of the electronic equipment at the moment to be displayed; the calibration module is used for calibrating the predicted attitude data according to the actual attitude data; and the correction module is used for correcting the image to be displayed or the predicted attitude data according to the verification result.
In some embodiments, the apparatus further comprises: the fourth acquisition module is used for acquiring a target display image acquired by the electronic equipment based on the actual attitude data; the correction module is further configured to: and when the verification result shows that a pixel difference position exists between the image to be displayed and the target display image, adopting the pixel data of the target display image at the pixel difference position to replace the pixel data of the image to be displayed at the pixel difference position so as to realize the correction of the image to be displayed.
In some embodiments, the apparatus further comprises: a second determining module, configured to determine a display delay time of the electronic device according to the predicted attitude data and the actual attitude data; correspondingly, the determining module is further configured to: and when the predicted attitude data of the electronic equipment at any moment after the moment to be displayed is determined, determining the predicted attitude data of the electronic equipment at any moment after the moment to be displayed according to the IMU zero offset, the IMU data and the display delay time.
In some embodiments, the second determination module is further configured to: acquiring the actual running speed of the electronic equipment; determining a distance between a predicted position and an actual position of the electronic device according to the predicted attitude data and the actual attitude data; and determining the display delay time of the electronic equipment according to the distance and the actual running speed.
In the embodiment of the present application, if the image processing method is implemented in the form of a software functional module and sold or used as a standalone product, the image processing method may also be stored in a computer readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a terminal to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Correspondingly, an embodiment of the present application provides an image processing apparatus, fig. 10 is a schematic diagram of a composition structure of the image processing apparatus provided in the embodiment of the present application, and as shown in fig. 10, the image processing apparatus 1000 at least includes: a processor 1001, a communication interface 1002, and a storage medium 1003 configured to store executable instructions, wherein the processor 1001 generally controls the overall operation of the image processing apparatus.
The communication interface 1002 may enable the image processing apparatus to communicate with other terminals or servers via a network.
The storage medium 1003 is configured to store instructions and applications executable by the processor 1001, and may also cache data to be processed or processed by each module in the processor 1001 and the image processing apparatus 1000, and may be implemented by a FLASH Memory (FLASH) or a Random Access Memory (RAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should be understood that, in the various embodiments of the present application, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application. The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
It should be noted that, in this document, the terms "comprises", "comprising" or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, a method or an apparatus including a series of elements includes not only those elements but also other elements not explicitly listed or inherent to such process, method or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described device embodiments are merely illustrative, for example, the division of the unit is only a logical functional division, and there may be other division ways in actual implementation, such as: multiple units or components may be combined, or may be integrated into another system, or some features may be omitted, or not implemented.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; can be located in one place or distributed on a plurality of network units; some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiments of the present application. Those of ordinary skill in the art will understand that: all or part of the steps for realizing the method embodiments can be completed by hardware related to program instructions, the program can be stored in a computer readable storage medium, and the program executes the steps comprising the method embodiments when executed; and the aforementioned storage medium includes: various media that can store program code, such as removable storage devices, read-only memories, magnetic or optical disks, etc. Alternatively, the integrated units described above in the present application may be stored in a computer-readable storage medium if they are implemented in the form of software functional modules and sold or used as independent products. Based on such understanding, the technical solutions of the embodiments of the present application may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a terminal to execute all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a removable storage device, a ROM, a magnetic or optical disk, or other various media that can store program code.
The above description is only for the embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method comprising:
acquiring attitude data of the electronic equipment in a preset time period, wherein the attitude data comprises inertial measurement unit IMU deviation and IMU data;
determining predicted attitude data of the electronic equipment at the moment to be displayed according to the IMU deviation and the IMU data in the preset time period;
acquiring an image to be displayed according to the predicted attitude data;
and when the time to be displayed arrives, displaying the image to be displayed.
2. The method of claim 1, wherein determining predicted pose data of the electronic device at the moment to be displayed according to the IMU deviation and IMU data within the preset time period comprises:
determining an IMU error of the electronic equipment according to the IMU deviation in the preset time period;
determining an IMU estimation value of the electronic equipment according to the IMU data in the preset time period;
and determining the predicted attitude data of the electronic equipment at the moment to be displayed according to the IMU error and the IMU estimated value.
3. The method of claim 1, further comprising:
predicting IMU zero offset and IMU data in the preset time period by adopting a posture data prediction model to obtain predicted posture data of the electronic equipment at the time of display and transmission; the attitude data prediction model is obtained by training through the following steps:
inputting the deviation sample data in a specific time period into an error estimation model to obtain an IMU sample error;
inputting IMU sample data in the specific time period into a motion model to obtain an IMU sample estimation value;
inputting the IMU sample error and the IMU sample estimation value into a motion prediction model to obtain prediction data;
inputting the prediction data into a preset loss model to obtain a loss result;
and correcting the error estimation model, the motion model and the motion prediction model according to the loss result to obtain the attitude data prediction model.
4. The method of claim 1, further comprising:
acquiring actual attitude data of the electronic equipment at the moment to be displayed;
verifying the predicted attitude data according to the actual attitude data;
and correcting the image to be displayed or the predicted attitude data according to the verification result.
5. The method of claim 4, further comprising: acquiring a target display image acquired by the electronic equipment based on the actual attitude data;
the correcting the image to be displayed according to the verification result comprises the following steps:
and when the verification result shows that a pixel difference position exists between the image to be displayed and the target display image, adopting the pixel data of the target display image at the pixel difference position to replace the pixel data of the image to be displayed at the pixel difference position so as to realize the correction of the image to be displayed.
6. The method of claim 4, further comprising:
determining display delay time of the electronic equipment according to the predicted attitude data and the actual attitude data;
correspondingly, when the predicted attitude data at any time after the time to be displayed is determined,
and determining the predicted attitude data of the electronic equipment at any moment after the moment to be displayed according to the IMU zero offset, the IMU data and the display delay time.
7. The method of claim 6, wherein determining the display delay time of the electronic device according to the predicted attitude data and the actual attitude data comprises:
acquiring an actual movement speed of the electronic device;
determining a distance between a predicted position and an actual position of the electronic device according to the predicted attitude data and the actual attitude data; and
determining the display delay time of the electronic device according to the distance and the actual movement speed.
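The delay computation reduces to distance over speed; a direct sketch (the zero-speed guard is an assumption):

```python
import numpy as np

def display_delay_time(predicted_position, actual_position, actual_speed):
    # Positional gap between prediction and reality, divided by the
    # device's actual movement speed.
    gap = np.linalg.norm(np.asarray(actual_position) - np.asarray(predicted_position))
    return gap / actual_speed if actual_speed > 0 else 0.0
```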
8. An image processing apparatus, comprising:
a first acquisition module, configured to acquire attitude data of an electronic device in a preset time period, wherein the attitude data comprises an IMU deviation and IMU data;
a determining module, configured to determine predicted attitude data of the electronic device at a moment to be displayed according to the IMU deviation and the IMU data in the preset time period;
a second acquisition module, configured to acquire an image to be displayed according to the predicted attitude data; and
a display module, configured to display the image to be displayed when the moment to be displayed arrives.
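A structural sketch of how the four modules might be wired; the constructor injection and the callable interfaces are assumptions of this note, not part of the claim:

```python
class ImageProcessingApparatus:
    def __init__(self, first_acquisition, determining, second_acquisition, display):
        self.first_acquisition = first_acquisition    # attitude data over the window
        self.determining = determining                # predicted attitude data
        self.second_acquisition = second_acquisition  # image to be displayed
        self.display = display                        # presentation at the moment

    def run(self, t_display):
        deviation, imu_data = self.first_acquisition()
        pose = self.determining(deviation, imu_data, t_display)
        image = self.second_acquisition(pose)
        self.display(image, t_display)
```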
9. An image processing device, comprising at least a processor and a storage medium storing executable instructions, wherein the processor is configured to execute the stored executable instructions, and the executable instructions, when executed, cause the image processing method of any one of claims 1 to 7 to be performed.
10. A computer-readable storage medium storing computer-executable instructions which, when executed, cause the image processing method of any one of claims 1 to 7 to be performed.
CN202010082758.XA — Image processing method, device, equipment and computer readable storage medium

Application: CN202010082758.XA, filed 2020-02-07 (priority date 2020-02-07)
Publication: CN111352506A, published 2020-06-30; legal status: pending
Family ID: 71195691
Country: CN

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103674009A (en) * 2012-09-04 2014-03-26 百度在线网络技术(北京)有限公司 Method and device for obtaining movement locus of mobile terminal and mobile terminal
CN103791905A (en) * 2012-10-30 2014-05-14 雅马哈株式会社 Attitude estimation method and apparatus
CN105593924A (en) * 2013-12-25 2016-05-18 索尼公司 Image processing device, image processing method, computer program, and image display system
CN106095113A (en) * 2016-06-27 2016-11-09 User attitude measurement and virtual reality follow-up method based on nine-axis sensor fusion

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112183521A (en) * 2020-09-30 2021-01-05 中国银行股份有限公司 Intelligent input method, system, equipment and readable storage medium
CN112183521B (en) * 2020-09-30 2023-09-12 中国银行股份有限公司 Intelligent input method, system, equipment and readable storage medium
CN112486318A (en) * 2020-11-26 2021-03-12 北京字跳网络技术有限公司 Image display method, image display device, readable medium and electronic equipment
CN112486318B (en) * 2020-11-26 2024-07-26 北京字跳网络技术有限公司 Image display method and device, readable medium and electronic equipment
CN117294832A (en) * 2023-11-22 2023-12-26 湖北星纪魅族集团有限公司 Data processing method, device, electronic equipment and computer readable storage medium
CN117294832B (en) * 2023-11-22 2024-03-26 湖北星纪魅族集团有限公司 Data processing method, device, electronic equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
KR102208329B1 (en) Image processing device, image processing method, computer program, and image display system
CN114543797B (en) Pose prediction method and device, equipment and medium
EP2933605A1 (en) A device orientation correction method for panorama images
CN109743626B (en) Image display method, image processing method and related equipment
KR20200065033A (en) Key point detection method and apparatus, electronic device and storage medium
US20170332018A1 (en) Real-time video stabilization for mobile devices based on on-board motion sensing
KR20190098003A (en) Method for estimating pose of device and thereof
CN106782260B (en) Display method and device for virtual reality motion scene
CN111352506A (en) Image processing method, device, equipment and computer readable storage medium
US10545215B2 (en) 4D camera tracking and optical stabilization
KR101699202B1 (en) Method and system for recommending optimum position of photographing
EP3316117A1 (en) Controlling content displayed in a display
CN105706036A (en) Tilting to scroll
US12198283B2 (en) Smooth object correction for augmented reality devices
JP2022531186A (en) Information processing methods, devices, electronic devices, storage media and programs
CN108804161B (en) Application initialization method, device, terminal and storage medium
US11030820B1 (en) Systems and methods for surface detection
CN111866493A (en) Image correction method, device and equipment based on head-mounted display equipment
KR20180061956A (en) Method and apparatus for estimating eye location
KR102753215B1 (en) VR motion prediction device
CN119182899B (en) Panoramic video display control method, device, computer equipment, and storage medium
KR102833425B1 (en) VR motion prediction device
JP6437811B2 (en) Display device and display method
US20240323331A1 (en) Image processing apparatus, image processing method, and storage medium
CN118170249A (en) Eye movement tracking system and corresponding method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200630)