CN115297313A - Micro-display dynamic compensation method and system
- Publication number
- CN115297313A (Application CN202211224253.8A)
- Authority
- CN
- China
- Prior art keywords
- frame
- module
- marking
- feature extraction
- target
- Prior art date: 2022-10-09
- Legal status: Granted (assumed; not a legal conclusion)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/139—Format conversion, e.g. of frame-rate or size
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/189—Recording image signals; Reproducing recorded image signals
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Television Systems (AREA)
Abstract
The invention relates to a micro-display dynamic compensation method and system. The method comprises: alternately caching video source data in an odd frame buffer and an even frame buffer; performing feature extraction and marking on the pixel data in each frame buffer with a configurable feature extraction marking module to obtain feature sets; locking partial position information of the dynamic target by comparing odd-frame feature set 1 with even-frame feature set 2, marking the complete target by edge detection, and updating the feature extraction marking window; indexing the corresponding pixels in the blanking area of a frame according to the dynamic target's position information and storing them in a dynamic target buffer; and, after parsing, having the output frame control module output frames in the order previous frame, inserted fused frame or duplicated frame, current frame, and output the pixel values of the physical pixels for display. The invention weakens the smearing that appears when a micro-display system shows a video source containing fast-moving targets, thereby improving the displayed clarity and smoothness of the video source and the viewing experience.
Description
Technical Field
The present application relates to the field of micro-display for AR or VR, and in particular to a micro-display dynamic compensation method and system.
Background
Micro-display is one of the indispensable key technologies in the AR, VR and MR fields; the clarity of the displayed picture, its fineness and smoothness, and whether more motion detail can be captured greatly influence the viewing experience. Given the current shortage of high-frame-rate video source content, a technical means of improving the viewing experience is needed: fight scenes that were originally dizzying and hard to follow become smooth and clear; when watching sports programs such as football or racing, picture fluency improves qualitatively and judder is weakened; high-frame-rate game pictures capture more motion detail; and so on.
However, the prior art is not well suited to micro-display. Compared with conventional displays, micro-displays offer advantages such as small size and high resolution, but place greater weight on performance, battery life and heat dissipation; balancing viewing experience, battery life, heat dissipation, chip area and the like is the key.
Consequently, current micro-display chips do not support optimized display processing for the discontinuity, blur and similar phenomena that occur when a high-speed moving object is present in a static scene, and the user experience is poor.
Disclosure of Invention
Therefore, it is necessary to provide a method and system capable of improving the display of the discontinuity, blur and similar phenomena that occur when a high-speed moving object is present in a static scene, so that picture transitions are smooth, clear and fine.
A micro-display dynamic compensation method comprises the following steps:
S1, data extraction: perform format conversion on the input video source data according to the display mode to obtain a converted data stream, and alternately cache the data stream in an odd frame buffer and an even frame buffer (see the buffering sketch after this list);
S2, data marking: process the pixel data in the odd and even frame buffers with the extraction and marking mode selected by the configurable feature extraction marking module, obtaining feature set 1 and feature set 2 respectively;
S3, data completion and update of the feature extraction marking window in the feature extraction marking module: compare feature set 1 with feature set 2 to obtain initial information of the dynamic target, then perform edge detection to obtain the position information of the complete target, and update the feature extraction marking window;
S4, pixel value indexing: in the blanking area of one frame, index the corresponding pixel values according to the dynamic target's position information and cache them in the dynamic target buffer;
S5, pixel display: parse the data in the dynamic target buffer of S4 and output the pixel values of the physical pixels for display.
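As a minimal illustration of the buffering in S1 (a sketch under assumed names — `odd_buffer`, `even_buffer` and `cache_frame` do not come from the patent), the input frame controller can be modeled as alternating incoming frames between two single-frame caches:

```python
from collections import deque

# Hypothetical stand-ins for the chip's odd/even frame caches,
# each holding the most recent frame of its parity.
odd_buffer = deque(maxlen=1)
even_buffer = deque(maxlen=1)

def cache_frame(frame, frame_index):
    """S1: alternately cache converted frames into the odd or even frame
    buffer; frame_index is 1-based, so frames 1, 3, 5, ... land in odd."""
    if frame_index % 2 == 1:
        odd_buffer.append(frame)
    else:
        even_buffer.append(frame)
```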
Further, in S3, the feature extraction marking window is automatically updated according to the size of the dynamic target after initial setting, and a video source with m × n resolution extracts feature values with an i × j feature window, where i ≤ m, j ≤ n, and i × j ≤ m × n.
Further, in S3, the feature extraction marking module obtains the position information of the complete target by feature set comparison and/or edge detection;
wherein the content of the feature mark includes one or more of the dynamic target's edge position information, moving speed and moving direction;
the moving speed is the number of pixels moved per frame, computed from the frame rate, so as to determine the moving target's position in the inserted frame;
and the cost of the target search may be optimized according to the scene.
Further, in S5, the output frame control module selects the pattern of the output frame according to the fused-frame insertion criterion.
Further, the pattern of the output frame is the sequence: previous frame, inserted fused frame or duplicated frame, current frame.
Further, the output frame rate of the video source data stream is greater than the input frame rate.
Further, the output frame rate is 120Hz, 360Hz or 720Hz.
Further, the display format corresponding to the converted data stream is either spatial color or time-sequential color.
A micro-display dynamic compensation system comprises:
a register configuration module, for configuring the size of the feature extraction marking window and the output frame control mode;
a data format conversion module, for performing format conversion on the input video source data stream according to the display mode to obtain a converted data stream;
a feature extraction marking module, for finding the dynamic target and marking its position;
an input frame controller, for sequentially caching video source data in the odd and even frame buffers, obtaining the dynamic target's position information by comparing odd-frame mark set 1 with even-frame mark set 2, looking up the corresponding pixels in the blanking area of one frame according to that position information, and storing them in the dynamic target buffer;
an analysis module, for parsing the processed data stream and outputting the pixel values of the physical pixels;
an output frame control module, for controlling the pattern of the output frame and outputting the pixel values of the physical pixels;
and a micro display chip module, for displaying according to the pixel values of the physical pixels.
According to another aspect of the present invention, a micro display chip is provided, comprising a register configuration module, a data format conversion module, a feature extraction marking module, a storage module, an analysis module and an output frame control module; when executed, the micro display chip implements the steps of the micro-display dynamic compensation method.
The beneficial effects of the invention are as follows. Video source data is cached alternately in an odd frame buffer and an even frame buffer; a configurable feature extraction marking module processes the pixel data in each frame buffer to obtain feature sets; partial position information of the dynamic target is locked by comparing odd-frame feature set 1 with even-frame feature set 2, the complete target is marked by edge detection to obtain its full information, and the feature marking window is updated; the corresponding pixels are indexed in the blanking area of a frame according to the dynamic target's position information and stored in a buffer; after parsing, the output frame control module outputs frames in the order previous frame, inserted fused frame or duplicated frame, current frame, and outputs the pixel values of the physical pixels for display. This weakens the smearing that appears when the micro-display system shows a video source with fast-moving targets, improving the displayed clarity and smoothness of the video source and the viewing experience. It is particularly suitable for the micro-display AR or VR field, where scenes with high-speed moving targets otherwise appear discontinuous and blurred.
Drawings
FIG. 1 is a schematic flow diagram of the process of the present invention;
FIG. 2 is a diagram of feature extraction marker window sliding for a 1920 × 1080 resolution frame video source, according to one embodiment;
FIG. 3 is a diagram illustrating feature comparison, updating a feature window, and obtaining complete target information according to an embodiment;
FIG. 4 is a diagram illustrating search and display of a lateral trajectory target in one embodiment;
FIG. 5 is a diagram illustrating the searching and displaying of a longitudinal trajectory target in one embodiment;
FIG. 6 is a diagram illustrating the searching and displaying of an object with a diagonal trajectory according to an embodiment;
FIG. 7 is a diagram illustrating control of target pixel value caching and frame output in one embodiment;
FIG. 8 is a block diagram illustrating the architecture of a micro-display dynamic compensation system in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
As shown in figs. 1 to 8, the micro-display dynamic compensation method of the present invention can be placed in a driving module and implemented interactively with a micro display chip, or integrated into the micro display chip as part of it, so that the chip itself performs the display method. The micro-display dynamic compensation method can be used for LCoS, OLED, Micro-LED, Q-LED and the like.
In one embodiment, as shown in fig. 1, there is provided a micro-display dynamic compensation method, comprising the steps of:
and S1, performing format conversion on the input video source data stream according to a display mode to obtain a converted data stream.
And the display format corresponding to the converted data stream is spatial color or time sequence color. The format conversion includes conversion of signal types, such as HDMI to RGB, and conversion of the input video source data stream to a desired data format.
Then, the data stream is alternately buffered in the odd frame buffer and the even frame buffer. The odd frame buffer area is a first frame, a third frame and a fifth frame which are input; the even frame buffer area is the input second frame, fourth frame and sixth frame;
and S2, performing operation processing on the pixel data in the frame buffer by adopting an extraction marking mode selected by the configurable feature extraction marking module to obtain a marking feature set.
The resolution of the video source to be subjected to feature processing is represented as m × n, and feature values can be extracted by configuring a feature window of i × j according to actual needs, wherein m, n, i and j are positive integers, i is not less than m, j is not less than n, and i × j is not less than m × n.
Feature extraction algorithms include, but are not limited to, the following: binarize the original image, then compute a feature value within each feature window.
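The publication does not reproduce the formula in text form, so the following is only a plausible stand-in consistent with the description (binarize first, then operate within the window): a per-window count of binarized pixels. The function name, threshold and window size are illustrative assumptions:

```python
import numpy as np

def window_feature(image, top, left, win=64, bin_thresh=127):
    """Illustrative feature value (an assumption, not the patented formula):
    binarize the original image, then count the 1-valued pixels inside
    one win x win feature window."""
    patch = image[top:top + win, left:left + win]
    return int((patch > bin_thresh).sum())
```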
the characteristic extraction marking module is used for marking the content of the dynamic target, including but not limited to the moving speed and the moving direction of the dynamic target; wherein the velocity is the number of pixels moved calculated according to the frame rate, so as to determine the position of the moving object in the interpolated frame;
S3: compare odd-frame feature set 1 with even-frame feature set 2, filtering invalid targets by the difference between feature window 1 and feature window 2 to obtain partial position information of the valid dynamic target; then obtain the position information of the complete target with a configurable edge detection algorithm, and feed the positions of the four edges (top, bottom, left, right) back to the feature extraction marking module for updating the feature window.
The target's trajectory may be lateral, longitudinal, diagonal or irregular. The matched feature window is searched for within a small area; obviously, the larger the search area, the higher the computation cost, so for the first three regular-trajectory scenarios the computation needed to locate the target can be reduced, easing the bottleneck between computation amount, computation speed and algorithm complexity.
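A sketch of how restricting the search to a regular trajectory reduces the computation (the match metric and names are assumptions): the three regular tracks examine a one-dimensional strip of candidate windows, while irregular motion falls back to the full two-dimensional neighborhood:

```python
def search_matched_window(feats_prev, feats_cur, pos, trajectory, radius=2):
    """Find the current-frame window whose feature best matches
    feats_prev[pos], searching only along the expected trajectory."""
    r, c = pos
    if trajectory == "lateral":
        cand = [(r, c + d) for d in range(-radius, radius + 1)]
    elif trajectory == "longitudinal":
        cand = [(r + d, c) for d in range(-radius, radius + 1)]
    elif trajectory == "diagonal":
        cand = [(r + d, c + d) for d in range(-radius, radius + 1)]
    else:  # irregular trajectory: full (2*radius + 1)^2 neighborhood
        cand = [(r + dy, c + dx)
                for dy in range(-radius, radius + 1)
                for dx in range(-radius, radius + 1)]
    cand = [p for p in cand if p in feats_cur]
    if not cand:
        return None
    return min(cand, key=lambda p: abs(feats_cur[p] - feats_prev[pos]))
```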
S4: in the blanking area of one frame, index the corresponding pixels according to the position information of the complete dynamic target and store them in the buffer.
S5: after data parsing, the output frame control module outputs the data stream, and the pixel values of the physical pixels are output for display in the order previous frame, inserted fused frame or duplicated frame, current frame.
The criterion for inserting a fused frame or a duplicated frame is as follows:
for 1920 × 1080 resolution and an output frame rate of 120 Hz, 1920/120 = 16, so a movement of more than 16 pixels triggers insertion of a fused frame;
for 1920 × 1080 resolution and an output frame rate of 240 Hz, 1920/240 = 8, so a movement of more than 8 pixels triggers insertion of a fused frame;
for 1280 × 720 resolution and an output frame rate of 120 Hz, 1280/120 ≈ 10.6, so a movement of more than 11 pixels triggers insertion of a fused frame;
when the scene is switched, or the content of the two frames changes greatly, the previous frame is directly duplicated and output (see the sketch below).
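The three numeric cases share one rule: the movement threshold is the horizontal resolution divided by the output frame rate, rounded up to whole pixels. A minimal sketch (the function name is assumed):

```python
import math

def should_insert_fused_frame(displacement_px, width, out_rate_hz):
    """Fused-frame insertion criterion: insert when the per-frame movement
    exceeds ceil(width / output frame rate) pixels."""
    threshold = math.ceil(width / out_rate_hz)  # 1920/120 -> 16, 1280/120 -> 11
    return displacement_px > threshold

# Examples matching the text:
# should_insert_fused_frame(17, 1920, 120) -> True  (17 > 16)
# should_insert_fused_frame(8, 1920, 240)  -> False (8 is not more than 8)
```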
The method improves the display of the discontinuity, blur and similar phenomena that occur when a high-speed moving object is present in a scene, making the picture smooth, clear and fine, while balancing computing power, power consumption, micro display chip area and other factors.
In one embodiment, the micro-display dynamic compensation method is described with a 1920 × 1080 video source as an example. A 64 × 64 feature extraction marking window is selected initially. Binarization is performed first, and the initial window is used to obtain the feature mark sets of the first (odd) frame and the second (even) frame. The two sets are compared in order to screen out invalid targets and preliminarily lock partial information of the dynamic target; edge detection then yields the complete dynamic target information, and the feature marking window is updated for target prediction in subsequent frames. The complete dynamic target information is fused with the original image to obtain the inserted fused frame, which the output frame control module outputs for display in sequence. The specific contents are as follows:
As shown in fig. 2, which is a schematic diagram of sliding the feature marking window over one input frame of a 1920 × 1080 video source: with the top-left corner as the origin, the window slides and marks in horizontal-then-vertical order, and the information of each position — feature value, position index and so on — is recorded in a RAM 255 bits wide with a depth of 512. After the first input frame is cached in the odd frame buffer, the feature extraction marking module simultaneously starts extracting and marking that frame, recorded as feature set 1; likewise, when the second input frame arrives it is cached in the even frame buffer and marked as feature set 2. After the information of the first frame has been output, the cycle continues, matching input to output. Note that the initial marking window size can be chosen from several modes, including but not limited to 64 × 64 and 128 × 128; likewise there are various video source resolutions, including but not limited to 1920 × 1080 and 1280 × 720.
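A sketch of this sliding-and-marking pass, assuming non-overlapping 64-pixel steps and the illustrative binarized-sum feature from above; for a 1920 × 1080 frame this yields 30 × 16 = 480 records, which fits the stated RAM depth of 512:

```python
import numpy as np

def mark_frame(binary_frame, win=64):
    """Slide the feature marking window from the top-left origin in
    horizontal-then-vertical order, recording one (position index,
    feature value) record per window position."""
    h, w = binary_frame.shape
    records = []
    for top in range(0, h - win + 1, win):       # rows: vertical order
        for left in range(0, w - win + 1, win):  # columns: horizontal order
            feat = int(binary_frame[top:top + win, left:left + win].sum())
            records.append(((top // win, left // win), feat))
    return records

# e.g. feature set 1 from a binarized odd frame:
feature_set_1 = dict(mark_frame(np.random.randint(0, 2, (1080, 1920))))
```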
As shown in fig. 3, feature set 1 of the first frame is compared with feature set 2 of the second frame entry by entry. The comparison takes the difference against a threshold Q: values above Q are fed back upward, values below Q are discarded, filtering out invalid targets. When, for example, the feature extraction windows at (5, 3) and (5, 4) are selected, their position information is fed back to the feature extraction marking module, which starts the edge detection function, marks edges within a range of two windows above, below, left and right of the position, identifies the complete target edges, obtains the four limit values (top, bottom, left, right), and corrects the feature window size for subsequent feature extraction.
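A sketch of the entry-by-entry comparison (the value of the threshold Q is configuration-dependent; the name `filter_dynamic_windows` is assumed):

```python
def filter_dynamic_windows(feature_set_1, feature_set_2, Q):
    """Difference feature sets 1 and 2 entry by entry: positions whose
    absolute difference exceeds Q are fed back as candidate dynamic-target
    windows; the rest are discarded as invalid."""
    return [pos for pos in feature_set_1
            if pos in feature_set_2
            and abs(feature_set_1[pos] - feature_set_2[pos]) > Q]
```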
the operation formula of the edge detection is as follows:for the detection in the x-direction,for the detection in the y-direction,as a result of the detection, A is an image pixel
Position of the previous frame's target: the left, right, top and bottom edges are a, b, c and d; the corresponding edges of the current frame's target are a', b', c' and d'. The displacement between the two adjacent frames is judged from the difference of corresponding edges (e.g. horizontal displacement a' − a, vertical displacement c' − c).
If the number of inserted fused frames is n, the fusion position required for the target in the k-th inserted frame follows by linear interpolation between the two measured positions; for the left edge, for example, a + k·(a' − a)/(n + 1), and likewise for the other three edges.
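Worked example: with previous left edge a = 100, current left edge a' = 116 and n = 1 inserted frame, the fused frame places the left edge at 100 + 1 × (116 − 100)/2 = 108. A sketch of the interpolation for one edge (the other three edges are handled identically):

```python
def fused_edge_positions(prev_edge, cur_edge, n):
    """Positions of one target edge in the k-th of n inserted fused frames,
    by linear interpolation: prev + k * (cur - prev) / (n + 1)."""
    step = (cur_edge - prev_edge) / (n + 1)
    return [prev_edge + k * step for k in range(1, n + 1)]

print(fused_edge_positions(100, 116, 1))  # [108.0]
print(fused_edge_positions(100, 116, 3))  # [104.0, 108.0, 112.0]
```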
As shown in fig. 7, the corresponding pixels are looked up in the blanking area of one frame according to the dynamic target's position information and stored in the buffer;
Three storage areas are opened up in the chip: the odd frame buffer (input frames 1, 3, 5, ...) and the even frame buffer (input frames 2, 4, 6, ...), each updated once per frame, and the dynamic target storage area, updated once every 2 frames;
After data parsing, the output frame control module outputs the data stream: if the fused-frame insertion criterion is met, frames are output in the order previous frame, inserted fused frame, current frame, and the pixel values of the physical pixels are output for display; if the criterion is not met, or the feature set comparison result exceeds the maximum preset value M, the previous frame is duplicated and frames are output in the order previous frame, duplicated frame, current frame, with the pixel values of the physical pixels output for display;
the method comprises the steps of obtaining a fused image A without an object by fusing images with objects subtracted from adjacent odd frame images and adjacent even frame images together, (performing pixel addition averaging), and then fusing the object at a proper position to obtain a fused image B with a moving object, namely a frame fused image.
In one embodiment, as shown in FIG. 8, a micro-display dynamic compensation system is provided, which includes: a register configuration module, a data format conversion module, a storage module, a feature extraction marking module, an analysis module, an output frame control module and a micro display chip module.
The register configuration module is used for configuring the size of a feature extraction marking window, a feature extraction marking method, a dynamic target cache region and an output frame control mode;
the data format conversion module is used for carrying out format conversion on the input video source data stream according to the display mode to obtain a converted data stream;
the characteristic extraction marking module is used for locking the dynamic target and marking the position information;
the storage module is used for storing odd frames, even frames and dynamic targets;
the analysis module is used for analyzing the processed data stream and outputting the pixel values of the physical pixel points to the output frame control module;
the output frame control module is used for controlling the style of an output frame and outputting the pixel value of a physical pixel point;
and the micro display chip module is used for displaying according to the pixel values of the physical pixel points.
The driving module can support, but is not limited to, RGB888, MIPI, HDMI and VGA interfaces; the data conversion module converts these signals into the formats needed internally, and after computation and selection the required data is output to the following modules. The driving module and the micro display chip module support, but are not limited to, video sources with 1920 × 1080 and 1280 × 720 resolutions.
The modules in the above micro-display dynamic compensation system can be implemented wholly or partially in software, hardware or a combination thereof. The modules can be embedded in hardware form in, or independent of, a processor of the computer device, or stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to each module.
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations are described; nevertheless, any combination of these technical features that contains no contradiction should be considered within the scope of this specification.
The above embodiment expresses only one implementation of the present application, and although its description is relatively specific and detailed, it should not be construed as limiting the scope of the invention. A person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. The scope of protection of this patent shall therefore be subject to the appended claims.
Claims (10)
1. A micro-display dynamic compensation method, the method comprising:
s1, data extraction; carrying out format conversion on input video source data according to a display mode to obtain a converted data stream, and alternately caching the data stream into an odd frame cache region and an even frame cache region;
s2, marking data; performing operation processing on pixel data in an odd frame buffer area and an even frame buffer area by adopting an extraction mark mode selected by a configurable feature extraction mark module to respectively obtain a feature set 1 and a feature set 2;
s3, completing data and updating a feature extraction marking window in the feature extraction marking module; comparing the feature set 1 with the feature set 2 to obtain initial information of a dynamic target, then carrying out edge detection to obtain position information of a complete target, and updating a feature extraction marking window in a feature extraction marking module;
s4, indexing pixel values; in a blanking area of a frame, according to the position information of the dynamic target, indexing a corresponding pixel value and caching the pixel value into a dynamic target cache area;
s5, displaying pixels; and analyzing the data of the dynamic target cache region in the S4, and outputting the pixel value of the physical pixel point for displaying.
2. The method of claim 1, wherein in S3, the feature extraction marking window is automatically updated according to the size of the dynamic target after initial setting, and a video source with m × n resolution extracts feature values with an i × j feature window, where i is less than or equal to m, j is less than or equal to n, and i × j is less than or equal to m × n.
3. The method of claim 2, wherein in S3, the feature extraction labeling module obtains the position information of the complete target by: feature set alignment and/or edge detection;
wherein the content of the feature tag comprises: one or more of dynamic target edge position information, moving speed and moving direction;
the moving speed is the number of pixels of the movement calculated according to the frame rate so as to determine the position of the moving target in the inserted frame;
the cost of the target search may be optimized according to the scene.
4. The method according to claim 1, wherein in S5, the output frame control module judges according to the fused-frame insertion criterion and selects the pattern of the output frame; the criterion is as follows:
for 1920 × 1080 resolution and an output frame rate of 120 Hz, 1920/120 = 16, so a movement of more than 16 pixels triggers insertion of a fused frame;
for 1920 × 1080 resolution and an output frame rate of 240 Hz, 1920/240 = 8, so a movement of more than 8 pixels triggers insertion of a fused frame;
for 1280 × 720 resolution and an output frame rate of 120 Hz, 1280/120 ≈ 10.6, so a movement of more than 11 pixels triggers insertion of a fused frame;
when the scene is switched, the previous frame is directly copied for output.
5. The method of claim 4, wherein the pattern of the output frame is the sequence: previous frame, inserted fused frame or duplicated frame, current frame.
6. The method of claim 1, wherein an output frame rate of the video source data stream is greater than an input frame rate.
7. The method of claim 6, wherein the output frame rate is 120Hz or 360Hz or 720Hz.
8. The method of claim 1, wherein the display format corresponding to the converted data stream is either spatial color or time-sequential color.
9. A micro-display dynamic compensation system, wherein the system comprises: a register configuration module, a data format conversion module, a feature extraction marking module, a storage module, an analysis module, an output frame control module and a micro display chip module;
the register configuration module is used for configuring the size of a feature extraction marking window, a feature extraction marking method, a dynamic target cache region and an output frame control mode;
the data format conversion module is used for carrying out format conversion on an input video source data stream according to a display mode to obtain a converted data stream;
the characteristic extraction marking module is used for locking the dynamic target and marking position information;
the storage module is used for storing odd frames, even frames and dynamic targets;
the analysis module is used for analyzing the processed data stream and outputting the pixel values of the physical pixel points to the output frame control module;
the output frame control module is used for controlling the style of an output frame and outputting the pixel value of a physical pixel point;
and the micro display chip module is used for displaying according to the pixel values of the physical pixel points.
10. A micro-display chip, comprising a register configuration module, a data format conversion module, a feature extraction marking module, a storage module, an analysis module, and an output frame control module, wherein the micro-display chip, when executed, implements the steps of the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211224253.8A CN115297313B (en) | 2022-10-09 | 2022-10-09 | Micro display dynamic compensation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115297313A (en) | 2022-11-04 |
CN115297313B (en) | 2023-04-25 |
Family
ID=83834685
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211224253.8A Active CN115297313B (en) | 2022-10-09 | 2022-10-09 | Micro display dynamic compensation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115297313B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104243991A (en) * | 2014-10-11 | 2014-12-24 | 中国矿业大学 | Side information generation method and device |
US20160379055A1 (en) * | 2015-06-25 | 2016-12-29 | Kodak Alaris Inc. | Graph-based framework for video object segmentation and extraction in feature space |
CN105760826A (en) * | 2016-02-03 | 2016-07-13 | 歌尔声学股份有限公司 | Face tracking method and device and intelligent terminal. |
CN112995678A (en) * | 2021-02-22 | 2021-06-18 | 深圳创维-Rgb电子有限公司 | Video motion compensation method and device and computer equipment |
CN113837136A (en) * | 2021-09-29 | 2021-12-24 | 深圳市慧鲤科技有限公司 | Video frame insertion method and device, electronic equipment and storage medium |
CN114598876A (en) * | 2022-03-03 | 2022-06-07 | 深圳创维-Rgb电子有限公司 | Motion compensation method and device for dynamic image, terminal device and storage medium |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118538186A (en) * | 2024-07-24 | 2024-08-23 | 南京芯视元电子有限公司 | Display system, display device and display method |
Also Published As
Publication number | Publication date |
---|---|
CN115297313B (en) | 2023-04-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9148622B2 (en) | Halo reduction in frame-rate-conversion using hybrid bi-directional motion vectors for occlusion/disocclusion detection | |
JP4620163B2 (en) | Still subtitle detection apparatus, video device for displaying image including still subtitle, and method for processing image including still subtitle | |
JP4438795B2 (en) | Video conversion device, video display device, and video conversion method | |
KR950011822B1 (en) | Apparatus and method for generating stereoscopic images | |
US8045825B2 (en) | Image processing apparatus and method for composition of real space images and virtual space images | |
US8963951B2 (en) | Image processing apparatus, moving-image playing apparatus, and processing method and program therefor to allow browsing of a sequence of images | |
CN107580186B (en) | A dual-camera panoramic video stitching method based on seam space-time optimization | |
JP2004080252A (en) | Video display unit and its method | |
CN102577365B (en) | Video display device | |
JP2014077993A (en) | Display device | |
JP3731952B2 (en) | Information generation apparatus for moving image search | |
CN115297313A (en) | Micro-display dynamic compensation method and system | |
CN107666560B (en) | Video de-interlacing method and device | |
KR20060135667A (en) | Image format conversion | |
JP4951487B2 (en) | Video processing apparatus and video display apparatus using the same | |
JP5188272B2 (en) | Video processing apparatus and video display apparatus | |
US20100214488A1 (en) | Image signal processing device | |
JP4433719B2 (en) | Image display apparatus burn-in prevention apparatus and image display apparatus burn-in prevention method | |
CN113727176B (en) | Video motion subtitle detection method | |
JP4288909B2 (en) | Character information detecting apparatus, character information detecting method, program, and recording medium | |
JP4546810B2 (en) | Trajectory-added video generation apparatus and trajectory-added video generation program | |
CN112333401B (en) | Method, device, system, medium and equipment for detecting motion subtitle area | |
CN107316314A (en) | A kind of dynamic background extracting method | |
JP2007142507A (en) | Imaging apparatus | |
CN112819706B (en) | Method for determining identification frame of superimposed display, readable storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |