CN101827264A - Hierarchical self-adaptive video frame sampling method - Google Patents
Abstract
The invention discloses a hierarchical self-adaptive video frame sampling method. Because camera motion during video capture is non-uniform, frames cannot be sampled at a fixed interval. The method comprises the following steps: establishing a mathematical model of the relation between the inter-frame overlap rate and the frame interval; estimating the overlap rate between two video frames by image registration; adaptively adjusting the sampling interval with the model; extracting and testing only a small fraction of the video frames; and updating the model according to the test result. The model tracks changes in camera motion to determine the positions of sampled frames, reducing the number of frames that must be examined without frame-by-frame detection. When the desired overlap rate is small, a hierarchical sampling scheme reduces the overlap rate layer by layer. The method greatly reduces the computational cost of sampling while preserving all scene information in the video.
Description
Technical field
The present invention relates to the fields of digital image processing and computer vision, and in particular to applications that require down-sampling video frames.
Background technology
With the development of hardware devices, video has become as easy to acquire as images. Continuous video contains more information and offers more choice of frames. However, in much video data the overlap rate between the identical content of adjacent frames is as high as 90%, i.e. highly redundant. Processing every frame greatly increases the amount of computation and lowers efficiency. In applications such as image stitching and 3D scene reconstruction for virtual reality, it suffices to keep the overlap rate between adjacent images at about 50%; an overly high overlap rate can sometimes even degrade the final result.
To reduce this redundancy, down-sampling the video frames is unavoidable. During capture the camera motion is irregular: some cameras move uniformly under machine control, while others move randomly in the user's hand without any rule. In this case, sampling at a fixed interval cannot guarantee that the overlap rate of the sampled frames satisfies the application's requirements. How to down-sample video frames efficiently and accurately is therefore an urgent problem.
At present, the common video frame down-sampling methods are:
1) Decompose the video into individual frames and sample the image sequence manually.
Although this method is reliable, it requires human interaction: assuming a video frame rate of 30 frames/second, a video of just a few minutes contains thousands of frames, and the time cost of screening them manually is too high.
2) Sample by computing the overlap rate through frame-by-frame detection.
This method automatically examines the video sequence frame by frame and extracts the frames that meet the requirement. The inter-frame overlap rate can be computed with global motion vectors, image registration, and similar methods. But because every frame is examined, the computational cost remains very high.
Experiments show that many video frames in fact need not be examined at all and can simply be skipped. How to select the frames most likely to qualify is the main research question.
Summary of the invention
The problem the present invention solves is how to reduce unnecessary frame detection by examining only the most likely candidate frames, thereby reducing the computation of sampling without affecting the quality of the result.
This paper proposes a new hierarchical self-adaptive key-frame extraction method. Through a "predict-check-correct" model, it dynamically tracks the camera motion and, according to the change in adjacent-frame overlap rate caused by that motion, adaptively extracts a few frames for testing to decide whether each is a key frame, without frame-by-frame comparison, thus reducing computational complexity. To avoid losing important information frames when the camera motion changes abruptly, sampling is performed in layers: the first layer conservatively extracts key frames with a large overlap rate, and a second layer is then extracted from these key frames to further reduce redundancy. Of course, if the camera motion is fairly steady, the first layer alone can extract key frames that satisfy the overlap-rate range. Tests show that the method retains the main information frames while reducing redundant computation and improving efficiency.
Usually, when shooting video for stitching, to guarantee stitching success and quality the camera mainly performs translation, pan, and tilt motions, avoiding zooming and rotation as far as possible. The camera motion can therefore be regarded mainly as a linear superposition of the three motion forms of translation, pan, and tilt.
Define the overlap rate R between two images I_0 and I_1 as the ratio of the overlapping area to the image area (frames in a video have the same size), i.e.

R = S_overlap / S_image    (1)
The following discusses how the overlap rate between two frames changes over time as the camera moves.
Suppose the video frames the camera captures have width W and height H. In a very short time period (0, T), the camera is at key frame F_k at time 0 and at frame F_t at time t. As the camera moves, the overlapping region of frame F_t and key frame F_k changes as follows:
When the camera translates, within the short period (0, T) its motion can be regarded as uniform with speed v, and the angle between the motion direction and the horizontal is α, as shown in Figure 1.
The overlapping area between frames F_k and F_t is

S = (W − vt·cos α)(H − vt·sin α)    (2)

Since t is very small, vt ≪ W and vt ≪ H, formula (2) can be approximated as

S ≈ W·H − vt·(H·cos α + W·sin α)    (3)
When the shooting interval between successive frames is a fixed value ΔT, the overlap rate between the n-th frame F_{k+n} after key frame F_k and F_k itself is

R = S/(W·H) ≈ 1 − (n·ΔT·v/(W·H))·(H·cos α + W·sin α)    (4)

Within the short period (0, T), α and v are regarded as constant. Formula (4) shows that the overlap rate R between F_{k+n} and key frame F_k decreases uniformly as the frame interval n grows, i.e. R is a linear function of n.
When the camera pans or tilts about a fixed point, the change of the inter-frame overlap rate over time has the same form, one horizontal and one vertical. Here we discuss the pan motion.
Within the very short period (0, T) the camera is regarded as rotating uniformly with angular velocity ω. From time 0 to time t it moves from frame F_k to frame F_t, sweeping the angle ωt. The camera's viewing angle corresponding to the image width W is θ, as shown in Figure 2. When ωt is small, the overlapping area between frames F_k and F_t is

S ≈ (W − W·ωt/θ)·H    (5)
When the shooting interval is a fixed value ΔT, the overlap rate between the n-th frame F_{k+n} after key frame F_k and F_k is

R = S/(W·H) ≈ 1 − n·ΔT·ω/θ    (6)

Within (0, T), ω is regarded as constant, and θ is a fixed value for a given camera. Formula (6) shows that the overlap rate R between F_{k+n} and key frame F_k decreases uniformly with the frame interval n, i.e. R is a linear function of n.
According to the discussion above, when the camera is in any single one of the three motions of translation, pan, and tilt, the overlap rate R can be approximated as a linear function of the frame interval n within a very short time period. For camera motion that is a linear superposition of these three motion forms, R can still be approximated as a linear function of n.
From the linear relation between R and n we can infer the frame interval n corresponding to a frame that satisfies a given overlap rate R. Let the latest key frame be F_k, and let its overlap rate and frame interval with the previous key frame F_{k-1} be R_k and N_k respectively. The frame interval N_{k+1} between F_k and the frame F_{k+1} whose overlap rate with F_k equals the desired overlap rate R_s can then be estimated from the linear relation of R and n through the pairs

Frame interval: 0, N_k, N_{k+1}
Overlap rate: 1, R_k, R_{k+1}

Setting R_{k+1} = R_s gives

N_{k+1} = N_k · (1 − R_s)/(1 − R_k)    (7)
The calculated N_{k+1} is not necessarily an integer; we take its integer part [N_{k+1}] as the frame interval between the frame to be sampled and the current key frame.
Because the camera motion is not perfectly uniform within a short period, the predicted value differs from the actual one: the overlap rate R_{k+1} between F_k and the key frame F_{k+1} taken at interval [N_{k+1}] generally does not equal the desired overlap rate R_s. We therefore consider the estimate of F_{k+1} correct whenever R_{k+1} falls within a certain interval (R_tl, R_th).
Each candidate frame F_n has three parameters: the overlap rate R_n with the previous key frame, the match count M_n, and the frame interval N_n. The key-frame constraints are: the upper and lower bounds of the overlap rate are R_th and R_tl respectively, the lower bound of the match count is M_tl, and the upper bound of the frame interval is N_th. Only when all three parameters of a candidate frame satisfy their constraints is the frame accepted as a valid key frame; otherwise the frame is discarded and a new candidate frame is chosen according to the correction model.
Because the frame rate of the captured video and the speed of the camera vary, the maximum frame interval N_th is hard to fix in advance; here we initialize N_th = N_S = scale·FPS (scale < 1, FPS is the video frame rate). During sampling, let N_mean denote the average sampling interval so far; the maximum frame interval N_th is then the smaller of 1.5·N_mean and N_S, i.e.

N_th = min(1.5·N_mean, N_S)    (8)
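The adaptive interval bound of formula (8) can be sketched as follows (the value scale = 0.5 is an illustrative assumption; the text only requires scale < 1):

```python
def interval_upper_bound(n_mean, fps, scale=0.5):
    # Formula (8): N_th = min(1.5 · N_mean, N_S) with N_S = scale · FPS.
    n_s = scale * fps
    return min(1.5 * n_mean, n_s)

# With an 8-frame average interval at 30 fps the bound is 12 frames;
# with a 20-frame average the static bound N_S = 15 takes over.
```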
The correction model decides how to choose the next candidate frame according to the result of checking the current one. If the candidate satisfies the key-frame constraints, it becomes the new key frame and formula (7) gives the next sampling interval. Otherwise the frame is discarded and a new candidate is extracted: if the overlap rate exceeds the upper limit, a new sampling interval is predicted with formula (7) and a new candidate extracted; if the overlap rate is too low, we roll back and retry the candidate with a smaller interval.
The algorithm needs two initial key frames. The first video frame is the first initial key frame; the choice of the second affects all subsequent key-frame predictions. Since the camera motion is unstable at the very start of shooting, taking a frame at some fixed interval as the second key frame yields a larger error. We therefore examine the frames after the first one frame by frame, and take the first frame that meets the key-frame constraints as the second initial key frame, avoiding the influence of the irregular initial camera motion and obtaining a good initial overlap rate.
The concrete steps of the self-adaptive key-frame sampling algorithm based on camera motion are as follows:
1) Select the first frame of the video as a key frame.
2) Find the second key frame by frame-by-frame detection, and record its frame interval N_2 and overlap rate R_2.
3) Let the current latest key frame be F_k, with overlap rate R_k and frame interval N_k. Predict the frame interval N_{k+1} of the next candidate frame C_{k+1} according to formula (7).
4) Update the upper bound N_th of the sampling interval according to formula (8); if N_{k+1} > N_th, set N_{k+1} = N_th.
5) Compute the overlap rate R_{k+1} and the number of matched features M_{k+1} between C_{k+1} and F_k.
6) If R_th > R_{k+1} > R_tl and M_{k+1} > M_tl, take the candidate frame C_{k+1} as the new key frame F_{k+1}.
Otherwise, discard the candidate C_{k+1} and choose a new one. If R_{k+1} > R_th, the sampling interval is too small and the overlap region is too large; the interval must be increased to remove more redundancy, so choose the frame at interval N_h = N_{k+1}·(1 − R_s)/(1 − R_{k+1}) from F_k as the new candidate C_{k+1}. If R_{k+1} < R_tl or M_{k+1} < M_tl, the sampling interval is too large and the overlap region or the number of feature points is too small; the interval must be reduced, and since this registration result is unreliable it cannot be used for the next estimate, so directly select the frame at interval N_l = max([N_{k+1}/2], 1) from F_k as the new candidate C_{k+1}.
7) Repeat steps 4-6 until a new key frame F_{k+1} satisfying the conditions is found.
8) Repeat steps 3-7 until the video ends.
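Steps 1)-8) can be sketched end to end as below. This is an illustrative reading of the algorithm, not the patented implementation: `overlap(i, j)` stands in for the image-registration step and is assumed to return the overlap rate and match count between frames i and j; all thresholds are example values, and the low-overlap branch simply gives up when the interval cannot shrink further.

```python
def adaptive_keyframe_sampling(num_frames, overlap, r_s=0.5,
                               r_tl=0.4, r_th=0.6, m_tl=20,
                               fps=30, scale=0.5):
    keyframes = [0]                       # step 1: first frame is a key frame
    k, n_k, r_k = 0, 0, 1.0
    for j in range(1, num_frames):        # step 2: frame-by-frame search
        r, m = overlap(0, j)
        if r_th > r > r_tl and m > m_tl:
            keyframes.append(j)
            k, n_k, r_k = j, j, r
            break
    else:
        return keyframes                  # no valid second key frame found
    intervals = [n_k]
    while True:
        # Step 3: predict the next interval with formula (7).
        n_next = max(1, int(n_k * (1 - r_s) / (1 - r_k)))
        # Step 4: clamp with the adaptive bound of formula (8).
        n_th = min(1.5 * sum(intervals) / len(intervals), scale * fps)
        n_next = min(n_next, max(1, int(n_th)))
        while True:
            cand = k + n_next
            if cand >= num_frames:
                return keyframes          # step 8: video exhausted
            r, m = overlap(k, cand)       # step 5: register the candidate
            if r_th > r > r_tl and m > m_tl:
                break                     # step 6: candidate accepted
            if r > r_th:                  # overlap too large: widen interval
                n_next = max(n_next + 1, int(n_next * (1 - r_s) / (1 - r)))
            else:                         # overlap/matches too low: roll back
                if n_next == 1:
                    return keyframes
                n_next = max(n_next // 2, 1)
        keyframes.append(cand)            # step 7: record the new key frame
        n_k, r_k, k = cand - k, r, cand
        intervals.append(n_k)
```

With a synthetic registration function whose overlap falls off linearly with the frame gap, the sampler settles on a near-constant interval, as the linear model predicts.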
When the camera's speed changes abruptly, jump sampling may lose some useful frames. For example, when the camera sweeps to one end and then sweeps back, some frames around the moment the motion direction reverses may be missed, because there the overlap rate increases as the frame interval increases, and the corresponding scene information is lost. To guarantee that all main information frames are extracted, we adopt hierarchical key-frame sampling: the first layer samples with a larger overlap rate (such as 80%) and limits the upper bound of the sampling interval (e.g. N_th = 0.25·FPS, where FPS is the frame rate). These quasi-key frames are then down-sampled once more. Since few frames remain at this stage, the sampling can compare overlap rates frame by frame. Adopting the notion of information loss rate from [9]: for a group of consecutive frames F(m), F(m+1), ..., F(m+L), the information loss rate Lost_ratio corresponding to extracting the two frames F(m) and F(m+L) as key frames is:
The information loss rate expresses how much local information is lost. To avoid losing too much information in the final stitched panorama, the upper limit of Lost_ratio is generally taken as P_lost = 10%.
Frame F(m) can then be a key frame only if

Lost_ratio < P_lost    (9)
Taking the first frame of the sequence obtained by the first-layer sampling as the initial key frame, subsequent frames are examined frame by frame; when some frame F(m+1) fails to satisfy inequality (9), the frame F(m) immediately before F(m+1) is taken as the latest key frame, and frame-by-frame detection restarts from F(m).
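A sketch of this second-layer pass, assuming a `lost_ratio(key, frame)` callable that implements the information-loss measure of [9] (whose formula the source does not reproduce); appending the final frame is an added convention, not stated in the text:

```python
def second_layer_sampling(frames, lost_ratio, p_lost=0.10):
    keyframes = [frames[0]]               # first quasi-key frame starts the pass
    key = 0
    for j in range(2, len(frames)):
        # Inequality (9): the pair (key, j) must keep Lost_ratio < P_lost.
        if j - key > 1 and lost_ratio(frames[key], frames[j]) >= p_lost:
            key = j - 1                   # last frame that still satisfied (9)
            keyframes.append(frames[key])
    if keyframes[-1] != frames[-1]:
        keyframes.append(frames[-1])      # assumed: always retain the last frame
    return keyframes
```

With a toy loss measure that grows with the frame gap, the pass keeps roughly one frame per tolerated gap.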
Description of drawings
Fig. 1 is a schematic diagram of the overlapping region between two frames during camera translation
Fig. 2 is a schematic diagram of the overlapping region during camera pan motion
Fig. 3 is the flow chart of the hierarchical self-adaptive video frame sampling of the present invention
Embodiment
The specific embodiments of the present invention are described in further detail below. The following examples are used to illustrate the invention but not to limit its scope.
Hierarchical self-adaptive key-frame extraction mainly comprises the following steps (as shown in Figure 3):
(1) Extract the feature points of the latest key frame and the candidate frame.
(2) Register the candidate frame with the latest key frame and compute their overlap rate.
(3) Check whether the candidate frame meets the key-frame requirement; if so, take it as the latest key frame, otherwise discard it.
(4) Correct the prediction model according to the difference between the predicted and actual values, and choose a candidate frame again.
(5) Repeat steps (1)-(4) until the video ends.
(6) If necessary, sub-sample the key frames to further remove redundancy.
(7) Output the final sampling result.
The above is only a preferred embodiment of the present invention. It should be pointed out that those skilled in the art can make improvements and modifications without departing from the technical principle of the present invention, and such improvements and modifications should also be regarded as within the protection scope of the present invention.
Claims (4)
1. A video frame sampling method based on image registration, characterized by comprising the following steps:
A. computing the overlap rate between two video frames by image registration;
B. predicting, with a mathematical model of the inter-frame overlap rate and the frame interval, the frame interval that satisfies a given overlap rate;
C. updating the mathematical model according to the parameters obtained from this sampling;
D. computing the next sampling interval with the updated mathematical model.
2. The video sampling method according to claim 1, characterized in that the mathematical model of the inter-frame overlap rate and the frame interval is as follows:
A video frame that has been determined to be extracted is called a key frame; a video frame awaiting detection is called a candidate frame.
When the camera is in any single one of the three motions of translation, pan, and tilt, the overlap rate R is a linear function of the frame interval n within a very short time period; for camera motion that is a linear superposition of these three motion forms, R can still be approximated as a linear function of n (within a very short time period).
From the linear relation between R and n we can infer the frame interval n of the frame that satisfies a given overlap rate R. Let the latest key frame be F_k, and let its overlap rate and frame interval with the previous key frame F_{k-1} be R_k and N_k respectively. The frame interval N_{k+1} between F_k and the frame F_{k+1} whose overlap rate with F_k equals the desired overlap rate R_s can be estimated from the pairs

Frame interval: 0, N_k, N_{k+1}
Overlap rate: 1, R_k, R_{k+1}

that is: N_{k+1} = N_k·(1 − R_s)/(1 − R_k).
3. The video sampling method according to claim 1, characterized in that the mathematical model is updated with the parameters obtained from this sampling as follows:
Each candidate frame F_n has two parameters: the overlap rate R_n with the previous key frame and the frame interval N_n. The key-frame constraints are: the upper and lower bounds of the overlap rate are R_th and R_tl respectively, and the upper bound of the frame interval is N_th = scale·FPS (scale < 1, FPS is the video frame rate). Only when both parameters of a candidate frame satisfy their constraints is the frame accepted as a valid key frame; otherwise the frame is discarded and a new candidate frame is chosen according to the correction model.
If R_th > R_{k+1} > R_tl and M_{k+1} > M_tl, take the candidate C_{k+1} as the new key frame F_{k+1} and compute the prediction correction factor r_{k+1} = (1 − R_s)/(1 − R_{k+1}). Otherwise, discard the candidate C_{k+1} and choose a new one. If R_{k+1} > R_th, the sampling interval is too small and the overlap region is too large; the interval must be increased to remove more redundancy, so choose the frame at interval N_h = N_{k+1}·(1 − R_s)/(1 − R_{k+1}) from F_k as the new candidate C_{k+1}. If R_{k+1} < R_tl, the sampling interval is too large and the overlap region or the number of feature points is too small; the interval must be reduced, and since this registration result is unreliable it cannot be used for the next estimate, so directly select the frame at interval N_l = max([N_{k+1}/2], 1) from F_k as the new candidate C_{k+1}.
4. A video frame sampling method based on image registration, characterized in that: since the piecewise linear model of the inter-frame overlap rate and the frame interval is reasonably accurate only when the change in overlap rate stays within a certain range, when the overlap rate required by the sampling is small a hierarchical sampling method can be adopted to reduce the overlap rate step by step.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN 200910118898 CN101827264A (en) | 2009-03-06 | 2009-03-06 | Hierarchical self-adaptive video frame sampling method |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN101827264A true CN101827264A (en) | 2010-09-08 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
| WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20100908 |

