CN109544694A - A deep-learning-based virtual-real hybrid modeling method for augmented reality systems - Google Patents
A deep-learning-based virtual-real hybrid modeling method for augmented reality systems
- Publication number
- CN109544694A, CN201811366602.3A, CN201811366602A
- Authority
- CN
- China
- Prior art keywords
- background
- model
- foreground
- pixel
- augmented reality
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/254—Analysis of motion involving subtraction of images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20224—Image subtraction
Abstract
A deep-learning-based virtual-real hybrid modeling method for augmented reality systems is claimed. Addressing the virtual-real hybrid modeling problem of augmented reality systems, the method first extracts all regions in which the virtual model views and real-object pictures of consecutive frames differ. The input images first pass through PBAS detection to complete the segmentation of the foreground targets; the resulting candidate target regions are fed into a VGGNet-16 model for a second-stage judgement, and the coordinates of the confirmed foreground regions are output. Combined with the model textures and the initial images, this yields the virtual-real mixed model. The proposed virtual-real hybrid modeling scheme greatly reduces the overall computational load of the algorithm, effectively lowering its hardware requirements, while fully exploiting the high image-classification accuracy of the deep convolutional neural network VGGNet-16 to guarantee target-detection quality and effectively improve modeling accuracy.
Description
Technical field
The invention belongs to the field of augmented reality, and in particular relates to a deep-learning-based virtual-real hybrid modeling method for augmented reality systems.
Background technique
Augmented reality (AR) is an emerging technology that superimposes computer-generated two- or three-dimensional virtual objects onto a real scene in real time, and uses interaction techniques to let the real scene and the virtual objects interact, giving users an audiovisual experience beyond reality and enhancing their interaction with the real environment through the added virtual digital information. The basic AR pipeline is: first locate the camera pose in the real scene, then register the virtual objects into the real scene with computer-graphics rendering techniques to generate the fused application view. However, because virtual-real synthesis under a single camera's perspective cannot exploit the depth relationships of the captured objects to optimize the display, the synthesized virtual-real model is often unrealistic and the blending rather coarse.
Regarding the virtual-real hybrid modeling problem of augmented reality systems, existing depth-based registration methods cannot keep a moving target aligned in the virtual-real model over a sufficiently long time span, and long image sequences lead to large inter-frame background changes. Methods such as frame differencing and Gaussian mixture models adapt poorly when the background changes strongly, and the ViBe method uses a fixed background-update threshold, making it hard to apply to virtual-real hybrid modeling in augmented reality systems. The PBAS algorithm is an effective moving-target detection method proposed in recent years: it is based on background modeling, its background-update threshold and foreground-segmentation threshold adapt to the background complexity, and it has a certain robustness to illumination changes. A deep-learning classifier performing a second-stage judgement can effectively improve modeling accuracy. The present invention combines the advantages of these schemes and proposes a deep-learning-based virtual-real hybrid modeling method for augmented reality systems.
Summary of the invention
The present invention aims to solve the above problems of the prior art by proposing a deep-learning-based virtual-real hybrid modeling method for augmented reality systems that greatly reduces the overall computational load of the algorithm, effectively lowering its hardware requirements, while fully exploiting the high image-classification accuracy of the deep convolutional neural network VGGNet-16 to guarantee target-detection quality and effectively improve modeling accuracy. The technical scheme of the invention is as follows:
A deep-learning-based virtual-real hybrid modeling method for augmented reality systems, comprising the following steps:
1) Input the virtual model view and the real-object image. Based on prior knowledge of the target, pre-screen the differing regions of the virtual model views and real-object pictures of consecutive frames, discarding obviously false targets.
2) Detect the pre-screened virtual model view and real-object image with the PBAS algorithm to complete the segmentation of the foreground targets and obtain candidate target regions. The PBAS algorithm fuses the background-modeling part of the SACON algorithm with the foreground-detection part of the ViBe algorithm.
3) Feed the segmented candidate target regions into a VGGNet-16 model for a second-stage judgement, and output the coordinates of the confirmed foreground regions.
4) Combine the model textures with the initial images to obtain the virtual-real mixed model.
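The four steps above can be sketched end to end. This is a minimal illustrative skeleton, not the patented implementation: `pbas_segment` and `vgg_confirm` are trivial placeholders standing in for the PBAS detector and the fine-tuned VGGNet-16 classifier, and all array names are assumptions.

```python
import numpy as np

def prescreen(virtual_view, real_image, diff_threshold=30):
    """Step 1: keep only pixels where the rendered view and the photo differ."""
    diff = np.abs(virtual_view.astype(np.int32) - real_image.astype(np.int32))
    return diff.max(axis=-1) > diff_threshold

def pbas_segment(mask):
    """Step 2 placeholder: PBAS would refine the mask into foreground blobs."""
    return mask

def vgg_confirm(mask):
    """Step 3 placeholder: a classifier would accept or reject each region."""
    return mask

def blend(virtual_view, real_image, mask):
    """Step 4: composite -- take real pixels where a true foreground was found."""
    out = virtual_view.copy()
    out[mask] = real_image[mask]
    return out

rng = np.random.default_rng(0)
virtual = rng.integers(0, 50, (4, 4, 3)).astype(np.uint8)
real = virtual.copy()
real[1, 2] = [200, 200, 200]          # one genuinely differing pixel
mask = vgg_confirm(pbas_segment(prescreen(virtual, real)))
mixed = blend(virtual, real, mask)
assert mask[1, 2] and mask.sum() == 1
assert (mixed[1, 2] == [200, 200, 200]).all()
```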
Further, step 1) pre-screens the results based on prior knowledge of the target, discarding obviously false targets.
Further, step 2) — detection with the PBAS algorithm, completing the segmentation of the foreground targets and obtaining candidate target regions — specifically comprises:
A1. Build the background model with a SACON-like method, collecting the pixels of the first N frames;
A2. Under the background model of step A1, decide whether the current pixel belongs to the foreground or the background by comparing the current frame I(xi) with the background model B(xi): compute the colour-space Euclidean distance between the current pixel value and each pixel value in the sample set; if the number of samples whose distance is below the distance threshold R(xi) is smaller than Sdmin, the current pixel is declared a foreground point, otherwise a background point;
A3. Update the background model and compute the background complexity;
A4. Adaptively adjust the foreground-segmentation threshold and the update strategy;
A5. Fill holes and remove non-target regions.
Further, step A1 specifically comprises: for each pixel, the background model is represented as
B(xi) = {B1(xi), …, Bk(xi), …, BN(xi)}
where xi denotes a pixel of the i-th frame image, B(xi) the background model at frame i, and Bk(xi) one sample pixel value in the background model B(xi). For a colour image, Bk(xi) = (ri, gi, bi), its corresponding RGB value; for a grayscale image it is the gray value.
Further, the foreground-detection result of step A2 is
F(xi) = 1, if #{k : dist(I(xi), Bk(xi)) < R(xi)} < Sdmin; F(xi) = 0, otherwise
where F(xi) is the foreground label of pixel xi: if the number of samples whose colour-space Euclidean distance to the current pixel value is below the distance threshold R(xi) is smaller than Sdmin, the pixel is a foreground point (value 1), otherwise a background point (value 0); dist denotes the colour-space Euclidean distance between a pixel and the corresponding point of the background model.
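The decision rule above can be sketched per pixel; a minimal NumPy version, with sample values and the Sdmin default chosen purely for illustration:

```python
import numpy as np

def is_foreground(pixel, samples, R, s_dmin=2):
    """PBAS-style test: foreground when fewer than s_dmin of the N background
    samples lie within the distance threshold R of the current value."""
    dists = np.linalg.norm(samples.astype(float) - pixel.astype(float), axis=1)
    close = np.count_nonzero(dists < R)
    return close < s_dmin           # F(x_i) = 1 when too few samples match

samples = np.array([[10, 10, 10]] * 5 + [[200, 200, 200]] * 15, dtype=np.uint8)
assert is_foreground(np.array([90, 90, 90], np.uint8), samples, R=30.0)      # matches no sample
assert not is_foreground(np.array([12, 11, 10], np.uint8), samples, R=30.0)  # matches 5 samples
```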
Further, the model update and background-complexity calculation of step A3 specifically comprise:
During the background-model update, the sample to be replaced is chosen at random, and the sample set of a pixel neighbourhood is likewise updated at random. Concretely, foreground regions are not updated; in background regions, with the current update probability 1/T(xi), a randomly selected sample Bk(xi) of the background model is replaced by the current pixel value I(xi), so each background pixel is updated with probability 1/T(xi). At the same time, within the neighbourhood of xi, a pixel yi is randomly selected, and in the same manner its background sample Bk(yi) is replaced by its current pixel value V(yi).
When a sample is updated, the average of the minimum distances in the sample set serves as the measure of background complexity. The calculation is as follows: while building the background model B(xi), a minimum-distance model D(xi) is also constructed:
D(xi) = {D1(xi), …, DN(xi)}
The current minimum distance is dmin(xi) = mink dist(I(xi), Bk(xi)). The minimum-distance model is built by the steps above via the correspondence dmin(xi) → Dk(xi), and the background complexity is then given by the mean of the minimum distances:
d̄min(xi) = (1/N)·Σk Dk(xi)
where N is the number of minimum-distance samples.
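The conservative random update with neighbour diffusion and the minimum-distance history D(xi) can be sketched as below. This is an illustrative reading of the process above: array names, the fixed T, and recording the minimum distance at replacement time are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, H, W = 10, 5, 5
model = rng.integers(0, 256, (H, W, N, 3)).astype(np.uint8)
dmin_model = np.full((H, W, N), 255.0)      # D(x_i): minimum-distance history

def update_pixel(model, dmin_model, y, x, frame, T=4.0):
    """Background pixel (y, x): with probability 1/T replace a random sample
    with the current value, record d_min, and update a random neighbour too."""
    if rng.random() < 1.0 / T:
        k = rng.integers(N)
        model[y, x, k] = frame[y, x]
        d = np.linalg.norm(model[y, x].astype(float) - frame[y, x], axis=1)
        dmin_model[y, x, k] = d.min()       # feed the complexity estimate
        ny = int(np.clip(y + rng.integers(-1, 2), 0, H - 1))
        nx = int(np.clip(x + rng.integers(-1, 2), 0, W - 1))
        model[ny, nx, rng.integers(N)] = frame[ny, nx]   # neighbour diffusion

frame = rng.integers(0, 256, (H, W, 3)).astype(np.uint8)
for _ in range(200):
    update_pixel(model, dmin_model, 2, 2, frame)

# Background complexity: mean of the stored minimum distances at this pixel.
complexity = dmin_model[2, 2].mean()
assert complexity >= 0.0
```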
Further, the adaptive adjustment of the foreground-segmentation threshold and the update strategy of step A4 specifically comprise:
The segmentation threshold R(xi) is adjusted according to the background complexity:
R(xi) ← R(xi)·(1 − Rinc/dec), if R(xi) > d̄min(xi)·Rscale; R(xi)·(1 + Rinc/dec), otherwise
where Rinc/dec and Rscale are fixed constants.
The background-model update rate is adjusted adaptively: when the current pixel xi is a background point, its corresponding background model is updated; if a neighbour yi of xi is a foreground pixel, the update of its background model may likewise occur. A parameter T(xi) is introduced to dynamically control the speed of this process, so that the update rate increases when a pixel is judged background and decreases when it is judged foreground. When the scene changes strongly, the background complexity is relatively high and foreground segmentation is more prone to misjudgement, so the raising or lowering of the update rate should slow down accordingly; conversely, when the scene is stable, it should speed up. The update strategy is:
T(xi) ← T(xi) + Tinc/d̄min(xi), if F(xi) = 1; T(xi) − Tdec/d̄min(xi), if F(xi) = 0
where F(xi) is the foreground-detection result, and Tinc and Tdec respectively denote the increase and decrease amplitudes of the update rate.
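The two adaptation rules above can be sketched directly; the constants below are illustrative values, not taken from the patent.

```python
# Adaptive update of the segmentation threshold R(x_i) and the learning
# parameter T(x_i) (update probability is 1/T, so a larger T means slower
# updating). All constants are illustrative.
R_INC_DEC, R_SCALE = 0.05, 5.0
T_INC, T_DEC = 1.0, 0.05
T_LOWER, T_UPPER = 2.0, 200.0

def update_R(R, dmin_mean):
    # complex scene (large mean d_min): grow the threshold; stable: shrink it
    if R > dmin_mean * R_SCALE:
        return R * (1.0 - R_INC_DEC)
    return R * (1.0 + R_INC_DEC)

def update_T(T, is_fg, dmin_mean):
    # rate changes are damped by the background complexity d_min
    if is_fg:
        T += T_INC / max(dmin_mean, 1.0)
    else:
        T -= T_DEC / max(dmin_mean, 1.0)
    return min(max(T, T_LOWER), T_UPPER)

R = 18.0
assert update_R(R, dmin_mean=1.0) < R     # stable scene: threshold shrinks
assert update_R(R, dmin_mean=10.0) > R    # complex scene: threshold grows
assert update_T(16.0, True, 2.0) > 16.0   # foreground: T grows, update slows
assert update_T(16.0, False, 2.0) < 16.0  # background: T shrinks, update speeds up
```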
Further, the hole filling and non-target-region removal of step A5 specifically comprise:
First, holes are eliminated with a morphological opening;
the areas of the connected regions of the foreground image are computed, and regions of fewer than 100 pixels are discarded;
the aspect ratio of the bounding rectangle of each remaining region is computed, and regions with an aspect ratio greater than 4:3 are discarded.
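The three post-processing steps above can be sketched with `scipy.ndimage`; the mask contents are invented test data, and using a 3×3 structuring element for the opening is an assumption consistent with the 3-pixel width mentioned later in the description.

```python
import numpy as np
from scipy import ndimage

def clean_foreground(mask, min_area=100, max_ratio=4.0 / 3.0):
    """Opening to remove noise/holes, then drop connected regions that are
    too small (< min_area pixels) or too elongated (aspect ratio > 4:3)."""
    opened = ndimage.binary_opening(mask, structure=np.ones((3, 3), bool))
    labels, n = ndimage.label(opened)
    out = np.zeros_like(mask)
    for idx, region in enumerate(ndimage.find_objects(labels), start=1):
        blob = labels[region] == idx
        h = region[0].stop - region[0].start
        w = region[1].stop - region[1].start
        ratio = max(h, w) / max(min(h, w), 1)
        if blob.sum() >= min_area and ratio <= max_ratio:
            out[region] |= blob
    return out

mask = np.zeros((60, 60), bool)
mask[5:25, 5:25] = True        # 400 px, ratio 1:1  -> kept
mask[40:43, 40:43] = True      # 9 px              -> dropped (too small)
mask[45:50, 0:40] = True       # 5x40 strip        -> dropped (ratio > 4:3)
cleaned = clean_foreground(mask)
assert cleaned[10, 10] and not cleaned[41, 41] and not cleaned[47, 20]
```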
Further, step 3) sets the number of output-layer classes of the VGGNet-16 model to 2 while the rest of the network structure remains unchanged, i.e. it solves the two-class problem of distinguishing real pictures from model pictures. During fine-tuning, the whole adjusted convolutional neural network is initialized with the parameters of the original VGGNet-16 model trained on the ImageNet dataset, and the parameters are then fine-tuned with samples collected by the augmented reality system, yielding a new convolutional neural network for the second-stage judgement. If the accuracy of the output foreground coordinates is below standard, the method returns; otherwise the confirmed foreground coordinates are output and, combined with the model textures and initial images, yield the virtual-real mixed model.
The advantages and beneficial effects of the present invention are as follows:
The invention provides a deep-learning-based virtual-real hybrid modeling method for augmented reality systems. Addressing the virtual-real hybrid modeling problem, the method first extracts all regions in which the virtual model views and real-object pictures of consecutive frames differ; the input images first pass through PBAS detection to complete the segmentation of the foreground targets, the segmented candidate target regions are then fed into the VGGNet-16 model for a second-stage judgement, the confirmed foreground coordinates are output and, combined with the model textures and initial images, yield the virtual-real mixed model. The proposed virtual-real hybrid modeling scheme greatly reduces the overall computational load of the algorithm, effectively lowering its hardware requirements, while fully exploiting the high image-classification accuracy of the deep convolutional neural network VGGNet-16 to guarantee target-detection quality and effectively improve modeling accuracy.
Detailed description of the invention
Fig. 1 is the flow diagram of the deep-learning-based virtual-real hybrid modeling method for augmented reality systems provided by a preferred embodiment of the present invention.
Fig. 2 is the schematic diagram of the PBAS-based preliminary detection provided by the invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and in detail below with reference to the drawings. The described embodiments are only some of the embodiments of the invention.
The technical solution of the invention for solving the above technical problems is:
A deep-learning-based virtual-real hybrid modeling method for augmented reality systems, mainly comprising the following steps:
1. Image input: pre-screen the results based on prior knowledge of the target, discarding obviously false targets.
2. Preliminary detection with the PBAS algorithm. Target detection uses the PBAS algorithm for its good overall performance: its background-modeling part draws on the SACON algorithm and its foreground-detection part on the ViBe algorithm, so that the algorithm adaptively changes the update rate of the background model and the judgement threshold of foreground segmentation according to the complexity of the background, adapting to scene changes.
1) The PBAS algorithm builds the background with a SACON-like method, collecting the pixels of the first N frames; for each pixel, the background model is then represented as
B(xi) = {B1(xi), …, Bk(xi), …, BN(xi)}
where xi denotes a pixel of the i-th frame image, B(xi) the background model at frame i, and Bk(xi) one sample pixel value in the background model B(xi). For a colour image, Bk(xi) = (ri, gi, bi), its corresponding RGB value; for a grayscale image it is the gray value.
2) model that previous step is established is a kind of background model based on sampling statistics, under such background model,
Current pixel belongs to prospect or background can be by comparing present frame I (xi) and background model B (xi) determine.By comparing
Pixel value in sample set and current frame pixel value color space Euclidean distance, if distance is less than distance threshold R (xi)
Number of samples ratio SdminIt is few, then determine that current pixel point is otherwise background dot for foreground point.Foreground detection result:
3) Model update and background-complexity calculation.
a) During the background-model update, the sample to be replaced is chosen at random, and the sample set of a pixel neighbourhood is likewise updated at random. Concretely, foreground regions are not updated; in background regions, with the current update probability 1/T(xi), a randomly selected sample Bk(xi) of the background model is replaced by the current pixel value I(xi), so each background pixel is updated with probability 1/T(xi). At the same time, within the neighbourhood of xi, a pixel yi is randomly selected, and in the same manner its background sample Bk(yi) is replaced by its current pixel value V(yi).
b) When a sample of the sample set is updated, the average of the minimum distances serves as the measure of background complexity. The calculation is as follows: while building the background model B(xi), a minimum-distance model D(xi) is also constructed:
D(xi) = {D1(xi), …, DN(xi)}
The current minimum distance is dmin(xi) = mink dist(I(xi), Bk(xi)). The minimum-distance model is built by the steps above via the correspondence dmin(xi) → Dk(xi), and the background complexity is then given by the mean of the minimum distances: d̄min(xi) = (1/N)·Σk Dk(xi).
4) Adaptive adjustment of the foreground-segmentation threshold and the update strategy.
a) During the adjustment of the foreground-segmentation threshold: the more violently the scene changes, the higher the background complexity and the more easily background pixels are misjudged as foreground, so the segmentation threshold should increase accordingly to keep background pixels from being mistaken for foreground; conversely, the more stable the scene, the lower the background complexity and the smaller the segmentation threshold should be, to guarantee complete foreground segmentation. The specific adjustment strategy is:
R(xi) ← R(xi)·(1 − Rinc/dec), if R(xi) > d̄min(xi)·Rscale; R(xi)·(1 + Rinc/dec), otherwise
where Rinc/dec and Rscale are fixed constants.
b) Adaptive adjustment of the background-model update rate: when the current pixel xi is a background point, its corresponding background model is updated; if a neighbour yi of xi is a foreground pixel, the update of its background model may likewise occur, which means the edge of a long-static foreground region is only gradually judged to be background. The algorithm introduces a parameter T(xi) to dynamically control the speed of this process, so that the update rate increases when a pixel is judged background and decreases when it is judged foreground. When the scene changes strongly, the background complexity is relatively high and foreground segmentation is more prone to misjudgement, so the raising or lowering of the update rate should slow down accordingly; conversely, when the scene is stable, it should speed up. The update strategy is:
T(xi) ← T(xi) + Tinc/d̄min(xi), if F(xi) = 1; T(xi) − Tdec/d̄min(xi), if F(xi) = 0
where F(xi) is the foreground-detection result, and Tinc and Tdec respectively denote the increase and decrease amplitudes of the update rate.
5) Hole filling and non-target-region elimination.
After the foreground-segmentation process there may be holes in the foreground regions, and the raw detection results themselves may be incomplete, which affects detection accuracy. At the same time, the number of regions fed into the convolutional neural network for the second-stage judgement should be reduced to lower the overall computational load. The segmented foreground regions are therefore processed as follows:
a) First, holes are eliminated with a morphological opening; the algorithm uses dilation and erosion with a 3-pixel-wide element;
b) the areas of the connected regions of the foreground image are computed, and regions of fewer than 100 pixels are discarded;
c) the aspect ratio of the bounding rectangle of each remaining region is computed, and regions with an aspect ratio greater than 4:3 are discarded. The 3-pixel width, the 100-pixel area threshold and the 4:3 aspect ratio in the steps above were obtained by repeated testing.
3. Second-stage classification judgement based on deep learning.
The foreground coordinates screened by the above methods still contain a large amount of false data, which must be further judged by a convolutional neural network model of higher classification precision.
For transfer learning of the convolutional neural network, the present invention fine-tunes the whole parameter set of the network, or the parameters of certain layers, modifies the number of output classes of the last layer, and fine-tunes the VGGNet-16 model with samples of the target scene.
The number of output-layer classes of the VGGNet-16 model is set to 2 while the rest of the network structure remains unchanged, i.e. the two-class problem of distinguishing real pictures from model pictures is solved. During fine-tuning, the whole adjusted convolutional neural network is initialized with the parameters of the original VGGNet-16 model trained on the ImageNet dataset, and the parameters are then fine-tuned with samples collected by the augmented reality system, yielding a new convolutional neural network for the second-stage judgement.
4. If the accuracy of the output foreground coordinates is below standard, return to step 3; otherwise output the confirmed foreground coordinates and, combining the model textures with the initial images, obtain the virtual-real mixed model.
Specifically, as shown in Fig. 1, the deep-learning-based virtual-real hybrid modeling method for augmented reality systems runs as follows:
Step 1: image input; pre-screen the results based on prior knowledge of the target, discarding obviously false targets.
Step 2: preliminary detection with the PBAS algorithm, as shown in Fig. 2.
Step 3: second-stage classification judgement based on deep learning.
Step 4: if the accuracy of the output foreground coordinates is below standard, return to step 3; otherwise output the confirmed foreground coordinates and, combining the model textures with the initial images, obtain the virtual-real mixed model.
1. The present invention addresses the virtual-real hybrid modeling problem of augmented reality systems. The method first extracts all regions where the virtual model views and real-object pictures of consecutive frames differ; the input images first pass through PBAS detection to complete the segmentation of the foreground targets, the segmented candidate target regions are then fed into the VGGNet-16 model for a second-stage judgement, the confirmed foreground coordinates are output and, combined with the model textures and initial images, yield the virtual-real mixed model. The proposed virtual-real hybrid modeling scheme greatly reduces the overall computational load of the algorithm, effectively lowering its hardware requirements, while fully exploiting the high image-classification accuracy of the deep convolutional neural network VGGNet-16 to guarantee target-detection quality and effectively improve modeling accuracy.
2. Target detection uses the PBAS algorithm for its good overall performance; its background-modeling part draws on the SACON algorithm and its foreground-detection part on the ViBe algorithm, so that the algorithm adaptively changes the update rate of the background model and the judgement threshold of foreground segmentation according to the complexity of the background, adapting to scene changes. In particular, PBAS builds the background with a SACON-like method, collecting the pixels of the first N frames; for each pixel, the background model can then be represented as
B(xi) = {B1(xi), …, Bk(xi), …, BN(xi)}
3. The colour-space Euclidean distances between the pixel values of the sample set and the current pixel value are compared; if the number of samples whose distance is below the distance threshold R(xi) is smaller than Sdmin, the current pixel is declared a foreground point, otherwise a background point. Foreground-detection result: F(xi) = 1, if #{k : dist(I(xi), Bk(xi)) < R(xi)} < Sdmin; F(xi) = 0, otherwise.
4. Foreground regions are not updated; in background regions, with the current update probability 1/T(xi), a randomly selected sample Bk(xi) of the background model is replaced by the current pixel value I(xi), so each background pixel is updated with probability 1/T(xi). At the same time, within the neighbourhood of xi, a pixel yi is randomly selected, and in the same manner its background sample Bk(yi) is replaced by its current pixel value V(yi).
5. While building the background model B(xi), a minimum-distance model D(xi) is also constructed:
D(xi) = {D1(xi), …, DN(xi)}
6. The adaptive adjustment strategy of the foreground-segmentation threshold is: R(xi) ← R(xi)·(1 − Rinc/dec), if R(xi) > d̄min(xi)·Rscale; R(xi)·(1 + Rinc/dec), otherwise.
7. The update strategy is: T(xi) ← T(xi) + Tinc/d̄min(xi), if F(xi) = 1; T(xi) − Tdec/d̄min(xi), if F(xi) = 0.
8. For hole filling and non-target-region elimination, the segmented foreground regions are processed as follows:
a) First, holes are eliminated with a morphological opening; the algorithm uses dilation and erosion with a 3-pixel-wide element;
b) the areas of the connected regions of the foreground image are computed, and regions of fewer than 100 pixels are discarded;
c) the aspect ratio of the bounding rectangle of each remaining region is computed, and regions with an aspect ratio greater than 4:3 are discarded. The 3-pixel width, the 100-pixel area threshold and the 4:3 aspect ratio in the steps above were obtained by repeated testing.
9. The number of output-layer classes of the VGGNet-16 model is set to 2 while the rest of the network structure remains unchanged, solving the two-class problem of distinguishing real pictures from model pictures. During fine-tuning, the whole adjusted convolutional neural network is initialized with the parameters of the original VGGNet-16 model trained on the ImageNet dataset, and the parameters are then fine-tuned with samples collected by the augmented reality system, yielding a new convolutional neural network for the second-stage judgement.
10. If the accuracy of the output foreground coordinates is below standard, return to the previous step; otherwise output the confirmed foreground coordinates and, combining the model textures with the initial images, obtain the virtual-real mixed model, achieving a good feedback-regulation effect.
The above embodiments should be understood as merely illustrating, rather than limiting, the scope of the invention. Having read the content recorded herein, a skilled person may make various changes or modifications to the invention, and such equivalent changes and modifications likewise fall within the scope of the claims of the present invention.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811366602.3A CN109544694A (en) | 2018-11-16 | 2018-11-16 | A kind of augmented reality system actual situation hybrid modeling method based on deep learning |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109544694A true CN109544694A (en) | 2019-03-29 |
Family
ID=65848028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811366602.3A Pending CN109544694A (en) | 2018-11-16 | 2018-11-16 | A kind of augmented reality system actual situation hybrid modeling method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109544694A (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110503664A (en) * | 2019-08-07 | 2019-11-26 | 江苏大学 | An Improved Local Adaptive Sensitivity Background Modeling Method |
CN110888535A (en) * | 2019-12-05 | 2020-03-17 | 上海工程技术大学 | AR system capable of improving on-site reality |
CN111178291A (en) * | 2019-12-31 | 2020-05-19 | 北京筑梦园科技有限公司 | Parking payment system and parking payment method |
CN112003999A (en) * | 2020-09-15 | 2020-11-27 | 东北大学 | Three-dimensional virtual reality synthesis algorithm based on Unity 3D |
CN112101232A (en) * | 2020-09-16 | 2020-12-18 | 国网上海市电力公司 | Flame detection method based on multiple classifiers |
CN114327341A (en) * | 2021-12-31 | 2022-04-12 | 江苏龙冠影视文化科技有限公司 | Remote interactive virtual display system |
CN114694090A (en) * | 2022-03-04 | 2022-07-01 | 浙江工业大学 | Campus abnormal behavior detection method based on improved PBAS algorithm and YOLOv5 |
CN114694090B (en) * | 2022-03-04 | 2025-05-30 | 浙江工业大学 | A campus abnormal behavior detection method based on improved PBAS algorithm and YOLOv5 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020084974A1 (en) * | 1997-09-01 | 2002-07-04 | Toshikazu Ohshima | Apparatus for presenting mixed reality shared among operators |
GB0818561D0 (en) * | 2008-10-09 | 2008-11-19 | Isis Innovation | Visual tracking of objects in images, and segmentation of images |
WO2015144209A1 (en) * | 2014-03-25 | 2015-10-01 | Metaio Gmbh | Method and system for representing a virtual object in a view of a real environment |
US20170361216A1 (en) * | 2015-03-26 | 2017-12-21 | Bei Jing Xiao Xiao Niu Creative Technologies Ltd. | Method and system incorporating real environment for virtuality and reality combined interaction |
Non-Patent Citations (3)
Title |
---|
Wan Jian et al.: "Background Modeling with Adaptive Neighborhood Correlation", Journal of Image and Graphics * |
Hou Chang et al.: "Moving Object Detection Algorithm Based on a Deep Encoder-Decoder Network", Computer Systems & Applications * |
Yan Chunjiang et al.: "Deep-Learning-Based Intrusion Detection of Engineering Vehicles on Transmission Lines", Information Technology * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109544694A (en) | A kind of augmented reality system actual situation hybrid modeling method based on deep learning | |
CN113160192B (en) | Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background | |
CN109961049B (en) | Cigarette brand identification method under complex scene | |
CN101371274B (en) | Edge Comparison in Segmentation of Video Sequences | |
WO2021208275A1 (en) | Traffic video background modelling method and system | |
CN108470354A (en) | Video target tracking method, device and realization device | |
CN108520219A (en) | A kind of multiple dimensioned fast face detecting method of convolutional neural networks Fusion Features | |
CN101371273A (en) | Segmentation of Video Sequences | |
CN106664417A (en) | Content adaptive background-foreground segmentation for video coding | |
CN107833242A (en) | One kind is based on marginal information and improves VIBE moving target detecting methods | |
CN114495170B (en) | Pedestrian re-recognition method and system based on local suppression self-attention | |
CN111161313A (en) | Method and device for multi-target tracking in video stream | |
CN108470178B (en) | A depth map saliency detection method combined with depth reliability evaluation factor | |
CN110633727A (en) | Deep neural network ship target fine-grained identification method based on selective search | |
Liu et al. | Image edge recognition of virtual reality scene based on multi-operator dynamic weight detection | |
CN114677289A (en) | An image dehazing method, system, computer equipment, storage medium and terminal | |
CN112446871A (en) | Tunnel crack identification method based on deep learning and OpenCV | |
CN111080754B (en) | Character animation production method and device for connecting characteristic points of head and limbs | |
CN114663562A (en) | Method and system for optimizing middle painting image based on artificial intelligence and pattern recognition | |
CN119445372A (en) | An optimization method for furrow recognition algorithm | |
CN116580121B (en) | Method and system for generating 2D model by single drawing based on deep learning | |
CN113642650B (en) | Multi-beam sonar sunken ship detection method based on multi-scale template matching and adaptive color screening | |
CN113763432B (en) | Target detection tracking method based on image definition and tracking stability conditions | |
CN110969113B (en) | Auxiliary judging system and method for float state | |
Shen et al. | A method of billiard objects detection based on Snooker game video |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190329 |