
CN110135354B - Change detection method based on live-action three-dimensional model - Google Patents


Info

Publication number
CN110135354B
CN110135354B
Authority
CN
China
Prior art keywords
change
model
generate
image
Prior art date
Legal status
Active
Application number
CN201910412076.8A
Other languages
Chinese (zh)
Other versions
CN110135354A (en)
Inventor
黄先锋
张帆
石芸
赵峻弘
Current Assignee
Wuhai Dashi Intelligence Technology Co ltd
Wuhan University WHU
Original Assignee
Wuhai Dashi Intelligence Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhai Dashi Intelligence Technology Co ltd
Priority to CN201910412076.8A
Publication of CN110135354A
Application granted
Publication of CN110135354B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 Classification techniques
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images

Abstract

The invention discloses a change detection method based on a live-action three-dimensional model, which specifically comprises the following steps: S1, calculating the overlapping area; S2, resampling the models; S3, segmenting the DOM and DSM to generate a patch object set; S4, judging whether a patch is a changed area; S5, generating a classifier by a deep-learning method; S6, sample training; S7, generating the classifier; S8, predicting the type of the changed ground objects. The invention relates to the technical field of three-dimensional digitization. By using both the color and the geometric information of the live-action three-dimensional model, the method performs object-oriented segmentation and detects changes with the patch object as the basic unit; once the changed areas are determined, a deep-learning method identifies the type of ground-object change. This greatly improves change detection precision and change-type recognition accuracy, and the joint use of color and geometric information improves detection precision and enriches the classification basis.

Description

Change detection method based on live-action three-dimensional model
Technical Field
The invention relates to the technical field of three-dimensional digitization, in particular to a change detection method based on a live-action three-dimensional model.
Background
Change detection is one of the key technologies in land-cover monitoring, land-use monitoring, disaster assessment, disaster prediction, geographic information data updating, and related fields, and has long attracted attention. Change detection comprises change-area detection and change-type identification. The traditional change detection workflow consists of generating a difference map and classifying the changed areas; difference maps are obtained by methods such as image differencing and image ratioing. These pixel-based methods are only suitable for large-scale satellite images or low-resolution aerial images; on the increasingly common high-resolution images they easily produce a large number of fragments and hence excessive pseudo-change regions, which hinders later data processing. Traditional classification methods are divided into supervised and unsupervised classification; however, both operate on images and use only the color information of the images, so the classification basis is too narrow and the classification accuracy is low.
With the rapid development of unmanned aerial vehicle (UAV) technology, UAV images are increasingly used for acquiring geographic information source data thanks to their low acquisition cost, high efficiency, and high resolution. Live-action three-dimensional model data generated from UAV images has also become an important form of geographic information data. Such data carries both color and geometric information and can be applied to change detection. Moreover, given the high resolution of UAV images, applying an object-oriented method that takes segmented objects as the basic unit of change detection can greatly improve detection precision.
Deep learning is a new field of machine learning research motivated by building and simulating neural networks that analyze and learn like the human brain, imitating its mechanisms to interpret data such as images, sound, and text. Deep learning aims to learn better features by constructing machine learning models with many hidden layers and massive training data, thereby improving classification accuracy. Tracing it to its roots, the concept of deep learning derives from research on artificial neural networks: low-level features are combined into more abstract high-level representations of attribute categories or features in order to discover distributed feature representations of the data. The convolutional neural network (CNN), a deep learning network with a convolutional structure dedicated to image classification and recognition, can automatically extract spatial features from images; it takes the pixel to be classified together with its neighborhood pixels as input and converts them into features that a machine learning task can exploit effectively. In recent years, image classification with neural network methods has matured steadily and its application fields have expanded. Compared with traditional classification methods, deep learning has a strong ability to learn the essential characteristics of a data set from a small number of samples, which can greatly improve the recognition accuracy of changed ground-object types.
In summary, the invention provides a change detection method based on a live-action three-dimensional model, which uses live-action three-dimensional model data, detects changed areas with an object-oriented method, and identifies change types with a deep-learning method, thereby greatly improving change detection precision and change-type identification accuracy.
Disclosure of Invention
Technical problem to be solved
Aiming at the defects of the prior art, the invention provides a change detection method based on a live-action three-dimensional model, which solves the problem that traditional image-based change detection methods use only the color information of the images and therefore struggle to achieve satisfactory change detection precision and change-type identification accuracy.
(II) technical scheme
In order to achieve the purpose, the invention is realized by the following technical scheme: a change detection method based on a live-action three-dimensional model specifically comprises the following steps:
S1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional models;
S2, performing texture resampling and elevation resampling on the three-dimensional models to generate a digital orthophoto map (DOM) and a digital surface model (DSM);
S3, rasterizing the DOM and DSM from step S2 into images and performing object-oriented segmentation to generate a patch object set;
S4, judging, from the elevation change within each patch object, whether the patch is a changed area;
S5, collecting sample data of the different ground-object types;
S6, training the samples;
S7, generating a classifier;
and S8, inputting the color and elevation information of a patch to obtain the type of the changed ground object.
Preferably, calculating the overlapping area in step S1 specifically comprises: first reading in the live-action three-dimensional model data from before and after the change and computing the boundary ranges of the two epochs to obtain their overlapping area; then setting a block length and width, re-dividing the overlapping area into blocks, and performing the subsequent processing on each block.
Preferably, the model resampling in step S2 is divided into texture resampling and elevation resampling. A horizontal sampling grid is generated from the boundary range of the block, with grid cell sizes Δx and Δy set according to the resolution of the model. Given the grid origin (x₀, y₀), the horizontal coordinates of grid point (i, j) are x = x₀ + i·Δx, y = y₀ + j·Δy.
Preferably, the digital orthophoto map DOM is generated by taking, at each grid point's horizontal coordinates, the texture color value of the model point at the corresponding position as the z value; the digital surface model DSM is then generated by taking the elevation value of the model point at the corresponding position as the z value.
Preferably, segmenting the DOM and DSM in step S3 to generate the patch object set proceeds as follows. The generated digital orthophoto map is segmented with an efficient graph-based segmentation method, which divides the image into a number of specific regions with distinctive properties; it preserves detail in low-variability regions while ignoring detail in high-variability regions, thereby reducing the generation of fine fragments and yielding a good segmentation result. Using the digital surface model DSM, the grid-point elevation values are stretched to the range 0-255 to generate a grayscale image, which is divided into non-overlapping regions by threshold segmentation. Merging the two segmentation results gives the final patch object set.
Preferably, judging whether a patch is a changed area in step S4 comprises, for each patch object, computing the mean elevation difference within the patch and setting a threshold: if the mean elevation difference is higher than the threshold, the patch is regarded as a candidate changed area; otherwise it is regarded as unchanged. This yields the initial changed areas.
Preferably, the classifier in step S7 is generated by a deep-learning method. A deep-learning network is built by gathering many neural units together into a layered structure. The simplest network consists of an input layer, an output layer, and one hidden layer; each layer contains several neurons, the neurons of each layer are connected to the neurons of the next layer, and the output of one layer serves as the input of the next. Such a network is also called a fully connected network.
Preferably, predicting the type of the changed ground object in step S8 comprises inputting a ground-object patch into the classifier, which computes and outputs the probability of each class; the class with the highest probability is the class of the changed ground object.
(III) advantageous effects
The invention provides a change detection method based on a live-action three-dimensional model. Compared with the prior art, it has the following beneficial effects. The method comprises: S1, calculating the overlapping area before and after the change from the range of the live-action three-dimensional models; S2, performing texture resampling and elevation resampling on the models to generate a digital orthophoto map DOM and a digital surface model DSM; S3, rasterizing the DOM and DSM from step S2 and performing object-oriented segmentation to generate a patch object set; S4, judging from the elevation change within each patch object whether the patch is a changed area; S5, collecting sample data of the different ground-object types; S6, training the samples; S7, generating a classifier; S8, inputting the color and elevation information of a patch to obtain the type of the changed ground object. By using both the color and the geometric information of the live-action three-dimensional model, the method segments with an object-oriented approach and detects changes with the patch object as the basic unit; after the changed areas are determined, a deep-learning method recognizes the type of ground-object change. This greatly improves change detection precision and change-type recognition accuracy, provides favorable conditions for further rejecting pseudo-change areas, and, by jointly exploiting color and geometric information, improves detection precision and enriches the classification basis.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to FIG. 1, an embodiment of the present invention provides a technical solution: a change detection method based on a live-action three-dimensional model, specifically comprising the following steps:
S1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional models;
S2, performing texture resampling and elevation resampling on the three-dimensional models to generate a digital orthophoto map (DOM) and a digital surface model (DSM);
S3, rasterizing the DOM and DSM from step S2 into images and performing object-oriented segmentation to generate a patch object set;
S4, judging, from the elevation change within each patch object, whether the patch is a changed area;
S5, collecting sample data of the different ground-object types;
S6, training the samples;
S7, generating a classifier;
and S8, inputting the color and elevation information of a patch to obtain the type of the changed ground object.
In the present invention, calculating the overlapping area in step S1 specifically comprises: first reading in the live-action three-dimensional model data from before and after the change and computing the boundary ranges of the two epochs to obtain their overlapping area; then setting a block length and width, re-dividing the overlapping area into blocks, and performing the subsequent processing on each block.
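By way of illustration only (not part of the claimed method), the following Python sketch computes the overlapping rectangle of two model extents and tiles it into blocks; the extent values and block size are hypothetical.

```python
# Minimal sketch of step S1: overlap computation and blocking (hypothetical values).

def overlap_rect(ext_a, ext_b):
    """Intersect two (xmin, ymin, xmax, ymax) extents; None if disjoint."""
    xmin, ymin = max(ext_a[0], ext_b[0]), max(ext_a[1], ext_b[1])
    xmax, ymax = min(ext_a[2], ext_b[2]), min(ext_a[3], ext_b[3])
    return (xmin, ymin, xmax, ymax) if xmin < xmax and ymin < ymax else None

def make_blocks(rect, block_w, block_h):
    """Tile the overlap rectangle into block extents for independent processing."""
    xmin, ymin, xmax, ymax = rect
    blocks = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            blocks.append((x, y, min(x + block_w, xmax), min(y + block_h, ymax)))
            x += block_w
        y += block_h
    return blocks

before_extent = (500000.0, 3370000.0, 500800.0, 3370600.0)  # assumed model range
after_extent = (500200.0, 3370100.0, 501000.0, 3370900.0)   # assumed model range
rect = overlap_rect(before_extent, after_extent)
blocks = make_blocks(rect, block_w=200.0, block_h=200.0)
print(len(blocks), "blocks to process")
```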
In the invention, the model resampling in step S2 is divided into texture resampling and elevation resampling. A horizontal sampling grid is generated according to the boundary range of the block, with the grid sizes Δx and Δy set according to the resolution of the model. Given the grid origin (x₀, y₀), the horizontal coordinates of grid point (i, j) are x = x₀ + i·Δx, y = y₀ + j·Δy.
According to the horizontal coordinates of each grid point, the texture color value of the model point at the corresponding position on the model is taken as the z value to generate the digital orthophoto map DOM; then, according to the same horizontal coordinates, the elevation value of the model point at the corresponding position is taken as the z value to generate the digital surface model DSM.
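The sampling just described can be sketched as follows; the sample_color and sample_elevation callbacks stand in for whatever mesh-query routine the model library provides and are assumptions for illustration, as are the dummy samplers in the example.

```python
import numpy as np

# Sketch of step S2: generate the horizontal sampling grid and query the model
# at each grid point. x = x0 + i*dx, y = y0 + j*dy, per the text above.

def resample(block, dx, dy, sample_color, sample_elevation):
    x0, y0, x1, y1 = block
    ni = int(np.ceil((x1 - x0) / dx))
    nj = int(np.ceil((y1 - y0) / dy))
    dom = np.zeros((nj, ni, 3), dtype=np.uint8)   # texture color -> DOM
    dsm = np.zeros((nj, ni), dtype=np.float32)    # elevation -> DSM
    for j in range(nj):
        for i in range(ni):
            x, y = x0 + i * dx, y0 + j * dy
            dom[j, i] = sample_color(x, y)        # texture color at (x, y)
            dsm[j, i] = sample_elevation(x, y)    # model elevation at (x, y)
    return dom, dsm

# Example with dummy samplers (a real system would ray-cast into the mesh):
dom, dsm = resample((0.0, 0.0, 10.0, 10.0), 0.5, 0.5,
                    sample_color=lambda x, y: (128, 128, 128),
                    sample_elevation=lambda x, y: 100.0 + 0.1 * x)
print(dom.shape, dsm.shape)
```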
In the invention, segmenting the DOM and DSM in step S3 to generate the patch object set proceeds as follows. The generated digital orthophoto map is segmented with an efficient graph-based segmentation method, which divides the image into a number of specific regions with distinctive properties; it preserves detail in low-variability regions while ignoring detail in high-variability regions, thereby reducing fine fragments and yielding a good segmentation result. Using the digital surface model DSM, the grid-point elevation values are stretched to the range 0-255 to generate a grayscale image, which is divided into non-overlapping regions by threshold segmentation. Merging the two segmentation results gives the final patch object set.
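The "efficient graph-based segmentation" reads as the Felzenszwalb-Huttenlocher algorithm, so the sketch below uses scikit-image's implementation of it, plus a simple threshold split of the stretched DSM; all input arrays, thresholds, and parameter values are assumed for illustration.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

# Sketch of step S3: graph-based segmentation of the DOM plus threshold
# segmentation of the DSM, merged into one patch label per pixel.

rng = np.random.default_rng(0)
dom = rng.integers(0, 255, size=(64, 64, 3)).astype(np.uint8)      # stand-in DOM
dsm = rng.uniform(100.0, 130.0, size=(64, 64)).astype(np.float32)  # stand-in DSM

# Graph-based segmentation of the orthophoto (Felzenszwalb-Huttenlocher).
dom_labels = felzenszwalb(dom, scale=100, sigma=0.8, min_size=20)

# Stretch DSM elevations to 0-255 and split into bands by thresholding.
gray = ((dsm - dsm.min()) / (dsm.max() - dsm.min() + 1e-9) * 255).astype(np.uint8)
dsm_labels = np.digitize(gray, [64, 128, 192])  # assumed thresholds

# Merge: each unique (DOM label, DSM label) pair becomes one patch object.
merged = dom_labels.astype(np.int64) * (dsm_labels.max() + 1) + dsm_labels
_, patch_ids = np.unique(merged.ravel(), return_inverse=True)
patches = patch_ids.reshape(merged.shape)
print("patch objects:", patches.max() + 1)
```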
In the present invention, judging whether a patch is a changed area in step S4 comprises, for each patch object, computing the mean elevation difference within the patch and setting a threshold: if the mean elevation difference is higher than the threshold, the patch is regarded as a candidate changed area; otherwise it is regarded as unchanged. This generates the initial changed areas.
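A minimal sketch of this decision rule, with the arrays and the 2.0 m threshold assumed for illustration:

```python
import numpy as np

# Sketch of step S4: mean elevation difference per patch vs. a threshold.

def candidate_changes(dsm_before, dsm_after, patches, threshold=2.0):
    """Return the set of patch ids whose mean |dz| exceeds the threshold."""
    dz = np.abs(dsm_after - dsm_before)
    # Mean height difference within each patch object.
    means = {pid: dz[patches == pid].mean() for pid in np.unique(patches)}
    return {pid for pid, m in means.items() if m > threshold}

rng = np.random.default_rng(1)
patches = rng.integers(0, 5, size=(32, 32))
before = rng.uniform(100.0, 101.0, size=(32, 32))
after = before.copy()
after[patches == 3] += 5.0                        # simulate a new building
print(candidate_changes(before, after, patches))  # -> {3}
```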
The method further eliminates pseudo-change areas as follows: 1) unimportant change areas such as vegetation are removed with a vegetation index; the vegetation index EGI = 2G − R − B, or its normalized form nEGI = (2G − R − B)/(2G + R + B), is computed for each patch object from the RGB values of the image, and if the vegetation index both before and after the change exceeds a threshold, the area is judged a pseudo-change; 2) small isolated regions are treated as pseudo-changes, since genuinely changed areas generally have a larger footprint; 3) patches with irregular geometry, such as long, narrow, or strongly concave shapes, are treated as pseudo-change areas. If a patch is confirmed as a changed area, step S8 is executed; otherwise the process ends for that patch.
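A sketch of criterion 1), the vegetation-index test; the 0.10 threshold and the example patch are assumptions for illustration.

```python
import numpy as np

# A candidate patch whose normalized excess-green index is high both before
# and after the change is treated as a vegetation pseudo-change.

def negi(rgb):
    """Normalized excess-green index nEGI = (2G - R - B) / (2G + R + B)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (2 * g - r - b) / (2 * g + r + b + 1e-9)

def is_vegetation_pseudo_change(rgb_before, rgb_after, mask, thr=0.10):
    """True if the patch (pixels where mask is set) is vegetation in both epochs."""
    return bool(negi(rgb_before)[mask].mean() > thr and
                negi(rgb_after)[mask].mean() > thr)

mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True
green = np.zeros((16, 16, 3), dtype=np.uint8)
green[..., 1] = 180  # grass-like patch: strong green channel
print(is_vegetation_pseudo_change(green, green, mask))  # -> True
```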
In the present invention, the classifier in step S7 is generated by a deep-learning method. A deep-learning network is built by gathering many neural units into a layered structure; the simplest network consists of an input layer, an output layer, and one hidden layer, each layer contains several neurons, the neurons of each layer are connected to the neurons of the next layer, and the output of one layer serves as the input of the next. Such a network is also called a fully connected network. Deep learning generally uses a multi-layer neural network composed of three parts: 1) an input layer responsible for data acquisition; 2) a feature-extraction part combining n convolutional layers and pooling layers, i.e. the hidden layers that are invisible from outside; 3) an output layer consisting of a fully connected multi-layer perceptron classifier.
The last layer of a classification model is usually a Softmax regression model. Its principle is to accumulate evidence (features) that an input belongs to a certain class, and then to convert those features into the probability of that class. The features are described as:

features_i = Σ_j W_{i,j} x_j + b_i

where i denotes the i-th class, j denotes the j-th pixel of the image, b_i is the bias (representing the prior tendency of the data itself), W is the weight matrix, and x is the input image data.
Next, softmax is computed over all the features:

softmax(x) = normalize(exp(x))
The probability that the input belongs to the i-th class is then:

softmax(x)_i = exp(x_i) / Σ_j exp(x_j)
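The two formulas above amount to the standard softmax; a numerically stable NumPy sketch (the subtraction of the maximum is an implementation detail, not part of the patent text):

```python
import numpy as np

def softmax(x):
    """softmax(x)_i = exp(x_i) / sum_j exp(x_j), stabilized by shifting by max."""
    z = np.exp(x - np.max(x))
    return z / z.sum()

features = np.array([2.0, 1.0, 0.1])  # W x + b for three classes (example values)
print(softmax(features))              # probabilities summing to 1
```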
In order to train the model, a loss function must be defined to describe the model's classification precision: the smaller the loss, the smaller the deviation of the model's classification results from the true values, i.e. the more accurate the model. For multi-class problems, cross-entropy is usually used as the loss function. It is defined as follows, where y is the predicted probability distribution and y' is the true probability distribution (i.e. the one-hot encoding of the label); it measures how accurately the model estimates the true distribution:

H_{y'}(y) = −Σ_i y'_i log(y_i)
Applying stochastic gradient descent (SGD) to a neural network is the back-propagation algorithm. A common stochastic gradient descent optimizer is used to minimize the loss function: at each iteration the gradient descent method takes the negative gradient direction as the new search direction, so that the objective function decreases step by step, and the most suitable weight parameters of the perceptron are solved from the known inputs (images) and true outputs (label distributions).
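A sketch of one such SGD step for the softmax regression layer, using the standard cross-entropy gradient probs − y'; the shapes and learning rate are illustrative assumptions.

```python
import numpy as np

def sgd_step(W, b, x, y_true, lr=0.1):
    """One SGD update for softmax regression with cross-entropy loss.

    For softmax + cross-entropy, d(loss)/d(features) = probs - y_true,
    so dW = outer(probs - y_true, x) and db = probs - y_true.
    """
    features = W @ x + b
    z = np.exp(features - features.max())
    probs = z / z.sum()
    grad = probs - y_true               # gradient w.r.t. the features
    W -= lr * np.outer(grad, x)         # back-propagate to the weights
    b -= lr * grad
    loss = -np.sum(y_true * np.log(probs + 1e-12))
    return W, b, loss

rng = np.random.default_rng(3)
W, b = rng.normal(size=(4, 8)) * 0.01, np.zeros(4)  # 4 classes, 8-dim input
x, y = rng.normal(size=8), np.eye(4)[2]             # one sample, true class 2
for step in range(100):
    W, b, loss = sgd_step(W, b, x, y)
print("final loss:", round(loss, 4))                # decreases toward 0
```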
Deep learning is thus applied to change detection in three main steps: 1) sample collection: for ground-object types such as roads, buildings, bare land, and vegetation, a number of ground-object patches with different resolutions, viewing angles, and illumination are selected on the map and placed in a sample library; 2) sample training: the classification formula and loss function are defined, an optimization algorithm is chosen, and iterative training is performed, updating the parameters at each iteration to reduce the loss until globally optimal parameters are reached; to better accomplish the task, the method adopts two different network structures, the Google Inception Net V3 structure and the SegNet structure, for image recognition and segmentation respectively; 3) classifier generation: the model parameters output by training are saved so that they can be loaded at prediction time.
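The patent names the Google Inception Net V3 and SegNet structures but gives no training code; the following Keras sketch shows one plausible way to fine-tune an Inception V3 classifier on ground-object patches. The class count, directory layout, hyperparameters, and file name are all assumptions.

```python
# One plausible fine-tuning setup for the Inception V3 classifier named above
# (a sketch under assumed data layout and hyperparameters, not the patent's code).
import tensorflow as tf

NUM_CLASSES = 4  # e.g. road, building, bare land, vegetation (assumed)

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # train only the new classification head first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Inception expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical sample library: one sub-directory per ground-object class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "samples/", image_size=(299, 299), label_mode="categorical", batch_size=32)
model.fit(train_ds, epochs=10)
model.save("change_classifier.keras")  # saved parameters, loaded at prediction
```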
In the invention, predicting the type of the changed ground object in step S8 comprises inputting the ground-object patch into the classifier, which computes and outputs the probability of each class; the class with the highest probability is the class of the changed ground object.
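A matching prediction sketch, loading the classifier saved in the hypothetical training sketch above and taking the argmax of the class probabilities (class names and file name are assumed):

```python
import numpy as np
import tensorflow as tf

CLASSES = ["road", "building", "bare land", "vegetation"]  # assumed order

model = tf.keras.models.load_model("change_classifier.keras")
patch = tf.image.resize(np.zeros((120, 80, 3), np.float32), (299, 299))
probs = model.predict(patch[tf.newaxis])[0]  # probability of each class
print(CLASSES[int(np.argmax(probs))])        # highest probability wins
```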
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (3)

1. A change detection method based on a live-action three-dimensional model, characterized by comprising the following steps:
S1, calculating the overlapping area before and after the change according to the range of the live-action three-dimensional models;
S2, performing texture resampling and elevation resampling on the three-dimensional models to generate a digital orthophoto map DOM and a digital surface model DSM;
S3, rasterizing the DOM and DSM from step S2 and performing object-oriented segmentation to generate a patch object set;
S4, judging, from the elevation change within each patch object, whether the patch is a changed area;
S5, collecting sample data of different ground-object types;
S6, training the samples;
S7, generating a classifier;
S8, inputting the color and elevation information of a patch to obtain the type of the changed ground object;
wherein calculating the overlapping area in step S1 specifically comprises: first reading in the live-action three-dimensional model data before and after the change, calculating the boundary range of the area before and after the change to obtain the overlapping area, then setting a block length and width, re-dividing the overlapping area into blocks, and performing the subsequent processing on each block;
wherein the model resampling in step S2 is divided into texture resampling and elevation resampling: a horizontal sampling grid is generated according to the boundary range of the block, the grid sizes Δx and Δy are set according to the resolution of the model, and given the grid origin (x₀, y₀) the horizontal coordinates of grid point (i, j) are x = x₀ + i·Δx, y = y₀ + j·Δy; according to the horizontal coordinates of a grid point, the texture color value of the model point at the corresponding position on the model is taken as the z value to generate the digital orthophoto map DOM, and the elevation value of the model point at the corresponding position is taken as the z value to generate the digital surface model DSM;
wherein segmenting the DOM and DSM in step S3 and generating the patch object set comprises: segmenting the generated digital orthophoto map with an efficient graph-based segmentation method into a number of specific regions with distinctive properties, preserving detail in low-variability regions while ignoring detail in high-variability regions so as to reduce fine fragments and obtain a good segmentation result; using the digital surface model DSM, stretching the grid-point elevation values to the range 0-255 to generate a grayscale image and dividing it into non-overlapping regions by threshold segmentation; and merging the two segmentation results to obtain the final patch object set;
wherein judging whether a patch is a changed area in step S4 comprises, for each patch object, computing the mean elevation difference within the patch and setting a threshold: if the mean elevation difference is higher than the threshold, the patch is regarded as a candidate changed area, otherwise as unchanged, thereby generating the initial changed areas.

2. The change detection method based on a live-action three-dimensional model according to claim 1, characterized in that the classifier in step S7 is generated by a deep-learning method; a deep-learning network is built by gathering many neural units into a layered structure; the simplest network consists of an input layer, an output layer, and one hidden layer, each layer has several neurons, the neurons of each layer are connected to the neurons of the next layer, and the output of one layer serves as the input of the next; such a network is also called a fully connected network.

3. The change detection method based on a live-action three-dimensional model according to claim 1, characterized in that predicting the type of the changed ground object in step S8 comprises inputting the ground-object patch into the classifier, which computes and outputs the probability of each class, the class with the highest probability being the class of the changed ground object.
CN201910412076.8A 2019-05-17 2019-05-17 Change detection method based on live-action three-dimensional model Active CN110135354B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910412076.8A CN110135354B (en) 2019-05-17 2019-05-17 Change detection method based on live-action three-dimensional model

Publications (2)

Publication Number Publication Date
CN110135354A CN110135354A (en) 2019-08-16
CN110135354B true CN110135354B (en) 2022-03-29

Family

ID=67574999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910412076.8A Active CN110135354B (en) 2019-05-17 2019-05-17 Change detection method based on live-action three-dimensional model

Country Status (1)

Country Link
CN (1) CN110135354B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113064954B (en) * 2020-01-02 2024-03-26 沈阳美行科技股份有限公司 Map data processing method, device, equipment and storage medium
CN111259955B (en) * 2020-01-15 2023-12-08 国家测绘产品质量检验测试中心 Reliable quality inspection method and system for geographical national condition monitoring result
CN113515971A (en) * 2020-04-09 2021-10-19 阿里巴巴集团控股有限公司 Data processing method and system, network system and training method and device thereof
CN111723643B (en) * 2020-04-12 2024-03-01 四川川测研地科技有限公司 Target detection method based on fixed-area periodic image acquisition
CN112149920A (en) * 2020-10-17 2020-12-29 河北省地质环境监测院 Regional geological disaster trend prediction method
CN113515798B (en) * 2021-07-05 2022-08-12 中山大学 A method and device for simulating urban three-dimensional spatial expansion
CN114998662B (en) * 2022-06-24 2024-05-03 四川川测研地科技有限公司 Method for identifying and extracting real-scene three-dimensional geographic information data
CN115482466B (en) * 2022-09-28 2023-04-28 广西壮族自治区自然资源遥感院 Three-dimensional model vegetation area lightweight processing method based on deep learning
CN115861826B (en) * 2023-02-27 2023-05-12 武汉天际航信息科技股份有限公司 Configuration method, computing device and storage medium for model-oriented overlapping area

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10058596A1 (en) * 2000-11-25 2002-06-06 Aventis Pharma Gmbh Method of screening chemical compounds for modulating the interaction of an EVH1 domain or a protein with an EVH1 domain with an EVH1 binding domain or a protein with an EVH1 binding domain, and a method for detecting said interaction
CN103839286A (en) * 2014-03-17 2014-06-04 武汉大学 True-orthophoto optimization sampling method of object semantic constraint
CN104049245A (en) * 2014-06-13 2014-09-17 中原智慧城市设计研究院有限公司 Urban building change detection method based on LiDAR point cloud spatial difference analysis
CN105893972A (en) * 2016-04-08 2016-08-24 深圳市智绘科技有限公司 Automatic illegal building monitoring method based on image and realization system thereof
CN107844802A (en) * 2017-10-19 2018-03-27 中国电建集团成都勘测设计研究院有限公司 Water and soil conservation value method based on unmanned plane low-altitude remote sensing and object oriented classification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"融合数字表面模型的无人机遥感影像城市土地利用分类";宋晓阳 等;《地球信息科学》;20180531;第20卷(第5期);第703-711页 *

Also Published As

Publication number Publication date
CN110135354A (en) 2019-08-16

Similar Documents

Publication Publication Date Title
CN110135354B (en) Change detection method based on live-action three-dimensional model
US10984532B2 (en) Joint deep learning for land cover and land use classification
EP3614308B1 (en) Joint deep learning for land cover and land use classification
CN109685067B (en) A Semantic Image Segmentation Method Based on Region and Deep Residual Networks
CN111027547B (en) Automatic detection method for multi-scale polymorphic target in two-dimensional image
CN113160192B (en) Visual sense-based snow pressing vehicle appearance defect detection method and device under complex background
CN111598174B (en) Model training method and image change analysis method based on semi-supervised adversarial learning
CN108573276B (en) A change detection method based on high-resolution remote sensing images
CN111640125B (en) Aerial photography graph building detection and segmentation method and device based on Mask R-CNN
CN107092870B (en) A kind of high resolution image Semantic features extraction method
CN104915636B (en) Remote sensing image road recognition methods based on multistage frame significant characteristics
CN107067405B (en) Remote sensing image segmentation method based on scale optimization
CN110070091B (en) Semantic segmentation method and system based on dynamic interpolation reconstruction and used for street view understanding
CN108304873A (en) Object detection method based on high-resolution optical satellite remote-sensing image and its system
CN114596500B (en) Remote sensing image semantic segmentation method based on channel-space attention and DeeplabV plus
CN108830870A (en) Satellite image high-precision field boundary extracting method based on Multi-scale model study
CN108960404B (en) Image-based crowd counting method and device
CN103136537B (en) Vehicle type identification method based on support vector machine
CN105427313B (en) SAR image segmentation method based on deconvolution network and adaptive inference network
CN101986348A (en) Visual target identification and tracking method
CN113269224B (en) Scene image classification method, system and storage medium
CN108062575A (en) High-similarity image identification and classification method
CN117437201A (en) A road crack detection method based on improved YOLOv7
CN109635726A (en) A kind of landslide identification method based on the symmetrical multiple dimensioned pond of depth network integration
CN110889840A (en) Validity Detection Method of Gaofen-6 Remote Sensing Satellite Data Oriented to Ground Objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230901

Address after: 430205 room 01, 4 / F, building B2, phase II of financial background service center base construction project, No. 77, Guanggu Avenue, Donghu New Technology Development Zone, Wuhan, Hubei Province

Patentee after: WUHAI DASHI INTELLIGENCE TECHNOLOGY CO.,LTD.

Patentee after: WUHAN University

Address before: 430000 Room 01, 02, 8 Floors, Building B18, Building 2, Financial Background Service Center, 77 Guanggu Avenue, Wuhan Donghu New Technology Development Zone, Hubei Province

Patentee before: WUHAI DASHI INTELLIGENCE TECHNOLOGY CO.,LTD.
