
CN113837134A - A Wetland Vegetation Recognition Method Based on Object-Oriented Deep Learning Model and Transfer Learning - Google Patents

A Wetland Vegetation Recognition Method Based on Object-Oriented Deep Learning Model and Transfer Learning

Info

Publication number
CN113837134A
CN113837134A (application number CN202111152307.XA)
Authority
CN
China
Prior art keywords
training
classification
learning
model
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111152307.XA
Other languages
Chinese (zh)
Inventor
付波霖
刘曼
李雨阳
何宏昌
范冬林
刘立龙
黄良珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Technology
Original Assignee
Guilin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Technology filed Critical Guilin University of Technology
Priority to CN202111152307.XA priority Critical patent/CN113837134A/en
Publication of CN113837134A publication Critical patent/CN113837134A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a wetland vegetation identification method based on an object-oriented deep learning model and transfer learning. The transfer-learning capability of a convolutional neural network is exploited to improve the accuracy and efficiency of wetland vegetation community classification; wetland vegetation is identified across an expanded gradient of spatial resolutions and spectral dimensions of remote sensing images, which solves the problem that a single image cannot accurately identify multiple types of wetland vegetation; and classification accuracy at vegetation boundaries is improved by combining the convolutional neural network model with image segmentation.

Description

Wetland vegetation identification method based on object-oriented deep learning model and transfer learning
Technical Field
The invention relates to a classification algorithm model for wetland vegetation, and in particular to a method for high-accuracy classification of wetland vegetation based on an object-oriented deep learning model and transfer learning. It addresses the complicated model construction, long training time and easily confused vegetation classes of existing wetland vegetation models, and in particular overcomes the low efficiency and low accuracy of wetland vegetation classification.
Background
Wetland vegetation is an important component of a wetland, an important indicator that a wetland ecosystem has matured, and the premise and foundation for maintaining stable wetland ecological functions and promoting a virtuous cycle of the ecological environment. Accurately identifying and monitoring the spatio-temporal distribution of wetland vegetation has important theoretical significance for systematically studying wetland structure and ecological function, and plays a vital role in protecting and rationally developing wetlands.
At present, most vegetation measurement and estimation is realized with machine learning algorithms. For example, patent application 202011322693.8 discloses an arbor biomass estimation method based on unmanned aerial vehicle hyperspectral imagery and machine learning for monitoring the biomass of arbors of the target-area type. First, a hyperspectral image of the terrestrial plants in the target area is acquired with an unmanned aerial vehicle, a digital surface model is built from the image and its elevation information is extracted; spectral information is then extracted from the original imagery, the vegetation classes to be monitored are determined according to the ecological environment of the terrestrial plants, and a quantitative inversion model is trained with a machine learning algorithm that combines the elevation information, characteristic wave bands and vegetation indices of the plants in the target area; the inversion model is used to classify the vegetation types of the target area and extract the arbor classification data; finally, arbor biomass is calculated from the extracted classification data using an above-ground biomass formula.
In the existing scientific literature on wetland vegetation, the most widely used classification approach is intelligent identification with a convolutional neural network algorithm. Such algorithms have deep multilayer structures, end-to-end training and strong generalization ability, and can identify wetland vegetation with relatively high accuracy, but the following problems remain: classifying different remote sensing images with a convolutional neural network requires a large amount of iterative training and therefore a large amount of time; and when classifying wetland vegetation pixel by pixel, a convolutional neural network cannot accurately exploit context information, so vegetation boundaries are easily confused.
Disclosure of Invention
In order to solve the above problems, a primary object of the present invention is to provide a wetland vegetation identification method based on an object-oriented deep learning model and transfer learning. Aiming at the complicated construction, long training time and easily confused vegetation classes of existing wetland vegetation models, and in particular at the low efficiency and low accuracy of wetland vegetation classification, it provides a method for high-accuracy classification of wetland vegetation communities and vegetation boundaries based on an object-oriented deep learning model and transfer learning.
Another object of the present invention is to provide a wetland vegetation identification method based on an object-oriented deep learning model and transfer learning whose classification model can effectively solve the above problems: the transfer-learning capability of the convolutional neural network is used to improve the accuracy and efficiency of wetland vegetation community classification; wetland vegetation is identified across an expanded gradient of spatial resolutions and spectral dimensions of remote sensing images, which solves the problem that a single image cannot accurately identify multiple types of wetland vegetation; and classification accuracy at vegetation boundaries is improved by combining the convolutional neural network model with image segmentation.
A further object of the present invention is to provide a wetland vegetation identification method based on an object-oriented deep learning model and transfer learning that selects the new generation of Chinese high-spatial-resolution Earth observation satellites GF-1, GF-2 and ZY-3 and the international Earth observation satellites Sentinel-2A and Landsat 8 OLI as data sources for high-accuracy, deep-learning-based classification of wetland vegetation. This makes it straightforward to build wetland vegetation data sets suited to deep learning models, improves classification accuracy by expanding the spatial resolution gradient and spectral dimensions of the remote sensing images, and improves classification efficiency by exploiting the transfer-learning capability of the convolutional neural network.
In order to achieve these objects, the invention provides the following technical solution:
A wetland vegetation identification method based on an object-oriented deep learning model and transfer learning comprises the following steps:
Step (1): making deep learning semantic labels from the field-measured data;
Step (2): preprocessing the remote sensing images, which are acquired by Earth observation satellites;
Step (3): matching the data from steps (1) and (2): remote sensing images are matched with label data of the same spatial resolution to produce training samples, which are input together into the classification model to form the training sample set;
Step (4): establishing classification schemes for the multi-source remote sensing images;
remote sensing images with different spatial resolution gradients and spectral dimensions are integrated separately to establish the multi-source remote sensing image classification schemes;
Step (5): applying data augmentation to the training samples;
Step (6): iteratively training a convolutional neural network for each classification scheme;
Step (7): using the training record from step (6) as the training baseline for the other schemes and performing transfer-learning training;
the weight record with the highest training accuracy over the 30 training iterations is selected as the baseline for transfer learning, the images of the other schemes are trained from it, and during training all convolutional layers are fine-tuned at a learning rate 10 times smaller than the default learning rate;
Step (8): classifying and predicting the image corresponding to each scheme, using the weight record with the highest training accuracy from the iterative training;
Step (9): performing multi-scale segmentation of the image;
the segmentation divides the image into objects with relatively uniform characteristics by setting three important parameters (shape/color, compactness/smoothness and scale) and using a bottom-up region-merging technique;
Step (10): fusing the classification result of step (8) with the segmentation result of step (9);
Step (11): constructing evaluation indices for the deep learning model's classification results;
five accuracy indices, namely producer's (mapping) accuracy (PA), user's accuracy (UA), average accuracy (AA, the mean of PA and UA), the Kappa coefficient and overall classification accuracy (OA), are used to verify the model's classification of the vegetation;
Step (12): comparing the classification results of steps (8) and (10) with the field-measured data and evaluating the models according to these indices.
Compared with the prior art, the invention has the following beneficial effects:
According to the method, high-accuracy classification of wetland vegetation is achieved by producing deep learning semantic label data and training a model. By exploiting the advantages of the convolutional neural network, the constructed classification model effectively addresses the efficiency problem of current vegetation classification: the transfer-learning capability improves the accuracy and efficiency of wetland vegetation community classification; wetland vegetation is identified across an expanded gradient of spatial resolutions and spectral dimensions of remote sensing images, which solves the problem that a single image cannot accurately identify multiple types of wetland vegetation; and classification accuracy at vegetation boundaries is improved by combining the convolutional neural network model with image segmentation.
Drawings
FIG. 1 is a flow chart of an implementation of the present invention.
FIG. 2 is a diagram of the wetland vegetation classification result obtained by transfer learning.
FIG. 3 is a diagram of the wetland vegetation classification result fused with the image segmentation algorithm.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments obtained by a person skilled in the art without creative effort fall within the protection scope of the present invention.
FIG. 1 shows the implementation flow of the present invention: iterative training is performed on the corresponding remote sensing images and label data according to each classification scheme, the trained records are used to predict wetland vegetation, and finally the performance of the classification model is evaluated with the constructed evaluation indices.
The various implementation steps are described separately below for reference.
Step (1): making deep learning semantic labels from the field-measured data;
wetland deep learning semantic label data are produced from the field-measured data combined with manual visual interpretation;
Step (2): preprocessing the remote sensing images;
remote sensing images acquired by the Chinese Earth observation satellites GF-1, GF-2 and ZY-3 and the international Earth observation satellites Sentinel-2A and Landsat 8 OLI are preprocessed (radiometric calibration, atmospheric correction, clipping, geographic registration and so on) with software such as ArcGIS 10.6, ENVI 5.3 and SNAP;
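As an illustration of this preprocessing step, the following minimal Python sketch clips an already-corrected image to the study-area boundary; the patent itself performs these operations in ArcGIS 10.6, ENVI 5.3 and SNAP, and the file names used here are placeholders rather than data from the patent.

    import fiona
    import rasterio
    from rasterio.mask import mask

    # Read the study-area polygon(s) used to clip the corrected satellite scene.
    with fiona.open("study_area.shp") as shp:
        shapes = [feature["geometry"] for feature in shp]

    # Clip the radiometrically/atmospherically corrected image to the wetland boundary.
    with rasterio.open("GF2_corrected.tif") as src:
        clipped, transform = mask(src, shapes, crop=True)
        meta = src.meta.copy()
        meta.update(height=clipped.shape[1], width=clipped.shape[2], transform=transform)

    with rasterio.open("GF2_clipped.tif", "w", **meta) as dst:
        dst.write(clipped)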
Step (3): matching the data from steps (1) and (2) to form the training sample set;
remote sensing images are matched with label data of the same spatial resolution to produce training samples, which are input into the classification model;
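A minimal sketch of this matching step is given below; it assumes the image and label rasters are GeoTIFFs on the same grid, and the helper name load_pair is illustrative rather than taken from the patent.

    import numpy as np
    import rasterio

    def load_pair(image_path, label_path):
        """Return one (image, label) training pair after checking that the grids match."""
        with rasterio.open(image_path) as img, rasterio.open(label_path) as lab:
            # Matching requires identical spatial resolution and extent.
            assert img.res == lab.res and img.bounds == lab.bounds, "image/label grids differ"
            x = img.read()     # (bands, H, W) reflectance values
            y = lab.read(1)    # (H, W) integer class codes from the semantic labels
        return np.moveaxis(x, 0, -1), y   # (H, W, bands), (H, W)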
Step (4): establishing classification schemes for the multi-source remote sensing images;
remote sensing images with different spatial resolution gradients and spectral dimensions are integrated separately to establish the multi-source remote sensing image classification schemes;
Step (5): applying data augmentation to the training samples;
to increase the number of samples, the images and label data are divided into 256 × 256-pixel tiles, and augmentation such as flipping, channel exchange and random rotation is applied during tiling;
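A minimal sketch of the tiling and augmentation in step (5) follows, assuming NumPy arrays such as those returned by the pairing sketch above; the patent specifies 256 × 256-pixel samples with flipping, channel exchange and random rotation but not the exact pipeline, so the probabilities and the helper name tile_and_augment are assumptions.

    import numpy as np

    def tile_and_augment(image, label, size=256, seed=0):
        """Cut (H, W, bands)/(H, W) arrays into size x size tiles and augment them."""
        rng = np.random.default_rng(seed)
        tiles = []
        h, w = label.shape
        for r in range(0, h - size + 1, size):
            for c in range(0, w - size + 1, size):
                x = image[r:r + size, c:c + size].copy()
                y = label[r:r + size, c:c + size].copy()
                if rng.random() < 0.5:                        # random horizontal flip
                    x, y = x[:, ::-1], y[:, ::-1]
                if rng.random() < 0.5:                        # random channel exchange
                    x = x[..., rng.permutation(x.shape[-1])]
                k = int(rng.integers(0, 4))                   # random 90-degree rotation
                x, y = np.rot90(x, k), np.rot90(y, k)
                tiles.append((x, y))
        return tiles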
Step (6): iteratively training a convolutional neural network for each classification scheme;
to achieve stable training accuracy, 30 training iterations are performed for each scheme, with the model optimizer set to Adam, the initial learning rate set to 0.001, the momentum parameter set to 0.8 and the loss function set to categorical_crossentropy;
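A minimal Keras sketch of this training set-up follows. The patent names Adam, a 0.001 initial learning rate, a 0.8 momentum parameter and the categorical_crossentropy loss; mapping the momentum parameter onto Adam's beta_1, reading the 30 iterations as epochs, and the build_cnn placeholder are assumptions, not details from the patent.

    import tensorflow as tf

    def compile_model(model: tf.keras.Model) -> tf.keras.Model:
        # Optimizer and loss as named in step (6); beta_1 stands in for "momentum" (assumption).
        optimizer = tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.8)
        model.compile(optimizer=optimizer,
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # Example usage for one classification scheme, keeping the best weight record:
    # model = compile_model(build_cnn(input_shape=(256, 256, n_bands), n_classes=n_classes))
    # ckpt = tf.keras.callbacks.ModelCheckpoint("best_weights.h5", monitor="accuracy",
    #                                           save_best_only=True)
    # model.fit(train_x, train_y, epochs=30, callbacks=[ckpt])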
Step (7): using the training record from step (6) as the training baseline for the other schemes and performing transfer-learning training;
the weight record with the highest training accuracy over the 30 training iterations is selected as the baseline for transfer learning, the images of the other schemes are trained from it, and during training all convolutional layers are fine-tuned at a learning rate 10 times smaller than the default learning rate;
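A minimal Keras sketch of this transfer-learning step follows: the best weight record is loaded and the convolutional layers are fine-tuned at one tenth of the default learning rate. Freezing the non-convolutional layers is an assumption about how "all convolutional layers are fine-tuned" is realised, not something stated in the patent.

    import tensorflow as tf

    def fine_tune(model: tf.keras.Model, weights_path: str, base_lr: float = 0.001):
        model.load_weights(weights_path)          # highest-accuracy record from step (6)
        for layer in model.layers:
            # Only convolutional layers stay trainable during fine-tuning (assumption).
            layer.trainable = isinstance(layer, tf.keras.layers.Conv2D)
        model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=base_lr / 10),
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        return model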
Step (8): classifying and predicting the image corresponding to each scheme;
the weight record with the highest training accuracy from the iterative training is used for classification prediction;
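A minimal sketch of tile-wise prediction follows, assuming a semantic-segmentation CNN that outputs per-pixel class probabilities for 256 × 256 tiles; overlap handling and edge padding are omitted for brevity.

    import numpy as np

    def predict_map(model, image, size=256):
        """Predict a class map for an (H, W, bands) image tile by tile."""
        h, w, _ = image.shape
        out = np.zeros((h, w), dtype=np.uint8)
        for r in range(0, h - size + 1, size):
            for c in range(0, w - size + 1, size):
                tile = image[r:r + size, c:c + size][np.newaxis]   # (1, size, size, bands)
                probs = model.predict(tile, verbose=0)[0]          # (size, size, n_classes)
                out[r:r + size, c:c + size] = np.argmax(probs, axis=-1)
        return out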
Step (9): performing multi-scale segmentation of the image;
multi-scale segmentation is performed with eCognition Developer 9.4 software: by setting three important parameters (shape/color, compactness/smoothness and scale) and using a bottom-up region-merging technique, the image is segmented into objects with relatively uniform characteristics;
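eCognition's multiresolution segmentation is proprietary, so the sketch below uses scikit-image's graph-based Felzenszwalb segmentation only as a loosely analogous, open-source stand-in whose scale parameter plays a role similar to the scale parameter above; it is an illustration, not the algorithm actually used in step (9).

    from skimage.segmentation import felzenszwalb

    def segment(image, scale=100.0, sigma=0.8, min_size=50):
        """Segment an (H, W, 3) band composite into an integer object-id map (H, W)."""
        return felzenszwalb(image, scale=scale, sigma=sigma, min_size=min_size)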
Step (10): fusing the classification result of step (8) with the segmentation result of step (9);
the classification result of step (8) is combined with the multi-scale segmentation result of step (9) using an area optimization method;
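Reading the area optimization as a per-object majority vote, a minimal sketch of the fusion is given below; this interpretation of the patent's wording is an assumption.

    import numpy as np

    def fuse(class_map, object_map):
        """Replace pixel-wise CNN classes by the majority class within each segmentation object.

        class_map: (H, W) non-negative integer class codes; object_map: (H, W) object ids.
        """
        fused = class_map.copy()
        for obj_id in np.unique(object_map):
            member = object_map == obj_id
            majority = np.bincount(class_map[member]).argmax()   # most frequent class in the object
            fused[member] = majority
        return fused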
Step (11): constructing evaluation indices for the deep learning model's classification results;
five accuracy indices, namely producer's (mapping) accuracy (PA), user's accuracy (UA), average accuracy (AA, the mean of PA and UA), the Kappa coefficient and overall classification accuracy (OA), are used to verify the model's classification of the vegetation;
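The five indices can be computed from a confusion matrix; a minimal sketch follows, with rows taken as reference (field) classes and columns as predicted classes.

    import numpy as np

    def accuracy_indices(C):
        """Compute PA, UA, AA, OA and Kappa from confusion matrix C (rows = reference, cols = predicted)."""
        total = C.sum()
        diag = np.diag(C)
        pa = diag / C.sum(axis=1)                 # producer's (mapping) accuracy per class
        ua = diag / C.sum(axis=0)                 # user's accuracy per class
        aa = (pa + ua) / 2                        # average accuracy, the mean of PA and UA
        oa = diag.sum() / total                   # overall classification accuracy
        pe = (C.sum(axis=0) * C.sum(axis=1)).sum() / total ** 2
        kappa = (oa - pe) / (1 - pe)              # Kappa coefficient
        return {"PA": pa, "UA": ua, "AA": aa, "OA": oa, "Kappa": kappa}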
Step (12): comparing the classification results of steps (8) and (10) with the field-measured data and evaluating the models according to these indices.
The wetland vegetation classification result obtained by transfer learning and the classification result fused with the image segmentation algorithm are shown in FIG. 2 and FIG. 3, respectively.
The method improves the accuracy and efficiency of wetland vegetation community classification by applying the transfer-learning capability of the convolutional neural network; identifies wetland vegetation across an expanded gradient of spatial resolutions and spectral dimensions of remote sensing images, which solves the problem that a single image cannot accurately identify multiple types of wetland vegetation; and improves classification accuracy at vegetation boundaries by combining the convolutional neural network model with image segmentation.
In summary, the advantages of the present invention are as follows:
1. the transfer-learning capability improves the accuracy and efficiency of wetland vegetation community classification;
2. wetland vegetation is identified across an expanded gradient of spatial resolutions and spectral dimensions of remote sensing images, which solves the problem that a single image cannot accurately identify multiple types of wetland vegetation;
3. classification accuracy at vegetation boundaries is improved by combining the convolutional neural network model with image segmentation.
The above description covers only the preferred embodiment of the present invention, and the scope of the present invention is not limited thereto; any modification or substitution based on the technical solutions and inventive concept of the present invention that a person skilled in the art could readily conceive of within the technical scope disclosed herein shall fall within the protection scope of the present invention.

Claims (7)

1. A wetland vegetation identification method based on an object-oriented deep learning model and transfer learning, characterized in that the method comprises the following steps:
Step (1): making deep learning semantic labels from the field-measured data;
Step (2): preprocessing the remote sensing images;
Step (3): matching the data from steps (1) and (2) to form a training sample set;
Step (4): establishing classification schemes for the multi-source remote sensing images;
Step (5): applying data augmentation to the training samples;
Step (6): iteratively training a convolutional neural network for each classification scheme;
Step (7): using the training record from step (6) as the training baseline for the other schemes and performing transfer-learning training;
Step (8): classifying and predicting the image corresponding to each scheme;
Step (9): performing multi-scale segmentation of the image;
Step (10): fusing the classification result of step (8) with the segmentation result of step (9);
Step (11): constructing evaluation indices for the deep learning model's classification results;
Step (12): comparing the classification results of steps (8) and (10) with the measured data and evaluating the models according to the evaluation indices.
2. The wetland vegetation identification method based on an object-oriented deep learning model and transfer learning according to claim 1, characterized in that in step (3), matching the data from steps (1) and (2) means matching remote sensing images with label data of the same spatial resolution to produce training samples, which are input together into the classification model to form the training sample set.
3. The wetland vegetation identification method based on an object-oriented deep learning model and transfer learning according to claim 1, characterized in that in step (4), establishing the multi-source remote sensing image classification schemes means integrating remote sensing images with different spatial resolution gradients and spectral dimensions separately to establish the multi-source remote sensing image classification schemes.
4. The wetland vegetation identification method based on an object-oriented deep learning model and transfer learning according to claim 1, characterized in that in step (6), 30 training iterations are performed for each scheme, with the model optimizer set to Adam, the initial learning rate set to 0.001, the momentum parameter set to 0.8 and the loss function set to categorical_crossentropy.
5. The wetland vegetation identification method based on an object-oriented deep learning model and transfer learning according to claim 4, characterized in that in step (7), transfer-learning training is performed by selecting the weight record with the highest training accuracy over the 30 training iterations as the baseline for transfer learning, training the images of the other schemes from it, and fine-tuning all convolutional layers during training at a learning rate 10 times smaller than the default learning rate.
6. The wetland vegetation identification method based on an object-oriented deep learning model and transfer learning according to claim 1, characterized in that in step (9), the segmentation is multi-scale segmentation of the image, in which the image is segmented into objects with relatively uniform characteristics by setting three important parameters (shape/color, compactness/smoothness and scale) and using a bottom-up region-merging technique.
7. The wetland vegetation identification method based on an object-oriented deep learning model and transfer learning according to claim 1, characterized in that in step (11), five accuracy indices, namely producer's (mapping) accuracy, user's accuracy, average accuracy, the Kappa coefficient and overall classification accuracy, are used to verify the model's classification of the vegetation.
CN202111152307.XA 2021-09-29 2021-09-29 A Wetland Vegetation Recognition Method Based on Object-Oriented Deep Learning Model and Transfer Learning Pending CN113837134A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111152307.XA CN113837134A (en) 2021-09-29 2021-09-29 A Wetland Vegetation Recognition Method Based on Object-Oriented Deep Learning Model and Transfer Learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111152307.XA CN113837134A (en) 2021-09-29 2021-09-29 A Wetland Vegetation Recognition Method Based on Object-Oriented Deep Learning Model and Transfer Learning

Publications (1)

Publication Number Publication Date
CN113837134A true CN113837134A (en) 2021-12-24

Family

ID=78967400

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111152307.XA Pending CN113837134A (en) 2021-09-29 2021-09-29 A Wetland Vegetation Recognition Method Based on Object-Oriented Deep Learning Model and Transfer Learning

Country Status (1)

Country Link
CN (1) CN113837134A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170249746A1 (en) * 2015-10-23 2017-08-31 International Business Machines Corporation Imaging segmentation using multi-scale machine learning approach
CN110853026A (en) * 2019-11-16 2020-02-28 四创科技有限公司 Remote sensing image change detection method integrating deep learning and region segmentation
CN111652193A (en) * 2020-07-08 2020-09-11 中南林业科技大学 Wetland classification method based on multi-source imagery

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Teng Wenxiu et al., "Tree species classification from high-resolution imagery combining object-oriented analysis and deep features", Bulletin of Surveying and Mapping (《测绘通报》), no. 4, pp. 38-42 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114529721A (en) * 2022-02-08 2022-05-24 山东浪潮科学研究院有限公司 Urban remote sensing image vegetation coverage identification method based on deep learning
CN114529721B (en) * 2022-02-08 2024-05-10 山东浪潮科学研究院有限公司 Urban remote sensing image vegetation coverage recognition method based on deep learning
CN115965812A (en) * 2022-12-13 2023-04-14 桂林理工大学 Evaluation method of wetland vegetation species and ground features classification by UAV images
CN115965812B (en) * 2022-12-13 2024-01-19 桂林理工大学 Assessment method of wetland vegetation species and surface object classification using drone images
CN116168290A (en) * 2022-12-28 2023-05-26 二十一世纪空间技术应用股份有限公司 Method and device for classifying arbor and shrub in remote sensing image
CN116168290B (en) * 2022-12-28 2023-08-08 二十一世纪空间技术应用股份有限公司 Arbor-shrub grass classification method based on high-resolution remote sensing image and three-dimensional data
CN116433596A (en) * 2023-03-07 2023-07-14 南京林业大学 Slope vegetation coverage measuring method and device and related components
CN116433596B (en) * 2023-03-07 2025-07-25 南京林业大学 Slope vegetation coverage measuring method and device and related components
CN116863243A (en) * 2023-07-26 2023-10-10 航天恒星科技有限公司 A method, electronic equipment and storage medium for identifying wetland ecological drought conditions based on multi-source data
CN117611909A (en) * 2023-12-04 2024-02-27 桂林理工大学 A wetland vegetation classification method based on deep learning and image spatial resolution

Similar Documents

Publication Publication Date Title
CN113837134A (en) A Wetland Vegetation Recognition Method Based on Object-Oriented Deep Learning Model and Transfer Learning
CN118470550B (en) A natural resource asset data collection method and platform
CN114742272A (en) A Soil Cadmium Risk Prediction Method Based on Spatial and Temporal Interaction
CN113449594A (en) Multilayer network combined remote sensing image ground semantic segmentation and area calculation method
CN109409261B (en) A crop classification method and system
CN112347970B (en) Remote sensing image ground object identification method based on graph convolution neural network
CN110929607A (en) Remote sensing identification method and system for urban building construction progress
CN112001121B (en) Large-area gentle region predictive soil mapping method based on solar radiation
CN116403123B (en) Remote sensing image change detection method based on deep convolutional network
CN113936214B (en) Karst wetland vegetation community classification method based on fusion of aerospace remote sensing images
CN109948697B (en) Method for extracting urban built-up area by using multi-source data to assist remote sensing image classification
Zheng et al. Partial domain adaptation for scene classification from remote sensing imagery
CN113469226B (en) A land use classification method and system based on street view images
CN114898089B (en) Functional area extraction and classification method fusing high-resolution images and POI data
CN116630610A (en) ROI Region Extraction Method Based on Semantic Segmentation Model and Conditional Random Field
CN118501878A (en) A real estate surveying method
Yang et al. Extraction of land covers from remote sensing images based on a deep learning model of NDVI-RSU-Net
CN113159154B (en) Crop classification-oriented time sequence feature reconstruction and dynamic identification method
CN115457403A (en) A method for intelligent identification of crops based on multi-type remote sensing images
CN117172255B (en) Geographic entity alignment method, device and electronic equipment considering spatial semantic relationship
CN110533118B (en) Sparse representation classification method for remote sensing images based on multi-kernel learning
CN116704378A (en) Homeland mapping data classification method based on self-growing convolution neural network
Cheng et al. Automated detection of impervious surfaces using night-time light and Landsat images based on an iterative classification framework
Yu et al. Mapping the planting area of winter wheat at 10-m resolution using sentinel-2 data and multimodel fusion method
Yang et al. Fast processing method of high resolution remote sensing image based on decision tree classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination