
CN110263717B - A land-use category determination method incorporating street view imagery - Google Patents

A land-use category determination method incorporating street view imagery

Info

Publication number
CN110263717B
CN110263717B
Authority
CN
China
Prior art keywords
street view
image
category
land
sampling point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910539888.9A
Other languages
Chinese (zh)
Other versions
CN110263717A
Inventor
Ge Yong (葛咏)
Zhao Weiheng (赵维恒)
Jia Yuanxin (贾远信)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Geographic Sciences and Natural Resources of CAS
Original Assignee
Institute of Geographic Sciences and Natural Resources of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Geographic Sciences and Natural Resources of CAS filed Critical Institute of Geographic Sciences and Natural Resources of CAS
Priority to CN201910539888.9A priority Critical patent/CN110263717B/en
Publication of CN110263717A publication Critical patent/CN110263717A/en
Application granted granted Critical
Publication of CN110263717B publication Critical patent/CN110263717B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10: Services
    • G06Q50/16: Real estate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/62: Text, e.g. of license plates, overlay texts or captions on TV images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Human Resources & Organizations (AREA)
  • General Health & Medical Sciences (AREA)
  • Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Strategic Management (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a land use category determination method that incorporates street view imagery. First, fine-grained ground-object category information at street view sampling points is extracted with a deep-learning convolutional neural network; at the same time, the remote sensing image is preprocessed and a land cover map is obtained by supervised classification. Next, the categories of neighboring pixels are inferred from the spectral, texture, shape, and geographic-distribution information of the pixel containing each street view sampling point. Finally, the pixel classification information is fused with the land cover map to obtain a fine-grained, multi-category land use result. Starting from pixel-based remote sensing classification and combining it with fine-grained street view information, the method achieves high classification accuracy.


Description

A land-use category determination method incorporating street view imagery
Technical Field
The invention relates to a method for determining land utilization categories, and belongs to the field of geospatial information.
Background
Land use refers to all purposeful human activities that develop and exploit land resources, and land use information plays a vital role in urban planning, urban environmental monitoring, urban transportation, and related fields (Liu X, He J, Yao Y, et al.). How to extract more accurate land use information is therefore an important problem and challenge. As a wide-area, non-contact data source, remote sensing imagery is an important means of extracting land use information (Zhao Yingshi). Using high-resolution remote sensing images acquired by resource satellites, together with professional remote sensing image processing software and scientific information-extraction methods, the corresponding resource information can be obtained quickly and accurately.
However, the accuracy of remote sensing imagery itself imposes limits: medium- and low-spatial-resolution imagery offers high spectral and temporal resolution but low spatial resolution and weak ground-object discrimination, while high-spatial-resolution imagery discriminates ground objects well but has a short historical archive and poorer spectral resolution. It is therefore difficult to extract high-precision ground-object information from remote sensing imagery alone, and many researchers have used auxiliary geospatial information to improve land use classification accuracy. In work fusing remote sensing imagery with geospatial data, Hu et al. used open social data to segment remote sensing images into block areas and kernel density estimation to infer ground-object distribution, improving land use classification accuracy (Hu T, Yang J, Li X, et al.; Zhang Y, Li Q, Huang H, et al.); Chen et al. used socially sensed data, improving urban land use mapping by extracting urban green-space functional areas with the help of the spatial and textual information in such data (Chen W, Huang H, Dong J, et al.) and by using mobile phone location data (Jia Y, Ge Y, et al.).
Street view imagery records the appearance of a city at a given moment in digital image form. Because urban street view data carries geographic tags and is openly available, this finely detailed and evenly distributed body of geo-tagged image data has become a research hotspot in GIS and computer vision (Betula grahami).
In existing research on land use classification that incorporates street view imagery, the street or block scale has been the main focus: remote sensing images are divided into parcels using road network data and the parcels are classified from the distribution density of street view points, most commonly with the kernel density method, so that spatial analysis of street view sampling points assists remote sensing land use classification (Rui Cao, Jia Song, et al.; Jian Kang, Marco Körner, et al.). However, these methods are limited to block-scale units and do not use individual pixel information to classify land use; in areas where block sample points are sparse or unevenly distributed, kernel density regression can introduce large errors.
The invention therefore proposes a pixel-based method for fusing street view information into land use category determination, which remedies these shortcomings: it uses a small number of fine-grained samples to infer many more sample points and obtains a finer, pixel-based land use classification result.
Summary of the Invention
The technical problem solved by the invention is as follows: overcoming the defects of the prior art, a land use category determination method incorporating street view imagery is provided; a theoretical model of street-view-based land use classification is constructed, land use is classified at the fine-category level, and land use classification accuracy is improved.
First, for each collected street view picture, the fine ground-object type it represents is extracted by deep learning. Second, the remote sensing image is preprocessed, land cover is classified, and the region is divided into several broad classes according to a first-level classification system. Then the street view category information is integrated into land use classification: a window is established centered on each sampling point, and pixels similar to the sample point are marked using features such as pixel spectrum, texture, and shape; this procedure is repeated from each newly marked pixel until enough classification category information is obtained. Finally, the land use situation is classified.
The technical scheme of the invention is as follows: a land use category determining method for integrating street view images comprises the following steps:
step 1: processing the street view image, editing road vector data of a street view image acquisition area, extracting the road vector data with the street view image, and randomly generating a street view image sampling point on a road vector of the street view image acquisition area through GIS software; obtaining street view images of each sampling point position according to the sampling point spatial positions of the street view images, and then carrying out preprocessing operation on the obtained street view images to obtain the range of the bottom area of the street view images;
step 2: semantic segmentation and category judgment are carried out on the street view images in the bottom area range obtained by the preprocessing in the step 1 by utilizing a convolutional neural network, and the categories of ground objects in the street view images in all directions are judged for the street view images on the left side and the right side of the road direction where the sampling point is located through the convolutional neural network; judging the road category in the street view image by the convolutional neural network for the street view image of the road direction where the sampling point is located to obtain a convolutional neural network street view image category judgment result, endowing the judgment result to the street view image sampling point, and carrying out spatial position transition on the street view image sampling point endowed with the category to ensure that the street view image sampling point effectively represents the pixel characteristic of the real position of the street view image sampling point;
and step 3: in order to judge the land cover category of the street view image acquisition sampling point region, a remote sensing image of the street view image acquisition region needs to be acquired and preprocessed, and relevant auxiliary information, including NDVI (Normalized Difference Vegetation Index), NDWI (Normalized Difference Water Index), and DEM (Digital Elevation Model) data, is added for classifying the land cover of the street view image acquisition region;
and 4, step 4: the remote sensing image of the street view image acquisition area is subjected to land cover classification; ground objects are divided into 5 categories (vegetation, construction land, water, unused land, and cultivated land) by supervised classification of the remote sensing image to obtain a remote sensing land cover classification result, and post-classification processing is applied to the result to improve land cover classification accuracy;
and 5: and (4) combining the remote sensing image land cover classification result and the convolutional neural network street view image category judgment result, and adding the convolutional neural network street view image category judgment result to the land utilization classification model respectively according to the construction land category and the vegetation category in the remote sensing land cover classification result obtained in the step (4) so as to obtain a more refined land utilization category.
In step 2, the spatial position push of the sampling points is implemented as follows: the sampling points for the left and right sides are each displaced by h meters perpendicular to the road, and the longitude, latitude, and ground-object type of each pushed sample point are recorded, where h is the moving distance, whose specific value is determined by the resolution of the remote sensing image and the street view image; the center coordinates of the pushed ground object are:
X′=x+h*cosθ
Y′=y+h*sinθ
wherein x is the longitude of the sampling point position, y is the latitude of the sampling point position, θ is the angle between the road containing the sampling point and the horizontal, and h is the push length.
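The push formulas above can be sketched in Python. This is a minimal illustration: the function name is ours, and we assume a projected, meter-based coordinate system, since the patent does not specify how geographic longitude/latitude values are converted to meters.

```python
import math

def push_sample_point(x, y, theta_deg, h):
    """Shift a road-center sampling point using the patent's formulas
    X' = x + h*cos(theta), Y' = y + h*sin(theta), where theta is the
    angle between the road and the horizontal. Pass +h and -h to obtain
    the two opposite (left/right) displacements."""
    theta = math.radians(theta_deg)
    return x + h * math.cos(theta), y + h * math.sin(theta)

# Example: a road at 0 degrees, pushed 5 m in each direction
left = push_sample_point(100.0, 200.0, 0.0, 5.0)    # (105.0, 200.0)
right = push_sample_point(100.0, 200.0, 0.0, -5.0)  # (95.0, 200.0)
```

In a real workflow the returned coordinates would be written back to the GIS sample layer along with the ground-object type, as described in the step above.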
In the step 5, the land use classification model is specifically realized as follows:
(1) setting a sampling window, carrying out pixel-by-pixel search on the whole remote sensing image, and judging whether each pixel neighborhood contains a streetscape sampling point;
(2) if the street view sampling points are included, judging the distance weight of each street view sampling point to the central pixel by an inverse distance weight method, wherein the definition of the inverse distance weight method is as follows:
W_i = (1 / h_i^P) / Σ_{j=1..n} (1 / h_j^P)
wherein W_i represents the distance weight of the i-th street view sampling point, n represents the number of street view sample points within the search window, h_i represents the distance from the i-th point to the central pixel, and P represents a parameter controlling how the weight changes with distance;
(3) calculating, layer by layer, the absolute value of the difference between the attribute value of the central pixel and the attribute value of the pixel containing the street view sampling point, and averaging these absolute differences over the layers:
M_i = (z / m) * Σ_{k=1..m} |a_k - a_{k,i}|
wherein M_i represents the averaged difference between the central pixel and the i-th sampling pixel over the layers, m represents the number of layers (the 3 spectral layers plus the texture-feature and shape-feature layers), a_k and a_{k,i} are the layer-k attribute values of the central pixel and the sampling pixel, and z is an amplification factor;
and finally, multiplying the M value of each sampling point by its corresponding weight and accumulating the proportion per class:
C_i = Σ_{l=1..L} M_l * W_l
wherein C_i represents the coefficient of class-i ground objects for the central pixel, and L represents the number of class-i ground objects among the n samples in the pixel neighborhood; comparing the C_i values of each class for the pixel, the class with the largest C_i value is the pixel's land use category. The land classification model algorithm is repeated until all pixels have obtained a unique land use category.
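The neighborhood model of steps (1)-(3) can be sketched compactly as follows. All names are illustrative, and we follow the patent's literal decision rule of selecting the class with the largest C_i; how the difference measure M should be read as a similarity is left open by the text, so this is a sketch of the stated formulas rather than a definitive implementation.

```python
import numpy as np

def idw_weights(dists, p=2.0):
    """Inverse-distance weights: W_i = (1/h_i^p) / sum_j (1/h_j^p)."""
    inv = 1.0 / np.power(np.asarray(dists, dtype=float), p)
    return inv / inv.sum()

def classify_center_pixel(center_attrs, sample_attrs, dists, labels, z=1.0, p=2.0):
    """For one search window: M_i is z times the mean absolute layer-wise
    difference between the central pixel and sample pixel i; class scores
    C_i accumulate M_i * W_i over the samples of each class, and the class
    with the largest C_i is returned (the patent's stated decision rule)."""
    w = idw_weights(dists, p)
    diffs = np.abs(np.asarray(sample_attrs, dtype=float) - np.asarray(center_attrs, dtype=float))
    m = z * diffs.mean(axis=1)          # one M value per neighborhood sample
    scores = {}
    for label, mi, wi in zip(labels, m, w):
        scores[label] = scores.get(label, 0.0) + mi * wi
    return max(scores, key=scores.get)
```

In practice this function would be applied inside the pixel-by-pixel window search, with the spectral, texture, and shape layers stacked as the attribute vector of each pixel.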
In the step 2: the specific process of performing semantic segmentation and category judgment on the street view image by using the convolutional neural network is as follows:
(1) performing semantic segmentation on the street view image: pixels are segmented according to the different semantics expressed in the image to obtain labels for the different ground objects, and large, clearly visible regions of interest are retained as labels for category judgment;
(2) judging the street view image category: a convolutional neural network is adopted as the street view image classification model; an equal number of street view images is selected for each category according to the chosen classification system, the model is trained with the region-of-interest labels obtained in the previous step, and the trained model parameters are then used to obtain the category attributes of the remaining photos.
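As an illustration of how a per-pixel segmentation label map can be collapsed into a single image-level category (a hypothetical helper for preparing training labels, not something the patent specifies; the ignored-label set is our assumption):

```python
from collections import Counter

def dominant_label(label_mask, ignore=("void",)):
    """Return the most frequent semantic label in a 2-D mask of per-pixel
    class names, skipping labels listed in `ignore`; this mimics reducing
    a region-of-interest segmentation to one category per image."""
    counts = Counter(
        lab for row in label_mask for lab in row if lab not in ignore
    )
    return counts.most_common(1)[0][0]

mask = [
    ["road", "road", "house"],
    ["road", "void", "house"],
]
dominant_label(mask)  # "road"
```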
Compared with the prior art, the invention has the advantages that:
(1) By using street view imagery, the land use category can be judged more accurately from a human observation perspective, and sample selection is more accurate than in the traditional supervised classification method.
(2) Land use classification is performed on pixel attribute features, so the resulting classification is finer and more accurate than block-scale land use classification.
Drawings
FIG. 1 is a main flow diagram of the present invention;
FIG. 2 is a schematic diagram of classification of pixel land utilization integrated with street view spatial information according to the present invention, which can improve pixel classification accuracy.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and examples.
As shown in fig. 1, a method for determining a land use category fused with street view information according to the present invention includes the steps of:
step 1: streetscape information extraction
Selecting street view sampling points: the road vector data is edited and the road vectors that have street view imagery are extracted; sampling points are randomly generated on the roads with GIS software, and, because image quality depends on the road orientation at each sampling point, the viewing direction is computed for each point, giving four directions of 0°, 90°, 180°, and 270°. According to the sampling point position, one street view image on each of the left and right sides of the road is selected as a ground-object discrimination sample, and one image along the road is selected as a road discrimination sample.
Preprocessing street view images: street view images are acquired at the sampling points, 3 frames per point. To reduce the effect on classification accuracy of distant objects whose distance cannot be estimated, the images on the left and right sides of the road are cropped to the 320 x 320 pixel region at the lower part of the image near the road; all images are then semantically annotated, delineating the ground-object semantics to be extracted, such as the outlines of roads and houses.
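The bottom-region crop can be sketched as follows (a minimal NumPy illustration; the bottom-center placement is our assumption, since the text only specifies a 320 x 320 crop of the lower part of the image):

```python
import numpy as np

def crop_lower_region(img, size=320):
    """Crop a size x size patch from the bottom center of an (H, W, C)
    street view frame, clamping when the frame is smaller than `size`."""
    h, w = img.shape[:2]
    top = max(h - size, 0)
    left = max((w - size) // 2, 0)
    return img[top:top + size, left:left + size]

frame = np.zeros((600, 800, 3), dtype=np.uint8)
patch = crop_lower_region(frame)  # shape (320, 320, 3)
```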
And performing semantic segmentation and classification judgment on the labeled tag files through a convolutional neural network, classifying the left and right pictures into fine ground object types, and using the middle road part for classifying road classes.
Because the acquired sampling point coordinates all lie on the road centerline, the left- and right-side sampling points must be displaced in order to represent the ground-object categories on the two sides of the road accurately. The distance from the lower-middle area of the image to the road is estimated at roughly 5 m, so the sampling points are pushed 5 m to the left and right, perpendicular to the road, and the longitude, latitude, and ground-object type of each pushed sample point are recorded.
X′=x+h*cosθ
Y′=y+h*sinθ
Wherein the theta value is the included angle between the road where the sampling point is located and the horizontal line, and h is the push length.
Step 2: remote sensing land cover classification
Remote sensing image preprocessing: remote sensing images of the study area are acquired and preprocessed (radiometric calibration, atmospheric correction, and geometric correction); the NDVI, NDWI, and NDBI values of the imagery are computed, and elevation and slope information is derived from DEM data of the study area. The spectral information and these additional layers are fused for image classification.
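The auxiliary indices named above all share the normalized-difference form; the band pairings below are the conventional ones (NDVI from NIR/Red, NDWI from Green/NIR, NDBI from SWIR/NIR) rather than anything the patent specifies:

```python
import numpy as np

def normalized_difference(a, b):
    """Compute (a - b) / (a + b) per pixel, returning 0 where a + b == 0."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    denom = a + b
    out = np.zeros_like(denom)
    np.divide(a - b, denom, out=out, where=denom != 0)
    return out

nir, red, green, swir = (np.array([0.6]), np.array([0.2]),
                         np.array([0.3]), np.array([0.5]))
ndvi = normalized_difference(nir, red)    # (0.6-0.2)/(0.6+0.2) = 0.5
ndwi = normalized_difference(green, nir)  # negative over vegetation
ndbi = normalized_difference(swir, nir)   # positive over built-up areas
```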
The land cover types are divided into 5 categories (vegetation, construction land, water, unused land, and cultivated land) in software, and post-classification processing is applied to the result to improve accuracy. The classification accuracy is preliminarily evaluated with validation samples.
And step 3: street view information-integrated land utilization classification
70% of the street view sample points are selected as training samples. For vegetation and construction land, the fine classification information extracted from the street view imagery is added to the land cover classification result to obtain finer land use categories. Among the road categories extracted from street view imagery, cement and asphalt roads are added to the construction-land cover category for refined classification, and dirt roads are added to the unused-land cover category. Finally, the classification results are fused to obtain the land use map.
The model is provided with a sampling window; a pixel-by-pixel search is performed over the whole remote sensing image to judge whether each pixel's neighborhood contains street view sampling points.
And if the street view sampling points are included, judging the distance weight of each street view sampling point to the central pixel by an inverse distance weight method. The definition of the inverse distance weight method is:
W_i = (1 / h_i^P) / Σ_{j=1..n} (1 / h_j^P)
wherein W_i represents the distance weight of the i-th street view sampling point, n represents the number of street view sample points within the search window, h_i represents the distance from the i-th point to the central pixel, and P represents a parameter controlling how the weight changes with distance.
And calculating the absolute value of the difference value between the attribute value of the central pixel of each layer and the attribute value of the pixel where the street view sampling point is located one by one, and averaging the absolute values of the difference values of the plurality of layers.
M_i = (z / m) * Σ_{k=1..m} |a_k - a_{k,i}|
wherein M_i represents the averaged difference between the central pixel and the i-th sampling pixel over the layers, m represents the number of layers (the 3 spectral layers plus the texture-feature and shape-feature layers), a_k and a_{k,i} are the layer-k attribute values of the central pixel and the sampling pixel, and z is the amplification factor.
Finally, the M value of each sampling point is multiplied by its corresponding weight and accumulated per class:
C_i = Σ_{l=1..L} M_l * W_l
wherein C_i represents the coefficient of class-i ground objects for the central pixel, and L represents the number of class-i ground objects among the n samples in the pixel neighborhood; comparing the C_i values of each class for the pixel, the class with the largest C_i value is the pixel's land use category. The land classification model algorithm is repeated until all pixels have obtained a unique land use category.
As shown in fig. 2, the central pixel is a pixel whose category is to be determined, surrounded by ground-object category points obtained from street view picture classification. The attribute-layer values of the central pixel and of the classified street view points are compared in turn by the method above, and the land use category of the central pixel is determined from these differences combined with the distance weights.

Claims (3)

1. A land use category determination method fused into street view images is characterized by comprising the following steps:
step 1: processing the street view image, editing road vector data of a street view image acquisition area, extracting the road vector data with the street view image, and randomly generating a street view image sampling point on a road vector of the street view image acquisition area through GIS software; obtaining street view images of each sampling point position according to the sampling point spatial positions of the street view images, and then carrying out preprocessing operation on the obtained street view images to obtain the street view images in the bottom area range;
step 2: semantic segmentation and category judgment are carried out on the street view images in the bottom area range obtained by the preprocessing in the step 1 by utilizing a convolutional neural network, and the categories of ground objects in the street view images in all directions are judged for the street view images on the left side and the right side of the road direction where the sampling point is located through the convolutional neural network; judging the road category in the street view image by the convolutional neural network for the street view image of the road direction where the sampling point is located to obtain a convolutional neural network street view image category judgment result, endowing the judgment result to the street view image sampling point, and carrying out spatial position transition on the street view image sampling point endowed with the category to ensure that the street view image sampling point effectively represents the pixel characteristic of the real position of the street view image sampling point;
and step 3: in order to judge the land cover category of the street view image acquisition sampling point region, a remote sensing image of the street view image acquisition region needs to be acquired and preprocessed, and relevant auxiliary information, including NDVI (Normalized Difference Vegetation Index), NDWI (Normalized Difference Water Index), and DEM (Digital Elevation Model) data, is added for classifying the land cover of the street view image acquisition region;
and 4, step 4: the remote sensing image of the street view image acquisition area is subjected to land cover classification; ground objects are divided into 5 categories (vegetation, construction land, water, unused land, and cultivated land) by supervised classification of the remote sensing image to obtain a remote sensing land cover classification result, and post-classification processing is applied to the result to improve land cover classification accuracy;
and 5: combining the remote sensing image land cover classification result and the convolutional neural network street view image category judgment result, and adding the convolutional neural network street view image category judgment result to a land utilization classification model respectively according to the construction land category and the vegetation category in the remote sensing land cover classification result obtained in the step 4 so as to obtain a more refined land utilization category;
in the step 5, the land use classification model is specifically realized as follows:
(1) setting a sampling window, carrying out pixel-by-pixel search on the whole remote sensing image, and judging whether each pixel neighborhood contains a streetscape sampling point;
(2) if the street view sampling points are included, judging the distance weight of each street view sampling point to the central pixel by an inverse distance weight method, wherein the definition of the inverse distance weight method is as follows:
W_i = (1 / h_i^P) / Σ_{j=1..n} (1 / h_j^P)
wherein W_i represents the distance weight of the i-th street view sampling point, n represents the number of street view sample points within the search window, h_i represents the distance from the i-th point to the central pixel, and P represents a parameter controlling how the weight changes with distance;
(3) calculating the absolute value of the difference value between the attribute value of the central pixel of each layer and the attribute value of the pixel where the street view sampling point is located one by one, and averaging the absolute values of the difference values of the plurality of layers:
Figure FDA0002920516240000021
wherein M_i represents the averaged absolute difference between the central pixel and the pixel of the ith sampling point, m represents the number of image layers, comprising 3 spectral layers together with the texture-feature and shape-feature layers, a_k and b_{k,i} are the attribute values of the central pixel and of the sampling-point pixel in layer k, and z is an amplification factor;
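A minimal sketch of the layer-difference computation in step (3), assuming the per-layer attribute values are given as plain lists (the example numbers are invented):

```python
# Average the absolute per-layer differences between the central pixel and a
# sampling-point pixel, scaled by the amplification factor z:
#   M_i = (z/m) * sum_k |a_k - b_k|  over the m image layers.
def layer_difference(center_values, sample_values, z=1.0):
    """Return M_i for one sampling point given m per-layer attribute values."""
    m = len(center_values)  # e.g. 3 spectral layers + texture + shape
    return z / m * sum(abs(a - b) for a, b in zip(center_values, sample_values))

# Example with 5 layers (3 spectral, 1 texture, 1 shape):
M = layer_difference([0.2, 0.4, 0.6, 0.1, 0.9], [0.3, 0.4, 0.5, 0.2, 0.7], z=1.0)
```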
finally, multiply the M value of each sampling point by its corresponding weight and calculate the proportion of each category:
C_i = (Σ_{L=1…L_i} M_L * W_L) / (Σ_{j=1…n} M_j * W_j)
wherein C_i represents the coefficient of the ith-type ground object in the central pixel, and L_i represents the number of ith-type ground objects among the n samples in the pixel neighborhood; the C_i values of all categories are compared for the pixel, and the ground object with the maximum C_i is the pixel's land-use category; the land classification model algorithm is repeated until all pixels have acquired a unique land-use category.
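Putting the pieces together, the per-pixel category decision can be sketched as follows; the (category, M, W) tuples and the function name are assumptions for illustration, not the patent's data structures:

```python
# Per-pixel decision of the land-use classification model: each street view
# sampling point in the window contributes M * W to the coefficient of its own
# ground-object category, the coefficients are normalized into proportions,
# and the category with the largest C_i wins.
def classify_pixel(samples):
    """samples: list of (category, M_value, weight) for points in the window."""
    totals = {}
    for category, m_val, weight in samples:
        totals[category] = totals.get(category, 0.0) + m_val * weight
    grand = sum(totals.values())
    proportions = {c: v / grand for c, v in totals.items()}  # C_i as a share
    # The ground object with the maximum C_i is the pixel's land-use category.
    return max(proportions, key=proportions.get), proportions

# Example: two 'vegetation' points and one 'construction' point in the window.
label, props = classify_pixel([
    ("vegetation", 0.2, 0.5),
    ("vegetation", 0.3, 0.3),
    ("construction", 0.4, 0.2),
])
```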
2. The method for determining land-use category fused with street view imagery according to claim 1, wherein: in step 2, the spatial positions of the sampling points are pushed as follows: the sampling points are displaced to the left and right sides, being pushed h meters to either side perpendicular to the road, and the longitude, latitude and ground-feature type of each pushed sample point are recorded, where h is the moving distance, whose specific value is determined by the resolutions of the remote sensing image and the street view image; the center coordinates of the pushed feature are:
X′=x+h*cosθ
Y′=y+h*sinθ
wherein x is the longitude of the sampling point position, y is the latitude of the sampling point position, θ is the included angle between the road where the sampling point is located and the horizontal, and h is the push length.
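The coordinate push can be sketched directly from the two formulas above; the conversion of h from meters to the coordinate units of the image is omitted here and the example values are invented:

```python
# Offset a sampling point by h along direction θ using the push formulas
#   X' = x + h * cos(θ),  Y' = y + h * sin(θ),
# where θ is the angle between the road and the horizontal.
import math

def push_point(x, y, theta_rad, h):
    """Return the pushed coordinates (X', Y') of a sampling point."""
    return x + h * math.cos(theta_rad), y + h * math.sin(theta_rad)

# Example: push a point 10 units along a road inclined 30 degrees
# to the horizontal.
x2, y2 = push_point(116.0, 40.0, math.radians(30.0), 10.0)
```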
3. The method for determining land-use category fused with street view imagery according to claim 1, wherein: in step 2, the specific process of performing semantic segmentation and category judgment on the street view images with the convolutional neural network is as follows:
(1) semantic segmentation of the street view image: segment the pixels according to the different semantics expressed in the image to obtain labels of the different ground features, and retain the large-extent regions of interest as labels for category judgment;
(2) street view image category judgment: adopt a convolutional neural network as the street view image classification model, select an equal number of street view images for each category according to the established classification system, train the selected model with the region-of-interest labels obtained in the previous step, and obtain the category attributes of the remaining photos with the trained model parameters.
CN201910539888.9A 2019-06-21 2019-06-21 A land-use category determination method incorporating street view imagery Active CN110263717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910539888.9A CN110263717B (en) 2019-06-21 2019-06-21 A land-use category determination method incorporating street view imagery

Publications (2)

Publication Number Publication Date
CN110263717A CN110263717A (en) 2019-09-20
CN110263717B true CN110263717B (en) 2021-04-09

Family

ID=67919959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910539888.9A Active CN110263717B (en) 2019-06-21 2019-06-21 A land-use category determination method incorporating street view imagery

Country Status (1)

Country Link
CN (1) CN110263717B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111091054B (en) * 2019-11-13 2020-11-10 广东国地规划科技股份有限公司 Method, system and device for monitoring land type change and storage medium
CN111177587B (en) * 2019-12-12 2023-05-23 广州地理研究所 A shopping street recommendation method and device
CN111402131B (en) * 2020-03-10 2022-04-01 北京师范大学 Method for acquiring super-resolution land cover classification map based on deep learning
CN111444783B (en) * 2020-03-11 2023-11-28 中科禾信遥感科技(苏州)有限公司 Crop planting plot identification method and device based on pixel statistics
CN111414878B (en) * 2020-03-26 2023-11-03 北京京东智能城市大数据研究院 Social attribute analysis and image processing method and device for land parcels
CN111597377B (en) * 2020-04-08 2021-05-11 广东省国土资源测绘院 Deep learning technology-based field investigation method and system
CN111598048B (en) * 2020-05-31 2021-06-15 中国科学院地理科学与资源研究所 An urban village identification method based on fusion of high-resolution remote sensing images and street view images
CN113469226B (en) * 2021-06-16 2022-09-30 中国地质大学(武汉) A land use classification method and system based on street view images
CN113722530B (en) * 2021-09-08 2023-10-24 云南大学 Fine granularity geographic position positioning method
CN114155433B (en) * 2021-11-30 2022-07-19 北京新兴华安智慧科技有限公司 Illegal land detection method and device, electronic equipment and storage medium
CN114529838B (en) * 2022-04-24 2022-07-15 江西农业大学 Construction method and system of soil nitrogen content inversion model based on convolutional neural network
CN115546643A (en) * 2022-10-14 2022-12-30 辽宁工程技术大学 A multi-classification extraction method for rod-shaped objects based on street view images
CN115761526A (en) * 2022-12-06 2023-03-07 中国科学院地理科学与资源研究所 Method and system for land cover mapping of satellite remote sensing images by fusing sparse photos
CN116129278B (en) * 2023-04-10 2023-06-30 牧马人(山东)勘察测绘集团有限公司 Land utilization classification and identification system based on remote sensing images
CN118378780A (en) * 2024-04-08 2024-07-23 广西壮族自治区自然资源遥感院 Environment comprehensive evaluation method and system based on remote sensing image

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105868533B (en) * 2016-03-23 2018-12-14 四川理工学院 Based on Internet of Things and the integrated perception of 3S technology river basin water environment and application method
CN109657602A (en) * 2018-12-17 2019-04-19 中国地质大学(武汉) Automatic functional region of city method and system based on streetscape data and transfer learning

Also Published As

Publication number Publication date
CN110263717A (en) 2019-09-20

Similar Documents

Publication Publication Date Title
CN110263717B (en) A land-use category determination method incorporating street view imagery
Münzinger et al. Mapping the urban forest in detail: From LiDAR point clouds to 3D tree models
Feng et al. GCN-based pavement crack detection using mobile LiDAR point clouds
Xu et al. Wheat ear counting using K-means clustering segmentation and convolutional neural network
CN107516077B (en) Traffic sign information extraction method based on fusion of laser point cloud and image data
CN110956207B (en) Method for detecting full-element change of optical remote sensing image
CN112766155A (en) Deep learning-based mariculture area extraction method
CN114241326B (en) Progressive intelligent production method and system for ground feature elements of remote sensing images
Ghanea et al. Automatic building extraction in dense urban areas through GeoEye multispectral imagery
CN110008908A (en) A method for extracting grassland fences based on high-resolution remote sensing images
Lauko et al. Local color and morphological image feature based vegetation identification and its application to human environment street view vegetation mapping, or how green is our county?
CN113033516A (en) Object identification statistical method and device, electronic equipment and storage medium
Marvaniya et al. Small, sparse, but substantial: Techniques for segmenting small agricultural fields using sparse ground data
CN101403796A (en) City ground impermeability degree analyzing and drawing method
CN111639672A (en) Deep learning city functional area classification method based on majority voting
CN115953685A (en) Multi-layer multi-scale division agricultural greenhouse type information extraction method and system
CN118608987A (en) A method for extracting ground objects from remote sensing satellite images based on AI
Schiewe Status and future perspectives of the application potential of digital airborne sensor systems
Tran et al. Classification of image matching point clouds over an urban area
CN118298317A (en) Automatic labeling method and system based on online map training set
Zhao et al. Improving object-oriented land use/cover classification from high resolution imagery by spectral similarity-based post-classification
CN112580504B (en) Method and device for tree species classification and counting based on high-resolution satellite remote sensing images
Dunesme et al. Automatic vectorization of fluvial corridor features on historical maps to assess riverscape changes
WO2023116359A1 (en) Method, apparatus and system for classifying green, blue and gray infrastructures, and medium
CN112036246B (en) Construction method of remote sensing image classification model, remote sensing image classification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant