Object Based Technique for Delineating and Mapping 15 Tree Species
using VHR WorldView-2 Imagery
Yaseen T. Mustafa*a, Hindav N. Habeebb
aFaculty of Science, University of Zakho, Kurdistan Region–Iraq;
bDirectorate of Forestry, Duhok, Kurdistan Region–Iraq
ABSTRACT
Monitoring and analyzing forests and trees are required tasks for managing and establishing a good plan for forest sustainability. Achieving such tasks requires collecting information and data on the trees. The fastest and relatively low-cost technique for this is satellite remote sensing. In this study, we propose an approach to identify and map 15 tree species in the Mangish sub-district, Kurdistan Region-Iraq. Image-objects (IOs) were used as the tree species mapping unit, derived using the shadow index, the normalized difference vegetation index, and texture measurements. Four classification methods (Maximum Likelihood, Mahalanobis Distance, Neural Network, and Spectral Angle Mapper) were used to classify IOs using selected IO features derived from WorldView-2 imagery. Results showed that overall accuracy increased by 5-8% using the Neural Network method compared with the other methods, with a Kappa coefficient of 0.69. This technique gives reasonable results for classifying various tree species by applying the Neural Network method with IO techniques to WorldView-2 imagery.
Keywords: Kurdistan Region-Iraq, Remote sensing, Supervised classification, Satellite imagery, Tree species
1. INTRODUCTION
Managing and monitoring forests have been on the agenda of forestry research in recent years, because trees play a major role in CO2 sequestration and contribute to reducing carbon emissions to the atmosphere1. They also serve as a source of aesthetics, fuel wood, natural areas, recreation, timber, and wildlife in infinite combinations2, 3. Hence, it is worthwhile to monitor, manage, analyze, and map trees, which requires collecting the necessary data.
Two ways of collecting tree data can be distinguished: traditional techniques (field survey) and modern techniques (remote sensing observation). Traditional techniques require considerable effort, cost, and time, and some areas are difficult to access. To overcome these limitations, modern techniques such as remote sensing are needed4. Remote sensing technology has become an increasingly important tool for mapping, inventorying, and monitoring forest resources around the world, especially since the availability of very high resolution (VHR) satellite imagery5, 6. In recent years, for example, QuickBird imagery has been utilized for tree mapping using pixel-based image classification methods7, 8. Object-based image analysis (OBIA), however, is preferable to conventional pixel-by-pixel classification for VHR images9, because the high within-class spectral variability of VHR images decreases the accuracy of pixel-based approaches. Moreover, pixel-based approaches ignore the context and the spectral values of adjacent pixels10.
OBIA techniques first use image segmentation to produce image objects (IOs), which are relatively homogeneous regions (e.g., a tree crown); these IOs, rather than pixels, are then used as the classification unit11. Several efforts have been made to map species composition using VHR images such as IKONOS and QuickBird. However, few studies using WorldView-2 (WV2) data have been reported12, 13, and those have mapped no more than 7 tree species11, 13.
In this context, the objective of this work is to delineate and map 15 tree species from WV2 imagery by means of IOs using four classifiers: Maximum Likelihood, Mahalanobis Distance, Neural Network, and Spectral Angle Mapper. The study was applied to a forest area in Duhok, Kurdistan Region-Iraq (Mangish sub-district), where no such research has been carried out yet.
*yaseen.mustafa@uoz.ac; phone 0096475022553; www.uoz.ac
Remote Sensing for Agriculture, Ecosystems, and Hydrology XVI, edited
by Christopher M. U. Neale, Antonino Maltese, Proc. of SPIE Vol. 9239, 92390G
© 2014 SPIE · CCC code: 0277-786X/14/$18 · doi: 10.1117/12.2067280
2. STUDY AREA AND DATA
2.1 Study Area Description
Mangish is located approximately between latitudes 37°07'05″ - 36°55'27″ N and longitudes 42°48'09″ - 43°13'59″ E (Figure 1) and covers about 489.63 km2. The maximum elevation reaches more than 1500 m above sea level in the east, and the lowest elevation is not less than 500 m above sea level in the western part of the study area (Figure 2). The area contains natural and planted trees. The land of Mangish is used for field crops (wheat and barley), while vineyards and orchards cover the foothills. The forest cover consists of different tree species, including: Azarole hawthorn (Crataegus azarolus), Terebinth (Pistacia), Almond (Prunus dulcis), Calabrian pine (Pinus brutia), Canary Islands juniper (Juniperus oxycedrus), Common fig (Ficus carica), Common walnut (Juglans regia), Gall oak (Quercus infectoria), Jerusalem thorn (Paliurus spina-christi), Oriental plane (Platanus orientalis), Silver poplar (Populus euphratica), Valonia oak (Quercus aegilops), White mulberry (Morus), White willow (Salix), and Oleaster (Elaeagnus angustifolia)14.
Figure 1. (a) Map of Iraq, (b) Map of Duhok province, (c) Map of the Mangish
Figure 2. Digital Elevation Model (DEM) image of the study area with 30 m resolution retrieved from15.
2.2 Data description
The study is primarily based on field data and the WV2 data.
2.2.1 Field data: These data include the tree species names and their locations (longitude, latitude, and altitude). Fieldwork was carried out between June 19 and July 20, 2013. Two Differential Global Positioning System (DGPS)
devices (Leica Viva GS15 Smart Rover) were used. Prior to the fieldwork, false color composite WV2 images were brought to the field to directly locate and delineate tree species on the images, for later use in determining training and validation samples (Table 1). Simple random sampling was used owing to its satisfactory results, as reported by Congalton16. Based on their popularity and abundance in the study area, the trees were categorized into two groups: main trees (5 species) and secondary trees (10 species).
Table 1. Training and validation samples of the fifteen (main and secondary) tree species used in this study.

Tree species              Training   Validation
Main trees:
  Pinus brutia                 77           45
  Quercus aegilops            227          110
  Quercus infectoria          114           55
  Pistacia                     67           34
  Juglans regia                44           22
Secondary trees:
  Prunus dulcis                58           29
  Crataegus azarolus           27           14
  Juniperus oxycedrus          45           27
  Platanus orientalis          27           13
  Populus euphratica           35           18
  Salix                        92           45
  Paliurus spina-christi       17            9
  Morus                        61           31
  Ficus carica                 27           15
  Elaeagnus angustifolia       13            7
Total                         931          474
2.2.2 Satellite data: WV2, owned by DigitalGlobe, is the unique satellite offering high spatial resolution with 8 multispectral (MS) bands and one panchromatic (Pan) band. The characteristics of the WV2 imagery used in this study are shown in Table 2. Fourteen cloud-free WV2 scenes covering the study area were acquired from 11 June to 10 July 2011.
Table 2. Characteristics of the WV2 imagery used in this study.

Band   Spectral band   Wavelength (μm)   Band width (μm)   Spatial resolution (m)
1      Coastal         0.40–0.45         0.050             1.84
2      Blue            0.45–0.51         0.060             1.84
3      Green           0.51–0.58         0.070             1.84
4      Yellow          0.59–0.63         0.040             1.84
5      Red             0.63–0.69         0.060             1.84
6      Red Edge        0.71–0.75         0.040             1.84
7      NIR1            0.77–0.90         0.125             1.84
8      NIR2            0.86–1.04         0.128             1.84
Pan    Panchromatic    0.46–0.80         0.350             0.5
3. METHODS
The step-by-step methodology is shown as a flowchart in Figure 3, which consists of four main stages.
[Flowchart panels: Pre-Process; IO Process; Classification Process; Results (Mapping)]
Figure 3. Summary of the research workflow
3.1 Image preprocessing
The processing and manipulation of the satellite data were done using ENVI software (v. 5.0, Exelis Visual Information Solutions, Boulder, CO, USA). The radiometric calibration of WV2 is already provided by DigitalGlobe17. The following steps were implemented in the image preprocessing stage:
3.1.1. Ortho-rectification: Ortho-rectification is the process of removing the distortion within an image caused by terrain relief and the sensor18. This can be done using auxiliary satellite data. For that purpose, a DEM covering the study area was used (Figure 2), obtained from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data15 via the Global Data Explorer19. The ortho-rectification was performed twice, once for the panchromatic band and once for the multispectral bands, using a ready-made tool in ENVI. Moreover, 120 ground control points, collected by DGPS, were used to geometrically correct the ortho-rectified image, which was projected to the Universal Transverse Mercator system (UTM Zone 38N) on the WGS-84 geodetic datum.
3.1.2. Image fusion (Pan-sharpening): Pan-sharpening (PS) is the process of combining lower-resolution imagery (spectral information) with a higher-resolution image (spatial information) to produce a high-resolution spectral image; it has been used to improve the classification of forests20. Several algorithms exist for PS; based on the recommendation of Pu and Landry11, the Gram-Schmidt spectral sharpening algorithm was adopted in this study. As a result, pan-sharpened 0.5 m resolution WV2 images were created by fusing the 1.84 m MS WV2 imagery with the 0.5 m Pan WV2 imagery.
3.1.3. Mosaic: As the study area was covered by 14 WV2 scenes, a mosaic of all the scenes was created to produce a single image representing the study area. Next, the area of interest was extracted from the mosaicked image and the undesirable parts were deleted.
3.2 Image object (IO) processing
3.2.1. Shadow index (SI): VHR satellite imagery offers great detail and information about the land. Shadows, however, may obscure information in the image, leading to corrupted classification results. This is noticeable in forests, where the shade of trees may be counted as another pattern, which in turn affects the classification results. Therefore, it is necessary to remove the shadow, which is achieved by the following proposed equation:

SI = √((256 − NIR) × (256 − Red))    (1)

where NIR and Red are the near-infrared and red reflectance bands, represented by band no. 7 and band no. 5 of WV2, respectively. Generally, the SI value ranges between 0 and 256, with dark (shaded) pixels taking high values. The shadow areas were then masked out from the image.
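The SI masking step can be sketched in plain Python, assuming the proposed index takes the geometric-mean form SI = √((256 − NIR)(256 − Red)) for 8-bit band values, with the threshold of 223 reported later in this study:

```python
import math

def shadow_index(nir, red):
    """Shadow index for one pixel: high values indicate dark (shadowed)
    pixels. nir and red are 8-bit band values in [0, 255]."""
    return math.sqrt((256 - nir) * (256 - red))

def shadow_mask(nir_band, red_band, threshold=223):
    """Binary mask per pixel: 0 = shadow (SI above threshold), 1 = keep."""
    return [0 if shadow_index(n, r) > threshold else 1
            for n, r in zip(nir_band, red_band)]

# A dark pixel (nir=10, red=5) is masked out; a bright one (200, 100) is kept.
mask = shadow_mask([10, 200], [5, 100])
```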
3.2.2. Normalized difference vegetation index (NDVI): The object of interest in this study is the tree (tree crown). Therefore, it was better to remove the ground, including water, soil, and any other non-vegetation objects. In this context, the following equation21 was applied:

NDVI = (NIR − Red) / (NIR + Red)    (2)

where NIR and Red are the near-infrared and red reflectance bands, respectively. Next, the non-vegetation areas were masked out from the images, such that an image with vegetation areas only was created.
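A minimal sketch of the NDVI masking step, using Eq. 2 and the 0.32 threshold reported later in this study:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel (Eq. 2)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def vegetation_mask(nir_band, red_band, threshold=0.32):
    """Binary mask per pixel: 1 = vegetation (NDVI above threshold), 0 = other."""
    return [1 if ndvi(n, r) > threshold else 0
            for n, r in zip(nir_band, red_band)]

# A vegetated pixel (nir=180, red=60) passes; bare soil (120, 100) does not.
mask = vegetation_mask([180, 120], [60, 100])
```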
3.2.3. Texture: One method that helps to interpret and identify forest stands in very high resolution imagery is texture information22. The Grey-Level Co-occurrence Matrix (GLCM)23 is a common algorithm for computing texture measures5, 24. Among the fourteen GLCM texture measures originally proposed by Haralick25, angular second moment, contrast, homogeneity, and variance are the most frequently used texture features for the classification of VHR imagery26. In this study, these four features were calculated for the NIR band using a 9×9 processing window. The NIR band was selected because it contained the greatest range of spectral brightness values. Next, we separated tree canopy from non-tree canopy (shrub, grass/lawn land, etc.) by applying a threshold to the textural feature values. The threshold value was chosen by trial and error. For example, for homogeneity and variance, the threshold was determined by observing a low homogeneity and high variance for tree canopy, versus a high homogeneity and low variance for grass.
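The four GLCM measures can be illustrated in plain Python for a single offset; this is a minimal sketch (horizontal neighbor pairs only, not the full 9×9 windowed computation used in the study):

```python
from collections import Counter

def glcm_features(img):
    """Build a horizontal (offset (0, 1)) grey-level co-occurrence matrix for
    a 2-D image (list of rows of ints) and return four Haralick measures:
    angular second moment, contrast, homogeneity, and variance."""
    pairs = Counter()
    for row in img:
        for a, b in zip(row, row[1:]):
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    p = {k: v / total for k, v in pairs.items()}      # normalized GLCM
    asm = sum(v * v for v in p.values())
    contrast = sum(v * (i - j) ** 2 for (i, j), v in p.items())
    homogeneity = sum(v / (1 + (i - j) ** 2) for (i, j), v in p.items())
    mean = sum(v * i for (i, _), v in p.items())
    variance = sum(v * (i - mean) ** 2 for (i, _), v in p.items())
    return asm, contrast, homogeneity, variance

# A uniform patch (grass-like): maximal ASM/homogeneity, zero contrast/variance.
flat = [[3, 3, 3], [3, 3, 3]]
# A strongly varying patch (canopy-like): lower ASM/homogeneity, higher
# contrast/variance, mirroring the thresholding rule described above.
rough = [[0, 7, 0], [7, 0, 7]]
```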
3.3 Image classification
Image classification is defined as the process of extracting differentiated classes or themes (e.g., tree species) from raw remotely sensed satellite data27. Generally, this technique is categorized into two groups: unsupervised and supervised classification. In this study, using the OBIA method, supervised classification was adopted and investigated with four algorithms (classifiers): Maximum Likelihood (ML), Mahalanobis Distance (MahaD), Spectral Angle Mapper (SAM), and Neural Networks (NN).
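Among the four classifiers, SAM is the simplest to sketch: each IO spectrum is assigned to the class whose reference (mean training) spectrum subtends the smallest spectral angle. The band values below are hypothetical, for illustration only:

```python
import math

def spectral_angle(a, b):
    """Angle in radians between two spectra treated as vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # Clamp against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def sam_classify(spectrum, references):
    """references: {class name: mean training spectrum}.
    Returns the class with the smallest spectral angle."""
    return min(references, key=lambda c: spectral_angle(spectrum, references[c]))

# Hypothetical 4-band mean spectra for two of the study's classes.
refs = {"Pinus brutia": [40, 30, 90, 120],
        "Quercus aegilops": [60, 50, 150, 180]}
label = sam_classify([58, 52, 148, 175], refs)
```

Because SAM compares directions rather than magnitudes, it is insensitive to overall illumination differences between IOs.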
4. RESULTS AND DISCUSSION
4.1 Trees without shadow
Figure 4(c) shows the mask created from the PS image, which has only two values: 0 (black area) and 1 (white area). Figure 4(d) shows the final image after applying the SI mask (i.e., masking out the shaded areas). The threshold value for SI was determined experimentally and chosen to be 223. The very white color attached to the trees in Figure 4(b) represents the shade of the tree crowns; the same area appears in Figure 4(c) as black and was used as the masked area with 0 values. The final output of this process is shown in Figure 4(d), where it is evident that the shaded areas were removed, yielding an image without shadow. It should be mentioned that this process removed not only the shade of the tree crowns but also the shade of other objects, for example rocks. However, this had no major influence on the main target of this study.
Figure 4: A small portion of the study area showing the results of the SI process. (a) image after the PS process; (b) image resulting from applying Eq. 1; (c) mask image created after selecting a threshold; (d) image without shadow, created after applying the mask.
4.2 Identifying and mapping tree species
Figure 5 shows the NDVI map of the whole study area. It appears grey because it consists of one band, created from two WV2 bands (Eq. 2). In this map, the very white color represents vegetation, while dark grey and black represent non-vegetation.
Figure 5: (a) Color composite satellite imagery of the Mangish area; (b) NDVI image of Mangish, made using Eq. 2.
Figure 6 shows the results of this process for a small portion of the study area. The image after shadow removal is shown in Figure 6(a), and the NDVI image calculated from the SI image in Figure 6(b). Meanwhile, Figure 6(c)
shows the mask created from the NDVI image. This image was created after selecting a threshold value for NDVI, which was 0.32 in this study. The final result, shown in Figure 6(d), includes only the vegetation canopy, without urban areas, water, roads, and soil.
Figure 6: A portion of the study area showing the results of the NDVI process. (a) image without shadow; (b) NDVI image; (c) mask image resulting from selecting a threshold; (d) image with vegetation canopy only.
Although NDVI helps to distinguish vegetation from non-vegetation, grass and trees cannot be distinguished, because the NDVI of dense (or healthy) grass may have the same (or a close) value as that of the tree crown, as shown in Figure 6(d). Therefore, the texture features were considered as an additional process to overcome this limitation. Figure 7 shows examples of the four textural features calculated using the GLCM algorithm for the NIR band of the WV2 image. These features contributed to separating tree canopy from non-tree canopy, again by means of a threshold. However, the criterion for selecting a proper threshold depended on a combination of at least two texture measurements. For instance, over tree canopy the values of angular second moment and homogeneity decreased to very low values, while the values of contrast and variance simultaneously increased to very high values. This is noticeable in Figure 8(a*) and (a**) with their attached charts. Next, the grass was masked out from the whole image, keeping the tree canopy. Figure 8(c) shows the final result of this process for a particular portion of the study area: it contains tree canopy only, with no other information around the trees, constituting the IOs that represent the tree crowns. The image then becomes ready for the classification process. Figure 9 shows the IOs of the whole study area.
Figure 7: GLCM textures (a) angular second moment; (b) contrast; (c) homogeneity; (d) variance.
[Bar charts of texture values (ASM, CON, HOM, VAR) for grass (a*) and tree canopy (a**).]
Figure 8: (a) image with tree canopy and non-tree canopy; (a*) example of the non-tree canopy (grass) with its chart; (a**) example of the tree canopy with its chart; (b) original image; (c) final image after the texture process (IO raster).
Figure 9: WV2 image of the study area containing IO (tree crown) only.
4.3 IO classification
Four different classification algorithms (ML, SAM, MahaD, NN) were used to perform digital image classification of the tree species. Each classification was followed by a sequence of training and evaluation, and the results of each classification were evaluated using a confusion matrix. This procedure was implemented and evaluated independently for both groups (main and secondary trees). The best-performing classifier was then selected based on the accuracy criteria resulting from the confusion matrix. The classification accuracy (overall accuracy, OA, and Kappa coefficient, KC) obtained from each classifier is reported in Table 3.
Table 3. Classification accuracy of the four classifiers for main and secondary trees.

Trees       Criteria   ML      MahaD   SAM     NN
Main        OA (%)     73.31   73.63   71.41   77.50
            KC         0.65    0.65    0.62    0.75
Secondary   OA (%)     64.52   41.72   30.46   76.00
            KC         0.52    0.30    0.17    0.63
From Table 3 we notice that the best accuracy for both groups (main and secondary) was achieved by NN, with OA of 77.5% and 76% for the main and secondary trees and KC of 0.75 and 0.63, respectively. Therefore, based on this result, NN was adopted for the further process of creating the tree maps. Figure 10 provides a graphical comparison of the classifiers' accuracy, showing that the NN classifier gives better accuracy for both main and secondary trees.
The producer's accuracy (Prod. Acc.) and user's accuracy (User Acc.) obtained from the confusion matrix of the NN classifier for the main and secondary trees are reported in Table 4. The values of Prod. Acc. and User Acc. varied from species to species. For example, the Prod. Acc. of Pinus brutia was 41.43%, while it was 74.00% for Quercus aegilops. This might be due to the shape and size of the tree leaves (broad vs. needle), which influence the reflectance detected by the satellite sensor. In addition, the low Prod. Acc. of Pinus brutia is due to interference between the reflectance of Pinus brutia and that of some other species with a dark (or near-black) color. However, this value was much lower before removing the tree crown shadow, because the shade of the tree crowns gave reflectance values similar to those of Pinus brutia.
Figure 10. (a) Overall Accuracy (OA, %) and (b) Kappa Coefficient (KC) of the four classifiers for the main and secondary trees.
Table 4. Producer's accuracy (Prod. Acc.) and user's accuracy (User Acc.) of the NN classifier for the main and secondary trees.

Species                  Prod. Acc. (%)   User Acc. (%)
Main:
  Pinus brutia               41.43            78.26
  Quercus aegilops           74.00            63.43
  Quercus infectoria         51.82            49.56
  Pistacia                   88.39            51.79
  Juglans regia              96.50            80.00
Secondary:
  Prunus dulcis              77.59            54.68
  Crataegus azarolus         61.00            53.46
  Juniperus oxycedrus        48.06            100.0
  Platanus orientalis        81.17            58.52
  Populus euphratica         58.90            90.68
  Salix                      81.03            79.22
  Paliurus spina-christi     82.93            46.9
  Morus                      52.17            61.36
  Ficus carica               62.60            65.10
  Elaeagnus angustifolia     98.41            72.30
Furthermore, some other issues require further work. For instance, using a spectroradiometer in the field survey to measure the reflectance of the tree species may help identify which WV2 band (or band combination) best matches each tree species. This may avoid the problem of having two classes for one tree species, which in turn improves the resulting accuracy. Further, other types of classifier, such as Support Vector Machine, might be worth investigating for such a study.
The classification map of the main and secondary tree species resulting from the NN classifier is shown in Figure 11, with some sample locations. Most of the tree species (Pinus brutia, Quercus aegilops, and Morus) in Mangish shown in Figure 11(b), (c), and (d) are classified correctly. We can notice that all other land types (such as buildings and streets) were excluded from the classification process. This indicates that our procedure of identifying the tree crowns, reported in Figure 3,
was successful. Figure 11(b) shows only one species, Quercus aegilops, while mixed species (Quercus aegilops, Pinus brutia, Morus) appear in Figure 11(c) and (d). The area occupied by each tree species was determined and calculated; Table 5 shows the area in hectares of each tree species.
Figure 11: (a) Classification map of all tree species within Mangish; (b) Quercus aegilops species within forest land; (c) Quercus
aegilops, and Pinus brutia species within mixed land type; (d) Quercus aegilops, Pinus brutia, and Morus species within urban land
type.
Table 5: Area in hectares of the (main and secondary) tree species in the study area.

Tree species             Area (ha)
Main trees:
  Pinus brutia             100.39
  Quercus aegilops        1378.83
  Quercus infectoria       454.17
  Pistacia                 731.85
  Juglans regia             27.13
Secondary trees:
  Prunus dulcis             13.40
  Crataegus azarolus         5.90
  Juniperus oxycedrus        0.19
  Platanus orientalis        4.49
  Populus euphratica       188.45
  Salix                     89.24
  Paliurus spina-christi     0.03
  Morus                     99.17
  Ficus carica              40.30
  Elaeagnus angustifolia     0.01
Total                     3133.55
5. CONCLUSION
In this work, 15 tree species were identified and mapped in Mangish, Duhok, Kurdistan Region-Iraq. This was achieved using VHR WV2 imagery and a methodology that served the objective of this study. The study leads to the following conclusions:
1- Fifteen tree species were identified and mapped with satisfactory accuracy within Mangish, Duhok, Kurdistan Region-Iraq.
2- The WV2 sensor has a great impact on improving the accuracy of vegetation classification and tree species recognition.
3- The NN classifier gave the most realistic results among the four supervised classifiers.
4- Given the method and results presented in this study, this approach can be applied to classifying and mapping several different kinds of trees.
REFERENCES
[1] Wamelink, G. W. W., Wieggers, H. J. J., Reinds, G. J. et al., "Modelling impacts of changes in carbon dioxide concentration, climate and nitrogen deposition on carbon sequestration by European forests and forest soils," Forest Ecology and Management, 258(8), 1794-1805 (2009).
[2] Grebner, D. L., Bettinger, P., and Siry, J. P., [Chapter 1 - A brief history of forestry and natural resource management], Academic Press, San Diego (2013).
[3] Yang, J., "Urban forestry in challenging environments," Urban Forestry & Urban Greening, 11(2), 103-104 (2012).
[4] Tasoulas, E., Varras, G., Tsirogiannis, I. et al., "Development of a GIS application for urban forestry management planning," Procedia Technology, 8(0), 70-80 (2013).
[5] Kim, M., Madden, M., and Warner, T. A., "Forest type mapping using object-specific texture measures from multispectral Ikonos imagery: Segmentation quality and image classification issues," Photogrammetric Engineering & Remote Sensing, 75(7), 819-829 (2009).
[6] Czerwinski, C. J., King, D. J., and Mitchell, S. W., "Mapping forest growth and decline in a temperate mixed forest using temporal trend analysis of Landsat imagery, 1987–2010," Remote Sensing of Environment, 141(0), 188-200 (2014).
[7] Salehi, B., Zhang, Y., and Zhong, M., "A combined object- and pixel-based image analysis framework for urban land cover classification of VHR imagery," Photogrammetric Engineering & Remote Sensing, 79(11), 999-1014 (2013).
[8] Arenas-Castro, S., Fernández-Haeger, J., and Jordano-Barbudo, D., "Evaluation and comparison of QuickBird and ADS40-SH52 multispectral imagery for mapping Iberian wild pear trees (Pyrus bourgaeana, Decne) in a Mediterranean mixed forest," Forests, 5(6), 1304-1330 (2014).
[9] Chen, G., Hay, G. J., Carvalho, L. M. T. et al., "Object-based change detection," International Journal of Remote Sensing, 33(14), 4434-4457 (2012).
[10] Dingle Robertson, L., and King, D. J., "Comparison of pixel- and object-based classification in land cover change mapping," International Journal of Remote Sensing, 32(6), 1505-1529 (2011).
[11] Pu, R., and Landry, S., "A comparative analysis of high spatial resolution IKONOS and WorldView-2 imagery for mapping urban tree species," Remote Sensing of Environment, 124(0), 516-533 (2012).
[12] Chen, Q., [Comparison of Worldview-2 and IKONOS-2 imagery for identifying tree species in the habitat of an endangered bird species in Hawaii], DigitalGlobe, Longmont, CO, USA (2011).
[13] Immitzer, M., Atzberger, C., and Koukal, T., "Tree species classification with Random Forest using very high spatial resolution 8-band WorldView-2 satellite data," Remote Sensing, 4(9), 2661-2693 (2012).
[14] Al-doski, H. S., [A study of the quantity of interception and chemistry of gross rainfall, throughfall, stemflow and soil in natural and artificial stands of Zawita Pine], University of Duhok, Duhok (2012).
[15] ASTER-GDEM, [ASTER Global Digital Elevation Model (GDEM)], (2013).
[16] Congalton, R. G., "A comparison of sampling schemes used in generating error matrices for assessing the accuracy of maps generated from remotely sensed data," Photogrammetric Engineering & Remote Sensing, 54, 593-600 (1988).
[17] DigitalGlobe, [The benefits of the 8 spectral bands of WorldView-2], (2009).
[18] EXELIS, [ENVI Classic Orthorectifying Aerial Photographs], (2013a).
[19] USGS, [The Shuttle Radar Topography Mission], (2005).
[20] Kosaka, N., Akiyama, T., Bien, T. et al., "Forest type classification using data fusion of multispectral and panchromatic high-resolution satellite imageries," 4, 2980-2983.
[21] Fontana, F. M. A., Coops, N. C., Khlopenkov, K. V. et al., "Generation of a novel 1 km NDVI data set over Canada, the northern United States, and Greenland based on historical AVHRR data," Remote Sensing of Environment, 121(0), 171-185 (2012).
[22] Lillesand, T. M., Kiefer, R. W., and Chipman, J. W., [Remote sensing and image interpretation], John Wiley & Sons, Hoboken, NJ (2008).
[23] Haralick, R. M., Shanmugam, K., and Dinstein, I. H., "Textural features for image classification," IEEE Transactions on Systems, Man and Cybernetics, SMC-3(6), 610-621 (1973).
[24] Hay, G. J., Niemann, K. O., and McLean, G. F., "An object-specific image texture analysis of H-resolution forest imagery," Remote Sensing of Environment, 55(2), 108-122 (1996).
[25] Haralick, R. M., "Statistical and structural approaches to texture," Proceedings of the IEEE, 67(5), 786-804 (1979).
[26] Salehi, B., Zhang, Y., Zhong, M. et al., "A review of the effectiveness of spatial information used in urban land cover classification of VHR imagery," International Journal of Geoinformatics, 8(2), 16 (2012).
[27] Jensen, J. R., [Introductory digital image processing: a remote sensing perspective], Prentice Hall, Upper Saddle River, N.J. (2005).