
3D reconstruction of under water coral reef images using low cost multi-view cameras
Conference Paper · May 2012
DOI: 10.1109/ICMCS.2012.6320131


3D Reconstruction of Under Water Coral Reef Images
Using Low Cost Multi-View Cameras

Pulung Nurtantio Andono, Eko Mulyanto Yuniarno*, Mochamad Hariadi*, Valentijn Venus**
Dept. of Computer Science, Dian Nuswantoro University, Semarang, Indonesia (e-mail: pulung@dinus.ac.id)
* Dept. of Electrical Eng., Sepuluh Nopember Institute of Technology, Surabaya, Indonesia (e-mail: ekomulyato@gmail.com, mochar@ee.its.ac.id)
** Faculty of Geo-Information Science and Earth Observation (ITC), University of Twente, Enschede, The Netherlands (e-mail: venus@itc.nl)

Abstract- This research describes a 3D reconstruction method for coral reefs using low-cost underwater cameras. We employed a multi-view camera system consisting of 3 identical waterproof cameras arrayed on a stereo base, and collected footage of the seafloor in linear transects. To develop a 3D representation of the seafloor, image pairs were first extracted manually from the video footage. Corresponding points are then automatically extracted from the stereo pairs by the well-known SIFT algorithm, which is invariant to scale, translation, and rotation. Based on the resultant x,y,z point cloud, the 3D appearance of the coral reef is approximated by a triangulation technique utilizing Delaunay triangulation. The experimental results demonstrate robust 3D reconstruction with manual adjustment of the camera A, B or C selection.

Keywords- Underwater Vision, 3D Reconstruction, Multi-view Camera System

I. INTRODUCTION

Underwater ecosystems have drawn international attention because of their effect on environmental changes, which includes the evolution of coral reef environments. Indonesia, as part of the coral-reef triangle, has 18% of the world's coral reefs and is placed as one of the countries where coral reefs are most threatened [1]. The survey result of the Indonesian Institute of Science [2] states that only 5.23% of coral reefs in Indonesia are in good condition.

Karimunjawa is a National Marine Park declared as a Natural Conservation Area by Decree of the Minister of Forestry [3]. This area has a high level of biodiversity that represents the ecosystem of the northern coast of Central Java, Indonesia. It consists of 22 islands and has five types of ecosystems: coral reefs, sea grass beds, mangroves, coastal forests, and lowland tropical rain forests. It is the habitat of typical flora such as Dewadaru (Fragaea fagran), Kalimasada (Cordia subcordata), and Setigi (Pemphis acudula), and of protected wildlife such as the green turtle (Chelonia mydas), Hawksbill (Eretmochelys imbricata), Junai Gold (Caloenas nicobarica), and the nautilus (Nautilus pompillus).

In this research, we propose a technique to create a 3D reconstruction using a low-cost multi-view camera system configured as a stereo camera. From this stereo camera, pairs of stereo images are acquired. Correspondence points between the stereo images are generated using the SIFT image matching algorithm. 3D positions are then determined using a triangulation technique. Afterward, the 3D points are processed with the Delaunay triangulation algorithm to produce a surface reconstruction.

[Fig. 1 Karimunjawa's Coral Reefs]

The remainder of this paper is organized as follows. Section II briefly reviews the state of the art in 3D underwater image reconstruction. Section III describes the fundamentals of our method, and Section IV explains its design. Section V demonstrates detailed experiments. Conclusions and the road ahead are in Sections VI and VII respectively.

II. STATE OF THE ART

In the field of ocean study, 3D reconstruction and measurement of underwater images can be generated by 3D visual observations [4]. Reconstruction of 3D structures is useful in underwater applications. 3D mosaicking is an important tool for exploration, visualization, and underwater navigation, and it can estimate the size of objects of interest such as organisms and structures [5].

978-1-4673-1520-3/12/$31.00 ©2012 IEEE


There have been several works regarding the implementation of 3D reconstruction for underwater use. An early technique for 3D underwater mapping was introduced in [6], which shows a complete stereo-imaging framework for deployment on remotely operated vehicles (ROVs) and proposed online processing of video images with real-time data acquisition for 3D mapping of the seafloor.

[7] describes the development of a sensor capable of generating 3D models of underwater coral reefs, where the experiment uses multi-camera stereo. The sensor is able to estimate volumetric scene structure and its own ego-motion, giving it the capability to collect high-resolution video data, generate accurate 3D models, and estimate the trajectory of the sensor as it travels. [8] presents a combination of techniques from computer vision, photogrammetry, and robotics, covering low-level image processing algorithms and feature extraction up to the transformation between two camera positions. The result was an adapted structure from motion (SFM) technique for reconstruction of 3D structure in underwater conditions; this method has been tested and validated by comparing image-based reconstruction with laser scan measurements in a controlled water tank environment.

Dense underwater 3D reconstruction has been discussed in [4][9]. Dense reconstruction of underwater structures was obtained in [4] using a stereo vision system to get quantitative measurements of the objects in the scene. This method works well when the camera rotations and translations are known. [9] investigated feasibility and limitations considering sea floor imaging conditions. It proposes a structure from motion (SFM) system to compute detailed 3D reconstructions of submerged objects or scenes, presented as a system for 3D reconstruction based on underwater video that requires no special equipment.

[10] proposes a technique for reconstructing underwater structures over large-scale areas; the technique takes stereo image pairs, detects important features, computes 3D points, and estimates the path of the camera poses. The result is an optimized camera trajectory and a 3D point cloud, which in turn is the basis for creating a mesh.

Given the information above, there has been no specific research that reconstructs underwater coral reefs into a 3D surface reconstruction using triangulation of SIFT image matches, where the images are gathered from a multi-view camera system.

III. FUNDAMENTALS

A. Pinhole Camera Geometry
This paper uses the pinhole camera model. The pinhole camera model consists of an image plane and a center of projection. The image plane is the place where the image is reflected, while the center of projection is the place where the rays converge. The distance between the image plane and the center of projection is called the focal length (f).

In Fig. 2, if P is the observed point, P is projected onto the image plane by drawing a line from P to the center of projection. The point of intersection of this line with the image plane produces p, which is the projection of point P onto the image plane.

[Fig. 2 Pinhole Camera Model]

B. Calibration
The world coordinate system is a coordinate system used as a reference for two or more cameras. By using the reference coordinates, the relationship between the camera orientations can be determined.

If the two axis systems (camera and world) in Fig. 3 are attached, the object in the world and its image (in the image plane) form congruent triangles, so the transformation of 3D world coordinates (Xs, Ys, Zs) to the camera coordinates (xi, yi, f) in homogeneous coordinates is specified by the following equation:

    [u]   [f 0 0 0] [Xs]
    [v] = [0 f 0 0] [Ys]        (1)
    [w]   [0 0 1 0] [Zs]
                    [1 ]

where xi = u/w = f Xs/Zs and yi = v/w = f Ys/Zs.

[Fig. 3 The camera coordinate system coincides with the world coordinate system]

In fact, there is a difference between pixel distances along the u and v axes of the camera coordinate system [11]. When the pixel size along the X axis is kx, the pixel size along the Y axis is ky, and the center of the image is located at (x0, y0), the relationship between image coordinates and camera coordinates is specified by the equations:

    xi kx = (xpix - x0),   yi ky = (ypix - y0)        (2)

[Fig. 4 Transformation of distance to pixel]

Given this difference, equation (1) turns into the following transformation:

    [u']   [ax 0  x0 0] [Xs]
    [v'] = [0  ay y0 0] [Ys]        (3)
    [w']   [0  0  1  0] [Zs]
                        [1 ]

with

    ax = f kx,  xpix = u'/w';   ay = f ky,  ypix = v'/w'        (4)

ax and ay are the focal length expressed in pixels, while x0 and y0 are the coordinates of the center of the image, also expressed in pixels.

Another thing to note is the skew factor, which indicates the slope of the image. Fig. 5.b shows a slope caused by a shift along the x axis that is linear in the y axis.

[Fig. 5 a. Normal image. b. Tilted image]

Adding a skew factor, expressed by s, to equation (3) converts it into:

    [u']   [ax s  x0 0] [Xs]
    [v'] = [0  ay y0 0] [Ys]        (5)
    [w']   [0  0  1  0] [Zs]
                        [1 ]

The transformation matrix in equation (5) can be written as:

    [ax s  x0 0]
    [0  ay y0 0] = K [I3 | 03]        (6)
    [0  0  1  0]

K is named the calibration matrix, where

    K = [ax s  x0]
        [0  ay y0]
        [0  0  1 ]

C. SIFT Algorithm
A registration process using intensity-based features is very sensitive to scale and rotation transformations. To overcome this problem, the Scale Invariant Feature Transform (SIFT) method was developed by [12]. The method includes four phases:

1) Scale-space extrema detection
The first stage is to identify all potential keypoints over all scales. The scale space of an image is defined as a function L(x,y,σ), which is the convolution of a Gaussian kernel G(x,y,σ) with the image I(x,y). To find features in the imagery, the Difference of Gaussians (DoG) operator is used by building an octave image pyramid at different scales.

2) Keypoint localization
From the keypoint candidates obtained by scale-space extrema detection, high-stability keypoints are selected. The stability of the emerging features is assessed in each octave.

3) Orientation assignment
An orientation is assigned to each keypoint based on local image gradient directions. Every operation performed on the image is based on the direction, orientation, and location of the keypoint.

4) Keypoint descriptor
The local image gradient is computed at each scale in the region around the keypoint. It is then transformed into a representation robust to local distortions and illumination changes in the area around the keypoint.

D. Delaunay Triangulation
Reconstructed 3D coordinates of features can be obtained using triangulation formulas. The set of reconstructed features is called the 3D points. A number of 3D points are then combined with the Delaunay triangulation algorithm. The circumcircle of a triangle in a Delaunay triangulation of a point set C contains no point of C in its interior [13].

[Fig. 6 (a) Illegal and (b) (c) Legal Delaunay Triangulation]
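To make the projection model concrete, the sketch below builds the calibration matrix K of equation (6) and projects a camera-frame 3D point to pixel coordinates as in equations (5)-(6). The numeric values (focal length in pixels, image center) are illustrative assumptions, not the calibrated parameters of the cameras used in this paper.

```python
# Sketch: pinhole projection with a calibration matrix K (equations (1)-(6)).
# The parameter values below are illustrative, not the paper's calibration.

def project(K, point_w):
    """Project a 3D point (camera frame) to pixel coordinates.

    Computes [u', v', w']^T = K [I3 | 03] [Xs, Ys, Zs, 1]^T and returns
    (u'/w', v'/w'), as in equations (4)-(5).
    """
    Xs, Ys, Zs = point_w
    u = K[0][0] * Xs + K[0][1] * Ys + K[0][2] * Zs  # ax*Xs + s*Ys + x0*Zs
    v = K[1][1] * Ys + K[1][2] * Zs                 # ay*Ys + y0*Zs
    w = Zs                                          # third row of K is [0, 0, 1]
    return (u / w, v / w)

# Assumed intrinsics: ax = ay = 800 px, zero skew, image center (320, 240).
K = [[800.0,   0.0, 320.0],
     [  0.0, 800.0, 240.0],
     [  0.0,   0.0,   1.0]]

x_pix, y_pix = project(K, (1.0, 2.0, 4.0))
print(x_pix, y_pix)  # 520.0 640.0
```

Note that the division by w is what makes distant points project closer to the image center (x0, y0), matching the similar-triangles derivation of equation (1).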
IV. DESIGN AND METHOD

Generally speaking, this paper consists of several stages: data acquisition, camera calibration, image matching, and 3D reconstruction. These stages are explained below.

A. Data Acquisition
Our data was collected at Karimunjawa Island, Central Java, Indonesia. Three Olympus Tough-8010 cameras with a resolution of 1280 x 720 pixels were used to capture the scene. In this research, we installed the three cameras on one stereo frame (Fig. 7). Fig. 8 shows the acquisition of our data.

[Fig. 8 Data Acquisition]

B. Multi-View 3D Reconstruction System
This section discusses the steps of the multi-view 3D reconstruction system (Fig. 9). The steps are explained in the next subsections.

[Fig. 9 3D Reconstruction Model: image preprocessing, image matching using SIFT, outlier removal of the paired points, 3D reconstruction using triangulation, and surface reconstruction using Delaunay triangulation, with camera calibration supplying the internal and external parameters]

1) Camera Calibration
The MATLAB Calibration Toolbox is used to obtain the intrinsic and extrinsic parameters. This paper used a 17.5 x 22.5 cm chessboard to perform camera calibration; the size of each square in the chessboard is 2.5 x 2.5 cm (Fig. 10). The first step of calibration is to take some pictures of the chessboard pattern, and then to search for the chessboard pattern points in the image. By knowing the locations of the points in world coordinates and in pixel coordinates, the matrix that connects the two can be found. This matrix comprises the intrinsic and extrinsic parameters, shown in Fig. 11.

[Fig. 10 Camera Calibration using Checkerboard]
[Fig. 11 Intrinsic and Extrinsic Parameters]

2) Preprocessing
Preprocessing is performed by a human assistant, who selects which stereo images are suitable for 3D reconstruction. This is done to obtain more correspondence points between stereo images and build a more accurate 3D reconstruction.

3) SIFT Keypoint Detection
The SIFT algorithm is employed to perform image matching. Fig. 12 (a) and (b) show the stereo images; the result of image matching is shown in Fig. 12 (c).

4) Outlier Removal
Outlier removal is performed by a human assistant, who discards unsuitable matches between the stereo images. This is done to retain correct correspondence points between stereo images and build a more accurate 3D reconstruction.

5) 3D Reconstruction using Triangulation
The corresponding paired points, together with the internal and external parameters, are used in triangulation to produce 3D points.

6) Surface Reconstruction using Delaunay Triangulation
The 3D coordinate points obtained by the triangulation formulas are then processed with the Delaunay triangulation algorithm to produce the surface reconstruction.
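As a sketch of step 5, the snippet below triangulates one 3D point from a matched pair in the simplified case of two rectified cameras with identical focal length f (in pixels) and a horizontal baseline B: depth follows from the disparity d = uL - uR as Z = f·B/d. This is a minimal illustration under assumed values (f, B, and the pixel coordinates are made up), not the paper's full multi-view procedure.

```python
# Sketch: triangulating a 3D point from a rectified stereo pair.
# Assumes both cameras share focal length f (pixels) and are separated by a
# horizontal baseline B (metres); all numeric values are illustrative.

def triangulate_rectified(uL, uR, v, f, B):
    """Return (X, Y, Z) for a match (uL, v) <-> (uR, v).

    Depth from disparity: Z = f * B / (uL - uR); then back-project with the
    pinhole relations X = uL * Z / f, Y = v * Z / f. Pixel coordinates are
    taken relative to the principal point.
    """
    d = uL - uR                      # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front")
    Z = f * B / d                    # depth along the optical axis
    X = uL * Z / f
    Y = v * Z / f
    return (X, Y, Z)

# Assumed values: f = 800 px, baseline B = 0.08 m, match at
# uL = 100, uR = 60, v = 40 (pixels from the image center).
X, Y, Z = triangulate_rectified(100.0, 60.0, 40.0, 800.0, 0.08)
print(X, Y, Z)
```

The general (non-rectified) case instead intersects the two back-projected rays using the calibrated internal and external parameters, as the paper's triangulation step does.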
V. EXPERIMENT

The experiment used 1350 frames from each of cameras A, B and C. The performance of our model is observed through the number of matched points in each image pair. To perform image matching, image pairs are gathered from each camera A, B and C. The positions of cameras A, B and C are shown in Fig. 13. The difference in camera position may affect the sharpness of the underwater image in each camera: an image acquired from one camera may be sharper than the images acquired from the others, and a sharper image is expected to improve the performance of image matching. We describe the performance of our system by measuring the accuracy of image matching between two consecutive images for each camera. The accuracy is the number of matching points divided by the sum of the number of matching points and outliers, as in (7):

    accuracy(i) = Σ matching points / (Σ matching points + Σ outliers)        (7)

To obtain the overall performance measurement, the average accuracy and average error are measured using formula (8), where N is the number of image pairs:

    avg accuracy = ( Σ_{i=0}^{N} accuracy(i) / N ) x 100%        (8)

Experimental results in Fig. 14 and 15 demonstrate the accuracy for each camera in different scenes. Fig. 14 shows that camera A achieves better accuracy than B or C in scene 1; Fig. 15 shows the opposite in scene 2. Fig. 16 and 17 show the visualization of the 3D reconstruction of scene 1 and scene 2 respectively. The better 3D reconstruction visualizes the raw structure of the coral reefs; accordingly, Fig. 16(a) and 17(c) demonstrate the better 3D reconstructions, from cameras A and C respectively.

[Fig. 12 (a) left camera image, (b) right camera image and (c) SIFT Matching Result]
[Fig. 14 Accuracy and error of image matching, Scene 1 (accuracies 97.00%, 89.40%, 86.10%)]
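The accuracy measure of equations (7) and (8) can be sketched directly; the match and outlier counts below are made-up illustrative numbers, not the paper's measurements.

```python
# Sketch: image-matching accuracy per pair (7) and average accuracy (8).
# The counts below are illustrative, not the paper's measured data.

def accuracy(matches, outliers):
    """Equation (7): matches / (matches + outliers) for one image pair."""
    return matches / (matches + outliers)

def avg_accuracy(pairs):
    """Equation (8): mean per-pair accuracy, expressed as a percentage."""
    accs = [accuracy(m, o) for m, o in pairs]
    return sum(accs) / len(accs) * 100.0

# Hypothetical (matches, outliers) counts for N = 3 image pairs.
pairs = [(90, 10), (80, 20), (95, 5)]
print(avg_accuracy(pairs))  # about 88.33 (percent)
```

The corresponding average error is simply 100% minus the average accuracy, which is how the error bars in Fig. 14 and 15 relate to the accuracy bars.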

[Fig. 13 Camera's position (cameras A, B and C mounted 8 cm and 4 cm apart above the coral reef)]
[Fig. 15 Accuracy and error of image matching, Scene 2 (accuracies 98.00%, 94.00%, 88.00%)]

VI. CONCLUSIONS

We developed a low-cost multi-view camera system for underwater 3D reconstruction of coral reefs. The configuration is built on 3 cameras with the same focal length, configured as stereo cameras. The experiment demonstrated robust 3D reconstruction, in which the accuracy of image matching on underwater images achieved more than 87%.
VII. FUTURE WORKS

To increase the accuracy of the 3D reconstruction, human assistance is currently required in the preprocessing and outlier removal steps. In the future, we will remove the human assistance by using an automatic outlier detection method.

VIII. REFERENCES

[1] C. Beall, B. J. Lawrence, V. Ila, and F. Dellaert, "Reconstruction 3D Underwater Structures," Atlantic, 2010.
[2] Abdullah Habibi, Naneng Setiasih, and Jensi Sartin, "A Decade of Reef Check Monitoring: Indonesian Coral Reefs, Condition and Trends," The Indonesian Reef Check Network, 2007.
[3] G. Diansyah, T. Z. Ulqodry, M. Rasyid, and A. Djawanas, "The Measurements of Calcification Rates in Reef Corals Using Radioisotope 45Ca at Pongok Sea, South Bangka," Atom Indonesia, vol. 37, pp. 11-16, December 2011.
[4] (2009) Ministry of Forestry Republic of Indonesia. [Online]. http://www.dephut.go.id/index.php?g=id/node/1405
[5] V. Brandou et al., "3D reconstruction of natural underwater scenes using the stereovision system IRIS," Aberdeen, 2007.
[6] A. Leone, G. Diarco, and C. Distante, "A Stereo Vision Framework for 3-D Underwater Mosaicking," 2008.
[7] S. Negahdaripour and H. Madjidi, "Stereovision Imaging on Submersible Platforms for 3D Mapping of Benthic Habitats and Sea Floor Structures," IEEE Journal of Oceanic Engineering, vol. 28, no. 4, pp. 625-650, 2003.
[8] A. Hogue and M. Jenkin, "Development of an Underwater Vision Sensor for 3D Reef Mapping," 2006.
[9] O. Pizarro, R. Eustice, and H. Singh, "Large area 3-D reconstructions from underwater optical surveys," vol. 34, no. 2, April 2010.
[10] A. Sedlazeck, C. Albrechts, K. Koser, and R. Koch, "3D reconstruction based on underwater video from ROV KIEL 6000 considering underwater imaging conditions," Bremen, Germany, 2009.
[11] R. Y. Tsai, "A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses," IEEE Journal of Robotics and Automation, 1987.
[12] David G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, 2004.
[13] Øyvind Hjelle and Morten Dæhlen, Triangulations and Applications. Berlin Heidelberg: Springer, 2006.

Fig. 16 Scene no 1, 3D surface reconstruction obtained from (a) camera A, (b) camera B, and (c) camera C

Fig. 17 Scene no 2, 3D surface reconstruction obtained from (a) camera A, (b) camera B, and (c) camera C
