SIRENA: An Open Source, Low Cost Video-Based Coastal Zone Monitoring System

G. A. Zarruk a, A. Orfila a, M. A. Nieto a, B. Garau a,
S. Balle a, G. Simarro c, A. Ortiz b, G. Vizoso a, J. Tintore a

a IMEDEA (CSIC-UIB). 07190, Esporles, Spain
b Dpto. de Matemáticas e Informática. UIB. 07122, Palma de Mallorca, Spain
c ETS Caminos. Univ. Castilla la Mancha. 19071 Ciudad Real, Spain

Abstract

Coastal sciences suffer from a lack of information with appropriate temporal and
spatial resolution. This information is needed to monitor and study the wide range
of processes occurring in coastal areas and subsequently take informed decisions
concerning their management. For many years scientists have used laboratory
experiments or field campaigns to obtain localized information. However, the
numerous variables and processes that influence coastal areas are practically
impossible to observe and measure in this way.
  SIRENA is open source software developed to help fill this information
gap. It was conceived with the objective of video monitoring coastal areas and
obtaining quantitative and qualitative information from the images. It manages
a group of CCD cameras, processes the images, delivers a set of statistical
products (snapshot, mean, variance, and timestack images), and sends them to a
central server through the internet. The open structure of the software allows
users to modify it to fit their personal needs and to develop new applications
or statistical products.
  A prototype system has been working since December 2006 at a pilot site. The
investment on hardware (e.g., cameras, computers, cables) was relatively low. The
statistical products obtained to date are being stored. This represents a large
database advantageous for coastal morphodynamic and hydrodynamic studies. A
sample application is shown in which mean images were used to estimate the
shoreline location at the pilot site. The software and image database can be
obtained on request.

Key words: Coastal monitoring, surf zone, video system, remote sensor

∗ Corresponding author. Tel: +34 971 611 834
  Email address: a.orfila@uib.es (A. Orfila).

Preprint submitted to Elsevier Preprint                             28 January 2008
Software availability

Name of the software: SIRENA
Developer: Miguel Angel Nieto
Contact address: IMEDEA, Miquel Marques, 21, 07190, Esporles, Mallorca,
  Spain. Tel: +(34) 971 611 834. Fax: +(34) 971 611 761.
Email: manieto@gmail.com
  a.orfila@uib.es
Year first available: 2006
Hardware required: CCD cameras (with FireWire connection), internet
  router, 2 computers (one with a FireWire card), ethernet cable
Software requirements: GNU/Linux (the software was tested on openSUSE 10.1
  or higher), C++ compiler, MATLAB R2007a (for postprocessing)
Availability: Freeware, on request.
Program language: C++
Program size: 8 MB (compressed 2 MB).

1   Introduction

Coastal areas are among the most complex, variable, and fragile marine sys-
tems, since their dynamics are subject to the effects of complex geometry
forced at all boundaries (i.e., surface, bottom, lateral, and internal) by many
processes at a wide range of temporal and spatial scales. Physical forcing such
as waves, currents, wind, and tides are the essential driving forces in coastal
dynamics. However, many other factors play a role in the evolution of the
coastal surf zone. The particularities of these areas, where many spatial and
temporal scales are present, make observation and continuous monitoring of
coastal variability very complex, expensive, and sometimes impossible.
However, comprehensive information on coastal areas is required in order
to establish efficient coastal zone monitoring and management programs and
effectively study these marine systems. The scarcity, and in most cases the
lack, of information becomes a problem when users and governments need to
assess the current state of coastal zones and the possible scenarios after
intervention. Although many efforts have been made to provide information on
coastal zone processes, only recently, with the development of new
technologies, has it become possible to obtain information with appropriate
temporal and spatial resolution.

Several experiments and coastal monitoring schemes have been developed and
carried out in the last decade. The most complex ones are field based coastal
and oceanographic facilities. These have a broad capacity to produce spatial
and temporal measurements of physical and environmental variables. A few
laboratories have been established around the world in recent years. Among
them, two examples are the Proudman Oceanographic Laboratory in Liver-
pool, UK, and the Martha’s Vineyard Coastal Observatory in Massachusetts,
USA (Proctor et al., 2004; Petit et al., 2001). Unfortunately, these facilities
require a large economic investment and relatively long installation and
set-up times before producing meaningful data sets. Besides, long-term main-
tenance and sustainability become a liability if funds are not readily available.

Another alternative to measure coastal processes is based on remote sensors.
In this way, information can be acquired automatically, continuously, and pe-
riodically from high resolution digital images. The quality of remotely sensed
information depends on the image resolution and quality; images are affected
by adverse weather conditions and the accuracy of measurements can be lower
than traditional techniques. However, it is an alternative that requires sig-
nificantly fewer human, economic, and computational resources, hence allowing
better continuity and frequency in data acquisition. Among optical remote
sensors, fixed digital video cameras are an attractive alternative for coastal
monitoring. Video-based coastal surf zone monitoring systems are low cost,
can be implemented in coastal areas, and permit the estimation of several
littoral processes from surface signatures on the image.

A broad variety of coastal processes can be monitored using remote techniques.
For instance, researchers have used video images to study several littoral pro-
cesses like sand bar morphology (Lippmann and Holman, 1989), near shore hy-
drodynamics (Lippmann and Holman, 1991; Chickadel et al., 2003), beach and
near-shore bathymetry extraction (Stockdon and Holman, 2000; Aarninkhof
et al., 2003). Recently, with the commercialization of ARGUS (Holman and
Stanley, 2007), a video based coastal monitoring system developed by the
Coastal Imaging Lab at Oregon State University, other applications of video
systems have been developed. Examples of these are applications to coastal
zone management (Koningsveld et al., 2007; Turner and Anderson, 2007; Smit
et al., 2007), management of navigational channels (Medina et al., 2007), and
beach recreation indicators (Jiménez et al., 2007). The capabilities and func-
tionality of video-based coastal monitoring systems are evolving rapidly, more
so with the CoastView project (Davidson et al., 2007). Researchers as well
as private and governmental organizations are finding new ways to employ
this type of monitoring technology.

This paper presents, contrary to the previously mentioned solutions, a low
cost, open source video-based surf zone monitoring system that aims to provide
a platform capable of delivering long term data for coastal management and
monitoring. Besides, the whole system follows the COTS (Commercial Off-The-
Shelf) philosophy, which will have a positive impact on its reproducibility
and maintainability.

The paper is structured as follows. First, the hardware and software
architecture are described. The different stages of image processing will be detailed,
from the pre-processing in the remote station to the post-processing in the cen-
tral server, as well as camera calibration for geographical referencing. Then,
a pilot study area will be presented along with some results obtained since
the installation of the system. Finally, a discussion of the system and its
results concludes the paper.

2     Description of the System

The video-based coastal zone monitoring system presented here, denominated
SIRENA (from its acronym in Spanish), is intended to be a low cost, auto-
mated, remote monitoring tool. Standard Commercial Off-The-Shelf (COTS)
components coupled with open technologies were applied to the image
acquisition process, the generation of statistical products, the transfer of
information between the field site and the data processing center, the
postprocessing algorithms, and the presentation of results to the general
public. Below is a description of the hardware, software, and products
obtained with the system.

2.1    Hardware

The SIRENA prototype system is composed of two nodes: the remote station
and the central server. The technological requirements of each node differ
drastically due to their distinct functions. At the remote station, the
important factors are processor performance and the I/O system, since it must
acquire and pre-process the video images in real time. The central server
stores the statistical products delivered by the remote station and applies
post-processing algorithms to them. Therefore, storage capacity and computa-
tional efficiency are its main characteristics.

All components of the remote station are heavily protected against harsh
weather conditions and salinity of the environment following the IP66 standard
(ingress protection standard against dust and water set by the International
Electrotechnical Commission). The remote station is composed of a computer,
an IEEE 1394 FireWire card, three CCD cameras (the system is capable of
handling seven cameras) connected to the FireWire card, and a UPS (Uninter-
ruptible Power Supply). The cameras (The Imaging Source, DFK 31BF03)
work with an IIDC/DCAM protocol, currently a popular standard in real time
digital image transmission. This protocol facilitates the camera configuration
and control, and the transmission of digital images; it defines the transference
of images from one camera at a time and without compression, facilitating
the development and operation of the pre-processing software. Images are
acquired at 7.5 fps (frames per second), high enough to observe relevant
coastal phenomena, but the frequency can be increased up to 30 fps using a PCI
Express FireWire card. Images are delivered in RAW format, reducing the bus
traffic down to 1/3 and allowing the use of more cameras or using a simpler
FireWire card. Each camera is fitted with a CS optical mount with a CS/C
conversion ring that permits adaptation to any lens complying with either
of these two standards. These types of mounts are common in computer-based
vision. Thus, a wide variety of lenses are available and can be selected
according to the needs of each application of the tool.

The remote station is equipped with a monitor, keyboard, and mouse. These,
although rarely used, facilitate direct maintainer interaction with the remote
station. However, control, supervision, and software maintenance of the remote
station are done through a remote connection (e.g., ssh, ftp) from the central
server. It is also possible to access the remote station from any computer
if permission is granted by the system administrator. The central server is
located at the IMEDEA headquarters and is equipped with large secondary
storage capacity and an optical disk read/write unit. Direct utilization of
the server by users is not frequent, but access to the database through the
intranet and internet is very common.

The data stored in the central server is the main product of the system. There-
fore, a two-level system to protect the information was arranged. The first
level stores the recently acquired information, at least that acquired within
the last two years. The main characteristics of this level are: high storage capacity, high
tolerance for failure, and rapid access to the information. This was achieved
with a RAID 1 disk system (with mirroring) which adapts very well to the
system requirements and is a simple and portable solution. The second level
is a backup periodically done that recurrently incorporates the data from the
first level. This level should have storage capacity that can be easily increased,
be highly reliable, and have a low cost, completely ignoring access speed. The
solution adopted for this level was a periodic double copy of the information
on DVDs.

Both computers (remote and central) work under GNU/Linux (openSUSE
10.1). This is a robust operating system, highly functional, and easy to install
and manage. Furthermore, it suits the philosophy of the system very well,
considering that it has a standard, open architecture and is freely distributed.

2.2   Acquisition and pre-processing software

The acquisition and pre-processing software was developed using open archi-
tecture standards and components. The software is written in C++ language,
complies with POSIX (Portable Operating System Interface, software stan-
dard compatible with UNIX environments), and all the libraries used have
GPL (General Public License) or LGPL (Lesser General Public License). This
guarantees continuity on the development of the software by other users and its
availability to interested parties. The acquisition and pre-processing software
is freely available on request.

The software is installed at the remote station and autonomously and auto-
matically performs the following tasks: managing the CCD cameras; planning
and running image acquisition; processing image series and generating
statistical products; storing statistical products and transferring them to
the central server; and periodically notifying the central server of its
status, logging the results of main tasks and any errors that occur, so the
remote system can be maintained promptly.

During the development of the software, two critical aspects were taken into
account. First, the large amount of images produced by the cameras can rapidly
increase the memory required and complicate their management in real time.
Second, the management of the system and the software itself, which must make
it possible to configure, control, monitor, and update the remote station from
the central server or any other external work station.

To address the first critical aspect, the large number of images, a pre-processing
task is executed. This aims to drastically reduce the data volume to be sent
and stored, while minimizing the loss of information. For that purpose, four
types of statistical products are generated: a mean image, a variance image,
timestacks, and a snapshot. These products, and some of the information that
can be extracted from them, have been widely reported (Holland et al., 1997;
Holman and Stanley, 2007).

From the experience gained during the set-up of the system, it became clear
that an important goal of the pre-processing software should be to optimize
the use of the capturing platform's resources. Hence, several measures and
computational techniques were adopted in the software. Below is a description
of them and of the software and system management strategies.

2.2.1   RGB mode vs. RAW mode

Commercial cameras normally have one CCD. These devices are “color blind”.
The usual technique to obtain color images from these devices is to place a
Bayer filter mosaic in front of the CCD, so each pixel is sensitive to only one of
the three primary colors (Red-Green-Blue, RGB). In this way, at each pixel,
the value of one color is known, while the values of the other two colors are
interpolated from the known values in the neighborhood. Therefore, know-
ing the filter pattern used by the camera allows the software to transfer the
image from the camera to the PC in RAW mode (only 1 byte/pixel instead
of 3 bytes/pixel), process the image in this mode to obtain the statistical
products and, finally, apply the filter to the resulting images to convert them
to color images. This approach has two advantages. First, the bandwidth of
the connection between the camera and the computer is reduced to one third
of the original, thus improving the use of the I/O channels. Second, most of
the processing stage will have one third of the original data to process, leaving
more computation time for other tasks.
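The Bayer reconstruction step can be sketched as follows. This is a minimal illustration assuming an RGGB pattern and a nearest-neighbour scheme over each 2x2 cell (the names and the simplified interpolation are ours, not the SIRENA implementation):

```cpp
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Nearest-neighbour demosaicing of an RGGB Bayer mosaic: each 2x2 cell is
//   R G
//   G B
// and every pixel in the cell receives that cell's R, averaged G, and B.
// Until this final step, the image is carried as 1 byte/pixel RAW data.
std::vector<RGB> demosaic_rggb(const std::vector<uint8_t>& raw, int w, int h) {
    std::vector<RGB> out(w * h);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int cx = x & ~1, cy = y & ~1;  // top-left corner of the 2x2 cell
            uint8_t R = raw[cy * w + cx];
            uint8_t G = static_cast<uint8_t>(
                (raw[cy * w + cx + 1] + raw[(cy + 1) * w + cx]) / 2);
            uint8_t B = raw[(cy + 1) * w + cx + 1];
            out[y * w + x] = {R, G, B};
        }
    }
    return out;
}
```

A production demosaicer interpolates from a larger neighbourhood; the point here is only that the mosaic stays at one byte per pixel until this final conversion.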

2.2.2   Real-Time pre-processing

As mentioned before, there is a large amount of data to be managed. If the
data were stored first during a run and processed at the end, there would
be a need for intense I/O activity both in memory and on the hard disk of the
computer. But, as the frame rate is relatively low (7.5 fps), the acquisition
platform can perform some pre-processing between frames. In this way, the
software can discard each captured image once the data of interest has been
extracted and accumulated in the corresponding statistical products. Thus, the
memory used is reduced to that strictly necessary for the variables used in the
construction of statistical products and for the last captured image.
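The incremental scheme described above can be sketched as follows; a minimal illustration (not the SIRENA source) that keeps only per-pixel running sums, so each frame can be discarded immediately after accumulation:

```cpp
#include <cstddef>
#include <vector>

// Per-pixel accumulator: each captured frame updates the running sums and is
// then discarded, so only the statistics, not the frames, stay in memory.
class PixelStats {
    std::vector<double> sum, sumSq;
    std::size_t n = 0;
public:
    explicit PixelStats(std::size_t pixels)
        : sum(pixels, 0.0), sumSq(pixels, 0.0) {}

    void accumulate(const std::vector<unsigned char>& frame) {
        for (std::size_t i = 0; i < frame.size(); ++i) {
            sum[i]   += frame[i];
            sumSq[i] += double(frame[i]) * frame[i];
        }
        ++n;
    }
    double mean(std::size_t i) const { return sum[i] / n; }
    double variance(std::size_t i) const {       // population variance
        double m = mean(i);
        return sumSq[i] / n - m * m;
    }
};
```

The mean and variance images then fall out of the same two accumulators at the end of the run, with no per-frame storage.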

2.2.3   Concurrent programming

The software is written using a multithread approach, thus allowing the un-
derlying operating system to schedule different execution threads depending
on the available resources. For example, several threads can be scheduled si-
multaneously on different processors when available; with the spread of
multiprocessor systems, this feature is especially desirable. Concurrent
programming allows a natural way of defining the
behavior of each task independently from the others and assigning an exe-
cution thread for each one. While one thread of execution is waiting for an
I/O operation or waiting until the next activation period, other threads of the
application can make use of the processor to compute whatever is needed. In
general, this feature allows a better use of the computer resources.
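A minimal sketch of this producer/consumer division of labour, assuming C++11 threads (illustrative only; it shows the idea of one thread per task rather than the actual SIRENA implementation):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// One thread per task: an "acquisition" thread pushes frames while a
// "processing" thread accumulates them, so I/O waits in one thread do not
// block computation in the other.
std::queue<int> frames;
std::mutex m;
std::condition_variable cv;
bool done = false;
long total = 0;

void acquire(int n) {                 // stands in for the camera thread
    for (int i = 1; i <= n; ++i) {
        { std::lock_guard<std::mutex> lk(m); frames.push(i); }
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
}

void process() {                      // stands in for the statistics thread
    std::unique_lock<std::mutex> lk(m);
    for (;;) {
        cv.wait(lk, [] { return !frames.empty() || done; });
        while (!frames.empty()) { total += frames.front(); frames.pop(); }
        if (done) return;
    }
}

long run_pipeline(int n) {
    std::thread a(acquire, n), p(process);
    a.join(); p.join();
    return total;
}
```

The predicate form of `wait` avoids lost wakeups: the processing thread re-checks the queue under the lock before sleeping.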

2.2.4   Real-Time transmission

The file transfers are carried out after each capturing run. In this way, the use
of the communication channel is balanced, and the central node is kept up
to date, minimizing the probability of losing data if problems appear in the
remote node.

2.2.5    Software and system management

External management of the remote station was the second critical aspect
considered during the development of the software. The management is di-
vided into four groups of actions. The first group deals with configuring the
software, setting up the scheduling and properties of the image series (e.g.,
acquisition starting time as a function of the season, duration of the time
series), setting up the camera parameters, and defining the timestacks. The
second group is for controlling the software and the remote station and for
performing actions such as stopping and restarting the software, fixing
problems with the OS, and restarting the station. The third group is for
monitoring the status
of the software and cameras using a log file with basic information and a daily
status e-mail. Finally, the last group of actions takes care of upgrading the
software, libraries, and some OS components. These tasks are performed using
SSH, telnet, and FTP tools. In this way, performing usual maintenance oper-
ations, such as stopping the software, changing parameters in the configuration
file, and restarting execution, becomes straightforward.

2.3     Postprocessing

The main task of the postprocessing software is to apply different algorithms
to the stored images to measure features of interest (which are related to
morphodynamic properties and wave dynamics of the study zone). First, using
computer vision techniques, features like coastline and wave-breaking zones
can be detected and located in the image space. Then, applying different
corrections to overcome the lens distortions as well as rectifying the perspective
projection, these features are georeferenced in a world coordinate system, so
measurements of these features can be carried out in the real space.

The information for the georeferencing stage is extracted from the calibration
steps, both intrinsic (optical lens) and extrinsic (relative position of the camera
with respect to the world coordinate system).

2.4     Camera calibration

Prior to working with remote sensing images, one has to carry out a system
calibration. In this context, calibration refers to the process of determining
the geometric and optical parameters of the camera as well as the three di-
mensional position and orientation of the camera relative to a certain world
coordinate system (Tsai, 1987). The first step, in which the camera charac-
teristics are found in order to correct possible distortions, is known in
computer vision as the intrinsic parameter calibration and is usually solved
by minimizing a nonlinear error function (Slama, 1980). The second step is
the extrinsic parameter calibration, which requires knowledge of the 3D world
coordinates and the corresponding 2D image coordinates.

2.4.1   Intrinsic parameters

For the intrinsic camera calibration, we follow a two step explicit method to
save computational costs in which the initial parameters are solved linearly
and the final values are obtained by a nonlinear minimization (Heikkila and
Silvén, 1997). The aim of this process is to obtain the exact focal length in the
x and y directions, fx and fy , respectively; the principal point coordinates, cx
and cy , and the radial distortion factor, kc . Following Bouguet (1999), we apply
the pinhole camera model in which each point in the object space is projected
by a straight line through the projection center into the image plane. In this
dual space formalism, the correspondence between pixel positions [px, py] in
the images and the coordinates on the image plane [x, y] is obtained.
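The projection just described can be sketched as follows, assuming a single radial distortion coefficient kc (parameter values in the example are hypothetical; the calibrated values for the three cameras are given in Table 1):

```cpp
#include <cmath>

// Pinhole projection with one radial distortion coefficient kc.
// (fx, fy): focal lengths in pixels; (cx, cy): principal point.
struct Intrinsics { double fx, fy, cx, cy, kc; };

// Projects a camera-frame point (X, Y, Z), Z > 0, to pixel coordinates.
void project(const Intrinsics& K, double X, double Y, double Z,
             double& px, double& py) {
    double x = X / Z, y = Y / Z;        // normalized image coordinates
    double r2 = x * x + y * y;
    double d = 1.0 + K.kc * r2;         // radial distortion factor
    px = K.fx * d * x + K.cx;
    py = K.fy * d * y + K.cy;
}
```

With kc = 0 this reduces to the undistorted pinhole model; the calibration estimates fx, fy, cx, cy, and kc by minimizing the re-projection error of this mapping.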

To compute the intrinsic parameters for the three cameras, we use the MATLAB
calibration toolbox available at http://www.vision.caltech.edu/bouguetj/. The
standard method consists of acquiring a set of images of an object of known
dimensions, such as a board pattern, and estimating the set of parameters
that best matches the computed projection of the structure with the observed
projection on the image. The calibration is done in two steps: first the
initialization, which computes a closed-form solution assuming no lens
distortion, and second a non-linear optimization in which the total
re-projection error is minimized. The non-linear optimization is solved using
standard gradient descent techniques (Bouguet, 1999). The four extreme
corners are needed to
localize the vanishing points for each pattern (all interior nodes can be found
from perspective warping). After the gradient descent iterations, the solved
parameters, including the 4th-order radial distortion parameters, are stored. Table 1
presents the resulting parameters for the three cameras taking into account
the pixel size provided by the manufacturer (dx = dy = 4.65µm).

2.4.2   Extrinsic parameters

The extrinsic camera calibration deals with the problem of assigning a real
(X, Y, Z) world coordinate to each pixel. For simplicity, we will further
assume that our monitoring site lies on a flat world; a reasonable assumption
considering that the study site is within 2 km of the camera position, so the
earth's curvature is negligible. Under this hypothesis, the perspective
correction equations are,

                     u = (Ax + By + C) / (Dx + Ey + 1)                      (1)

                     v = (Fx + Gy + H) / (Dx + Ey + 1)                      (2)

where (u, v) represent the pixel position in the image and (x, y) the corre-
sponding coordinates in the world. From (1) and (2) it is clear that 4 points
are enough to solve the system of equations. An analysis using 4 to 30 control
points on the beach, computing all possible combinations, showed that the
global error over the whole image was minimized when using 8 control points,
regularly distributed and covering the largest possible area. The accuracy did
not improve by adding more points. Since this work focuses on the software, the
improve by adding more points. Since this work focuses on the software, the
accuracy analysis is not shown here. Figure 1 shows a sample rectified image
in real world coordinates: UTM-WGS84.
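The recovery of the coefficients A..H from control points can be sketched as follows. This minimal version uses exactly 4 points and Gaussian elimination, whereas the analysis above favours 8 points with a least-squares fit; each point contributes one row for u and one for v:

```cpp
#include <array>
#include <cmath>
#include <utility>

using Vec8 = std::array<double, 8>;  // [A, B, C, D, E, F, G, H]

// Solves the perspective-correction coefficients of Eqs. (1)-(2) from
// exactly 4 control points via Gaussian elimination with partial pivoting.
Vec8 solve_perspective(const double x[4], const double y[4],
                       const double u[4], const double v[4]) {
    double M[8][9] = {};
    for (int i = 0; i < 4; ++i) {
        double* ru = M[2 * i];      // u-eq: A x + B y + C - D x u - E y u = u
        ru[0] = x[i]; ru[1] = y[i]; ru[2] = 1;
        ru[3] = -x[i] * u[i]; ru[4] = -y[i] * u[i]; ru[8] = u[i];
        double* rv = M[2 * i + 1];  // v-eq: F x + G y + H - D x v - E y v = v
        rv[3] = -x[i] * v[i]; rv[4] = -y[i] * v[i];
        rv[5] = x[i]; rv[6] = y[i]; rv[7] = 1; rv[8] = v[i];
    }
    for (int c = 0; c < 8; ++c) {               // forward elimination
        int p = c;
        for (int r = c + 1; r < 8; ++r)
            if (std::fabs(M[r][c]) > std::fabs(M[p][c])) p = r;
        for (int k = 0; k < 9; ++k) std::swap(M[c][k], M[p][k]);
        for (int r = c + 1; r < 8; ++r) {
            double f = M[r][c] / M[c][c];
            for (int k = c; k < 9; ++k) M[r][k] -= f * M[c][k];
        }
    }
    Vec8 s{};
    for (int c = 7; c >= 0; --c) {              // back substitution
        double t = M[c][8];
        for (int k = c + 1; k < 8; ++k) t -= M[c][k] * s[k];
        s[c] = t / M[c][c];
    }
    return s;
}
```

With more than 4 points the same rows are stacked into an overdetermined system and solved in the least-squares sense, which is how the 8-point configuration above is handled.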

3   Study area

A remote station of the prototype system has been located at Cala Millor
Beach on the northeast coast of Mallorca Island (Figure 2). It is a two-
kilometer-long beach in an open bay with an area of approximately 14 km2,
extending to depths of 20-25 meters, with a regular slope and sand bars near
the shore. The beach has modal conditions in the intermediate state, although
skewed to reflective positions (Gómez-Pujol et al., 2007). The tidal regime is
microtidal with a spring range of less than 0.25 m. Deep water climate at a
location 10 km offshore (50 m depth) indicates that significant wave heights
above 1 meter are reached 50% of the time during the year, with typical spectral
peak periods ranging between 3 and 6 seconds. Morphodynamic analysis of
the area shows that the beach has a dynamic balance where the sediment is
in constant movement (IMEDEA, 2004). These characteristics, together with
its beach dynamics and high degree of tourist occupation during most of the
year, make Cala Millor a very suitable study site. In recent years, the quality
of the beach has deteriorated: the number of rocks emerging along the
coastline has increased, and sandbar formation and migration have been
detected. Previous
studies provide in situ observations of wave climate, bathymetry, coastline
variation, sediment characteristics, surface and bottom currents, rip currents,
and wind, among other variables. Furthermore, information from numerical
models and wave buoys near Cala Millor is available through the Spanish
Harbor Authority (Puertos del Estado). All this makes Cala Millor an ideal
site for testing SIRENA in morphodynamic studies as well as in integrated
coastal zone management research.

4     Results

4.1     Morphological and hydrodynamical database

The main product of remote sensing observing systems is the generation of
large data sets of images for morphological evolution studies as well as for
the estimation of sea surface hydrodynamical patterns, thus allowing the study
of long-term beach evolution (weeks to months) as well as short time scales
(hours). Therefore, statistical products are the main results of such a system.

4.1.1    Mean images

Mean images are generated to reduce the amount of data to be managed with-
out losing any significant information. The software is set up to generate a
mean image, per camera, per hour, between sunrise and sunset. SIRENA is set
up to acquire images at a frequency of 7.5 Hz during 10 minutes, turning off
the cameras for the remaining 50 minutes. The underlying assumption is that
wave climate at a certain location does not change drastically within the data
acquisition time. Mean images are computed directly by the preprocessing
software, adding each new image as the system acquires it. Therefore, each
hour SIRENA stores 3 mean images (one for each camera) as the result of
processing 13,500 images (4,500 images/camera). Figure 3 shows a sample
of the mean images for the three cameras. As seen, mean images show the
patterns of high frequency variability; white pixels in the images indicate
areas where waves are breaking and therefore provide an indirect estimate of
the position of submerged sandbars.

4.1.2    Timestacks

In order to follow wave rays and to derive from them some hydrodynamical
wave characteristics, several timestacks were defined for each camera. A
timestack consists of a cross-shore transect perpendicular to the coast (in the
real world) along which all pixel intensities are stored. Thus, timestacks are
a spatial and temporal representation of wave rays, with the cross-shore
position on the x-axis and the temporal evolution on the y-axis. Wave rays and
breaking zones can be determined from these products, as seen in Figure 4. If
waves propagate to the shore in the normal direction, these images allow the
estimation of wave celerity (using water wave theory) and therefore the
estimation of bathymetry assuming shallow water theory. However, in real
situations the wave incidence angle can be in any direction, and the
cross-shore timestack only provides the wave number in the cross-shore
direction. To obtain the wave number in the direction parallel to the coast,
the software can be configured
to store an array of adjacent pixels, as shown in Figure 5.
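The assembly of a timestack can be sketched as follows (illustrative types and names, not the SIRENA source): for every captured frame, the pixel intensities along a fixed transect are appended as one row, so cross-shore position runs along the x-axis and time along the y-axis.

```cpp
#include <utility>
#include <vector>

using Frame = std::vector<unsigned char>;          // row-major w*h image
using Transect = std::vector<std::pair<int,int>>;  // (x, y) pixel positions

// Appends one timestack row: the intensity of each transect pixel
// sampled from the current frame.
void append_row(std::vector<unsigned char>& stack, const Frame& f,
                int width, const Transect& t) {
    for (const auto& p : t)
        stack.push_back(f[p.second * width + p.first]);
}
```

After a 10-minute run at 7.5 fps, the stack holds 4,500 rows per transect, from which wave rays and breaking zones can be read off as in Figure 4.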

4.1.3    Variance

The image variance is used to filter some postprocessing products, indicating
those areas where variability is higher (Figure 6).

4.1.4    Snapshot

During the image capture process, SIRENA stores an image; hence, an
hourly snapshot is recorded. The user can choose which image to save, or save
any number of images during the acquisition process. This product can be
the basis for beach and coastal zone management activities (e.g., beach users,
beach cleaning, rip currents).

4.2     Sample application: shoreline detection

Detection of the shoreline with video imaging is done by defining a shoreline
indicator that acts as a proxy for the land-water interface and then detecting
the indicator in the images (Boak and Turner, 2005). As the indicator, we took
advantage of the large percentage of the image that the ocean occupies.
With digital manipulation of the mean images it is possible to identify the
two largest “objects” in the image: the ocean surface and the beach or land
surface.

Figure 7 describes the procedure used to estimate the shoreline. The mean
image, originally in RGB, is transformed to the HSV (Hue-Saturation-Value)
color space. In this way, the ocean surface and land surface are discernible
from the Hue component of the image. This component is used to estimate the
gray threshold level of the image, and the data is converted into a binary
matrix, or black and white image. This can be used to identify big “objects”
in the image by searching for groups of adjacent pixels with the same value.
The two biggest objects are assumed to be the ocean and land surfaces.
Therefore, the interface between them is labelled as the shoreline, identified
with an edge detection algorithm (Canny, 1986), and filtered to remove
spurious data points. Figure 8 shows the estimated shoreline location on a
mean image.
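The "big object" search on the binary image can be sketched with a standard connected-component pass. This illustrative version (our own, not the paper's code) returns only the area of the largest 4-connected region; the full pipeline keeps the two largest regions and then runs the Canny edge detector along their interface:

```cpp
#include <queue>
#include <vector>

// Returns the area of the largest 4-connected region of pixels equal to
// `val` in a row-major w*h binary image, using breadth-first flood fill.
int largest_region(const std::vector<int>& img, int w, int h, int val) {
    std::vector<char> seen(w * h, 0);
    int best = 0;
    for (int s = 0; s < w * h; ++s) {
        if (img[s] != val || seen[s]) continue;
        int area = 0;
        std::queue<int> q;
        q.push(s); seen[s] = 1;
        while (!q.empty()) {
            int i = q.front(); q.pop(); ++area;
            int x = i % w, y = i / w;
            const int nx[4] = {x - 1, x + 1, x, x};
            const int ny[4] = {y, y, y - 1, y + 1};
            for (int k = 0; k < 4; ++k) {
                if (nx[k] < 0 || nx[k] >= w || ny[k] < 0 || ny[k] >= h)
                    continue;
                int j = ny[k] * w + nx[k];
                if (img[j] == val && !seen[j]) { seen[j] = 1; q.push(j); }
            }
        }
        if (area > best) best = area;
    }
    return best;
}
```

Running the pass once for the water value and once for the land value identifies the two dominant surfaces whose interface is taken as the shoreline.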

5   Conclusions

SIRENA is open source software conceived with the objective of video
monitoring coastal areas. The software was developed using open architec-
ture standards and components. It is written in C++, complies with
POSIX (UNIX compatible), and all the libraries used have GPL or LGPL
(Lesser General Public License) licenses.

The software manages several CCD cameras, processes digital images, obtains
statistical products, stores them, and delivers the information in real time to
a central server through the internet. The statistical products/images cur-
rently available are: snapshot, mean, variance, timestacks, and pixel arrays.
Image post-processing software has been adapted to calibrate the cameras and
transform the image information into real world coordinates.

The prototype version of the software has been operating since December 2006
in Cala Millor, Mallorca, Spain. The hardware used was relatively inexpensive
given the functionality and advantages offered to scientists, managers, and
coastal communities in general. The products from the prototype system are
being stored and represent a large raw morphological and hydrodynamical
database.

All the software developed, as well as the morphological and hydrodynamical
database, can be consulted and downloaded from the internet on request.

6   Acknowledgments

This work has been possible thanks to financial support from the Govern de
les Illes Balears through the UGIZC project. The authors would like to thank
Benjamin Casas for field support. We are also grateful to the Hotel Castell
del Mar for making the installation of the system possible. A. Orfila would
like to thank CSIC-COLCIENCIAS for their financial aid.

References

Aarninkhof, S. G. J., Turner, I. L., Dronkers, T. D. T., Caljouw, M., Nipius, L.,
  2003. A video-technique for mapping intertidal beach bathymetry. Coastal
  Eng. 49, 275–289.
Boak, E. H., Turner, I. L., 2005. Shoreline definition and detection: A review.
  Journal of Coastal Research 21 (4), 688–703.

Bouguet, J.-Y., 1999. Visual methods for three-dimensional modeling. Ph.D.
  thesis, California Institute of Technology.
Canny, J., 1986. A computational approach to edge detection. IEEE Transac-
  tions on Pattern Analysis and Machine Intelligence 8 (6), 679–698.
Chickadel, C. C., Holman, R. A., Freilich, M. H., 2003. An optical technique
  for the measurement of longshore currents. J. Geophys. Res. 108 (C11),
  3364.
Davidson, M., Koningsveld, M. V., de Kruif, A., Rawson, J., Holman, R.,
  Lamberti, A., Medina, R., Kroon, A., Aarninkhof, S., June-July 2007. The
  CoastView project: Developing coastal video monitoring systems in support
  of coastal zone management. Coastal Engineering 54 (6-7), 463–475.
Gómez-Pujol, L., Orfila, A., Cañellas, B., Alvarez-Ellacuria, A., Mendez, F.,
  Medina, R., Tintore, J., 2007. Morphodynamical classification of sandy
  beaches in a microtidal, low energy marine environment. Marine Geology
  242 (4), 235–246.
Heikkilä, J., Silvén, O., 1997. A four-step camera calibration procedure with
  implicit image correction. In: Computer Society Conference on Computer
  Vision and Pattern Recognition (CVPR’97). San Juan, Puerto Rico. IEEE,
  pp. 1106–1112.
Holland, K. T., Holman, R. A., Lippmann, T. C., Stanley, J., Plant, N., Jan.
  1997. Practical use of video imagery in nearshore oceanographic field stud-
  ies. IEEE J. Oceanic Eng. 22 (1), 81–92.
Holman, R. A., Stanley, J., June-July 2007. The history and technical capa-
  bilities of Argus. Coastal Engineering 54 (6-7), 477–491.
IMEDEA, 2004. Variabilidad y dinámica sedimentaria de las playas de Cala
  Millor y Cala Sant Vicenç: alternativas de gestión basadas en el cono-
  cimiento científico. Tech. rep., Govern de les Illes Balears, Conselleria de
  Medi Ambient.
Jiménez, J. A., Osorio, A., Marino-Tapia, I., Davidson, M., Medina, R., Kroon,
  A., Archetti, R., Ciavola, P., Aarninkhof, S. G. J., June-July 2007. Beach
  recreation planning using video-derived coastal state indicators. Coastal En-
  gineering 54 (6-7), 507–521.
Koningsveld, M. V., Davidson, M., Huntley, D., Medina, R., Aarninkhof, S.,
  Jiménez, J. A., Ridgewell, J., de Kruif, A., June-July 2007. A critical review
  of the CoastView project: Recent and future developments in coastal
  management video systems. Coastal Engineering 54 (6-7), 567–576.
Lippmann, T. C., Holman, R. A., 1989. Quantification of sand bar morphology:
  a video technique based on wave dissipation. J. Geophys. Res. 94, 995–1011.
Lippmann, T. C., Holman, R. A., June 1991. Phase speed and angle of break-
  ing waves measured with video techniques. In: Kraus, N. (Ed.), Coastal Sed-
  iments ’91. ASCE, Seattle, WA, USA, pp. 542–556.
Medina, R., Marino-Tapia, I., Osorio, A., Davidson, M., Martin, F. L., June-
  July 2007. Management of dynamic navigational channels using video tech-
  niques. Coastal Engineering 54 (6-7), 523–537.
Petit, R., Austin, T. C., Edson, J., McGillis, W., Purcell, M., McElroy, M.,
  Dec. 2001. The Martha’s Vineyard Coastal Observatory: Architecture, In-
  stallation, and Operation. AGU Fall Meeting Abstracts, D2+.
Proctor, R., Howarth, J., Knight, P., Mills, D. K., 2004. The POL Coastal
  Observatory - methodology and some first results. In: Spaulding, M. L.
  (Ed.), Estuarine and Coastal Modeling, Proceedings. pp. 273–287.
Slama, C. C., 1980. Manual of Photogrammetry. American Society of Pho-
  togrammetry, Falls Church, VA, USA.
Smit, M. W. J., Aarninkhof, S. G. J., Wijnberg, K. M., Gonzalez, M.,
  Kingston, K. S., Southgate, H. N., Ruessink, B. G., Holman, R. A., Siegle,
  E., Davidson, M., Medina, R., June-July 2007. The role of video imagery in
  predicting daily to monthly coastal evolution. Coastal Engineering 54 (6-7),
  539–553.
Stockdon, H. F., Holman, R. A., September 2000. Estimation of wave phase
  speed and nearshore bathymetry from video imagery. J. Geophys. Res.
  105 (C9), 22015–22033.
Tsai, R. Y., 1987. A versatile camera calibration technique for high accuracy
  3D machine vision metrology using off-the-shelf tv cameras and lenses. IEEE
  J. Robotics Automat. RA-3 (4), 323–344.
Turner, I. L., Anderson, D. J., June-July 2007. Web-based and “real-time”
  beach management system. Coastal Engineering 54 (6-7), 555–565.

Table 1
Intrinsic parameters after the two-step camera calibration. Camera resolution is
1024 × 768 square pixels with a side length of 4.65 µm.
                       Camera 1         Camera 2         Camera 3
           fx (m)        0.0062             0.0046         0.0062
           fy (m)        0.0062             0.0046         0.0062
              cx    537.86 ± 17.88    512.46 ± 13.95   477.27 ± 13.06
              cy    421.43 ± 14.11    382.76 ± 12.61   398.26 ± 13.60
             kc1        -0.26867         -0.24812         -0.26867
             kc2        0.61220             0.01010        0.47929
             kc3        0.00220          -0.00246         -0.00344
             kc4        -0.00154         -0.00306         -0.00035
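As a quick consistency check on Table 1, the focal lengths can be expressed in pixel units by dividing by the 4.65 µm pixel side length; a minimal sketch using the values from the table:

```python
# Focal length in pixels = focal length [m] / pixel side length [m].
# Values taken from Table 1; the pixel side length is 4.65 um.
pixel_side = 4.65e-6  # m
fx = {"Camera 1": 0.0062, "Camera 2": 0.0046, "Camera 3": 0.0062}  # m

for cam, f in fx.items():
    print(f"{cam}: fx = {f / pixel_side:.0f} px")
# Camera 1: fx = 1333 px
# Camera 2: fx = 989 px
# Camera 3: fx = 1333 px
```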

Fig. 1. Rectified images in UTM WGS 84 coordinates

Fig. 2. Study site: Cala Millor, Mallorca, Balearic Islands, Spain

Fig. 3. Mean images of the study site. The light colored pixels on the ocean surface
represent the wave breaking zone.

Fig. 4. Timestack image.

Fig. 5. Timestack locations and pixel arrays. The circle is a zoom of the pixel array
used to estimate wave direction.

Fig. 6. Variance image. Bright regions represent areas of high temporal variability
(e.g., the wave breaking zone).

Fig. 7. Shoreline detection process. (a) Original image; (b) H channel of image in
HSV color space; (c) Binary image; (d) Binary image after detecting the two largest
objects in the previous image.

Fig. 8. Estimated shoreline for the central camera image.
