VISAPP 2007 Abstracts

Conference
Area 1 - Image Formation and Processing
Area 2 - Image Analysis
Area 3 - Image Understanding
Area 4 - Motion, Tracking and Stereo Vision
Special Sessions
Human Presence Detection for Context-aware Systems
3D Model Acquisition and Representation
Beyond Image Enhancement
Computer Vision Methods in Medicine
Bayesian Approach for Inverse Problems in Computer Vision
Mathematical and Linguistic Techniques for Image Mining
Workshops
The First International Workshop on Robot Vision

Area 1 - Image Formation and Processing

Title:
Automated Separation of Reflections from a Single Image based on Edge Classification

Author(s):
Kenji Hara
Abstract:
Looking through a window, the object behind the window is often disturbed by a reflection of another object. In this paper, we present a new method for separating reflections from a single image. Most existing techniques require the programmer to create an image database or require the user to manually provide the position and layer information of feature points in the input image, and are thus extremely laborious. Our method classifies the edges in the input image according to the layer they belong to, uses this classification to formalize the decomposition of the single image into two layer images as an optimization problem that is easier to solve, and then solves this optimization with a pyramid structure and deterministic annealing. As a result, we are able to accomplish almost fully automated separation of reflections from a single image.

Title:
MULTIRESOLUTION IMAGE FUSION BASED ON SUPER-RESOLUTION RESTORATION

Author(s):
Valery Starovoitov, Aliaksei Makarau, Dmitry Dovnar and Igor Zakharov
Abstract:
A fundamentally new technique for fast fusion of multiresolution satellite images with minimal colour distortion is presented in this paper. The technique allows the reconstruction of multispectral images with a resolution higher than that of the panchromatic image. It is based on a combination of a method for image super-resolution restoration and an algorithm for image fusion based on global regression. Super-resolution image restoration relies on the simultaneous processing of several multispectral images to reconstruct a panchromatic image with higher resolution. This method is quasi-optimal in the sense of minimum squared error of image restoration.

Title:
Evaluating Stitching Quality

Author(s):
Jani Boutellier, Olli Silvén, Lassi Korhonen and Marius Tico
Abstract:
Until now, there has been no objective measure of the quality of mosaic images or mosaicking algorithms. To remedy this shortcoming, a new method is proposed. In this new approach, the algorithm to be tested receives a set of synthetically created test images for constructing a mosaic. The synthetic images are created from a reference image that is also used as the basis for evaluating the image mosaic. To simulate the effects of actual photography, various camera-related distortions, along with perspective warps, are applied to the computer-generated synthetic images. The proposed approach can be used to test all kinds of computer-based stitching algorithms and presents the computed mosaic quality as a single number.

Title:
Improvement of deblocking corrections on flat areas and proposal for a low-cost implementation

Author(s):
Frederique Crete, Marina Nicolas and Patricia Ladret
Abstract:
The quantization of data from individual block-based Discrete Cosine Transforms generates the blocking effect, regarded as the most annoying compression artefact. It appears as an artificial structure caused by noticeable changes in pixel values along the block boundaries. Due to the masking effect, the blocking artefact is more annoying in flat areas than in textured or detailed areas. Existing low-cost algorithms propose strong low-pass filters to correct this artefact in flat areas. Nevertheless, they are confronted with a limitation imposed by their filter length, which can introduce other artefacts such as ghost boundaries. We propose a new principle to detect and correct the boundaries in flat areas without being limited to a fixed number of pixels. This principle can easily be implemented in a low-cost post-processing algorithm and complemented with other corrections for perceptible boundaries in non-flat areas. This new method produces results that are perceived as more pleasing to the human eye than those of other traditional low-cost methods.

Title:
Adaptive Data-Driven Regularization for Variational Image Restoration in the BV Space

Author(s):
Hongwei Zheng and Olaf Hellwich
Abstract:
We present a novel variational regularization in the space of functions of Bounded Variation ($BV$) for adaptive data-driven image restoration. Discontinuities are important features in image processing, and the $BV$ space is well adapted to measuring gradients and discontinuities. Moreover, the degradation of images includes not only random noise but also multiplicative, spatial degradations, i.e., blur. To achieve simultaneous image deblurring and denoising, a variant exponent, linear growth functional in the $BV$ space is extended in Bayesian estimation with respect to deblurring and denoising. The selection of regularization parameters is self-adjusted based on spatially local variances. Simultaneously, the linear and non-linear smoothing operators are continuously changed following the strength of discontinuities. The stopping time of the process is optimally determined by measuring the signal-to-noise ratio. The algorithm is robust in that it can handle images degraded by different types of noise and blur. Numerical experiments show that the algorithm achieves encouraging perceptual image restoration results.

Title:
A Closed-Form Solution for the Generic Self-Calibration of Central Cameras from Two Rotational Flows

Author(s):
Ferran Espuny
Abstract:
In this paper we address the problem of self-calibrating a differentiable generic camera from two rotational flows defined on an open set of the image. Such a camera model can be used for any central smooth imaging system, and thus any given method for the generic model can be applied to many different vision systems. We give a theoretical closed-form solution to the problem, proving that the ambiguity in the obtained solution is "metric" (up to an orthogonal linear transformation). Based on the theoretical results, we contribute an algorithm to achieve metric self-calibration of any central generic camera using two optical flows observed in (part of) the image, which correspond to two infinitesimal rotations of the camera.

Title:
A ROBUST WATERMARKING SCHEME BASED ON EDGE DETECTION AND CONTRAST SENSITIVITY FUNCTION

Author(s):
John Ellinas and Dimitrios Manolakis
Abstract:
The efficiency of an image watermarking technique depends on the preservation of visually significant information. This is attained by embedding the watermark transparently with the maximum possible strength. The current paper presents an approach for still image digital watermarking in which the watermark embedding process employs the wavelet transform and incorporates Human Visual System (HVS) characteristics. The sensitivity of a human observer to contrast with respect to spatial frequency is described by the Contrast Sensitivity Function (CSF). The strength of the watermark within the decomposition subbands, which occupy an interval on the spatial frequencies, is adjusted according to this sensitivity. Moreover, the watermark embedding process is carried out over the subband coefficients that lie on edges, where distortions are less noticeable. The experimental evaluation of the proposed method shows very good results in terms of robustness and transparency.

Title:
Threshold Decomposition driven Adaptive Morphological Filter for Image Sharpening

Author(s):
Tarek Mahmoud and Stephen Marshall
Abstract:
A new method is proposed to sharpen digital images. This sharpening method is based on edge detection and a class of morphological filters. Motivated by the success of threshold decomposition, gradient-based operators, such as the Prewitt operators, are used to detect the locations of the edges. A morphological filter is then used to sharpen these detected edges. Experimental results demonstrate that the performance of these detected-edge deblurring filters is superior to that of the traditional sharpening filter family.

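The abstract does not spell out which morphological filter sharpens the detected edges; a classic filter in this family is the toggle-contrast sharpener. The sketch below is a generic illustration only, not the authors' method (the function name, window size, and the omission of the Prewitt edge-detection stage are our assumptions):

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def toggle_sharpen(img, size=3):
    """Toggle-contrast morphological sharpening: each pixel snaps to the
    local dilation or erosion, whichever is closer, steepening edges."""
    img = np.asarray(img, dtype=float)
    d = grey_dilation(img, size=(size, size))
    e = grey_erosion(img, size=(size, size))
    return np.where(d - img < img - e, d, e)

# A blurred step edge: interior grey levels snap to the nearest extreme.
row = np.array([0, 0, 0, 64, 128, 192, 255, 255], dtype=float)
sharp = toggle_sharpen(np.tile(row, (5, 1)))
```

Applied to the ramp above, the mid-grey values collapse toward 0 or 255, which is the edge-steepening behaviour such filters exploit.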
Title:
Adaptive Image Restoration using a Local Neural Approach

Author(s):
Ignazio Gallo and Elisabetta Binaghi
Abstract:
This work aims at defining and experimentally evaluating an iterative strategy based on neural learning for blind image restoration in the presence of blur and noise. A salient aspect of our solution is the local estimation of the restored image based on gradient descent strategies able to estimate both the blurring function and the regularization terms adaptively. Instead of explicitly defining the values of local regularization parameters through predefined functions, an adaptive learning approach is proposed. The method was evaluated experimentally using a test pattern generated by the checkerboard function in Matlab. To investigate whether the strategy can be considered an alternative to conventional restoration procedures, the results were compared with those obtained by a well-known neural restoration approach.

Title:
A FAST AND EFFICIENT METHOD FOR CHECK IMAGE QUALITY ASSESSMENT

Author(s):
Pramod Kumar, Raju Gupta, Tavneet Batra and Dinesh Ganotra
Abstract:
With the enactment of the Check 21 Act, check image quality has become a critical requirement. Banks responsible for capturing check images (check truncation) have to warrant their images. With a plethora of capturing devices and the outsourcing of the check acquisition process, assurance of check quality becomes complex. Currently, banks deploy a separate subsystem for Image Quality Analysis (IQA), which is based on defect metrics defined by the Financial Services Technology Consortium (FSTC) Phase I project (Image quality and usability assurance, 2004). The problem with this approach is that IQA cannot match the scanning speed and has to be deployed as a separate process. Another problem with predefined defect metrics is that they are dependent on check content. This paper proposes a fast and efficient method to estimate the quality and usability of check images. The method is independent of the check content or layout, and IQA based on this algorithm can be deployed at the scanning stage. The checks will have a pre-printed pattern in the form of a logo, which is detected and analysed for quality and usability. The results show that our algorithm is able to sort out unusable check images efficiently. In the future, we plan to use this pre-printed pattern as a measure of check security.

Title:
SKEW CORRECTION IN DOCUMENTS WITH SEVERAL DIFFERENTLY SKEWED TEXT AREAS

Author(s):
Nikos Papamarkos and Panagiotis Saragiotis
Abstract:
In this paper we propose a technique for detecting and correcting the skew of text areas in a document. The documents we work with may contain several areas of text with different skew angles. In the first stage, a text localization procedure based on connected component analysis is applied. Specifically, the connected components of the document are extracted and filtered according to their size and geometric characteristics. Next, the candidate characters are grouped using a nearest-neighbour approach to form words in a first step, and then text lines of any skew in a second step. Using linear regression, two lines are estimated for each text line, representing its top and bottom boundaries. Text lines in nearby locations with similar skew angles are grown to form text areas. These text areas are rotated independently to a horizontal or vertical plane. This technique has been tested and proved efficient and robust on a wide variety of documents, including spreadsheets, book and magazine covers, and advertisements.

Title:
Rapid Development of Retinex Algorithm on TI C6000-based Digital Signal Processor

Author(s):
Juan Zapata and Ramón Ruiz
Abstract:
The Retinex is an image enhancement algorithm that improves the brightness, contrast and sharpness of an image. This work discusses an easy and rapid DSP implementation of the Retinex algorithm on a hardware/software platform that integrates MATLAB/Simulink, Texas Instruments (TI) eXpressDSP Tools and a C6000 digital signal processing (DSP) target. This platform automates rapid prototyping on C6000 hardware targets because it lets the user model the Retinex algorithm in Simulink from blocks in the Signal Processing Blockset, and then use Real-Time Workshop to generate C code targeted to the TI DSP board by means of Code Composer Studio (CCS IDE). The build process downloads the targeted machine code to the selected hardware and runs the executable on the digital signal processor. After downloading the code to the board, our Retinex application runs automatically on the target. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement. The real-time data exchange (RTDX) instrumentation library, which contains RTDX input and output blocks, lets the user transfer images to and from memory on any C6000-based target.

Title:
Vector Quantisation based Image Enhancement

Author(s):
W. Paul Cockshott, Sumitha L. Balasuriya, J. Paul Siebert and Irwan Prasetya Gunawan
Abstract:
We present a new algorithm for rescaling images inspired by fractal coding. It uses a statistical model of the relationship between detail at different scales of the image to interpolate detail at one octave above the highest spatial frequency in the original image. We compare it with B-spline and bilinear interpolation techniques and show that it yields a sharper-looking rescaled image.

Title:
An Unsupervised Sonar Images Segmentation Approach

Author(s):
Abdel-Ouahab BOUDRAA
Abstract:
In this work an unsupervised segmentation approach for Sonar (sound navigation and ranging) images is proposed. Due to the textural nature of Sonar images, a band-pass filtering scheme that takes into account their local spatial frequency is proposed. The Sonar image is passed through a bank of Gabor filters, and the filtered images that possess a significant component of the original image are selected. A new approach is proposed to calculate the radial frequencies. The selected filtered images are then subjected to a non-linear transformation, and an energy measure is defined on the transformed images in order to compute texture features. The texture energy features are used as input to a clustering algorithm. The segmentation scheme has been successfully tested on real high-resolution Sonar images, yielding very promising results.

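The pipeline described (Gabor filter bank, non-linear transformation, local energy measure, clustering) is a standard texture-segmentation recipe and can be sketched generically. The filter parameters, the tanh non-linearity, and the box window below are illustrative assumptions, not the authors' choices:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(freq, theta, sigma=2.0, size=11):
    """Real-valued Gabor kernel at radial frequency `freq` (cycles/pixel)
    and orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * xr)

def texture_energy(img, freq, theta, alpha=0.25, win=9):
    """Filter with a Gabor kernel, squash with tanh (the non-linear
    transformation), then average the magnitude over a local window
    to obtain one texture-energy feature per pixel."""
    g = convolve2d(img, gabor_kernel(freq, theta), mode='same', boundary='symm')
    nl = np.tanh(alpha * g)
    box = np.ones((win, win)) / win**2
    return convolve2d(np.abs(nl), box, mode='same', boundary='symm')

# Two-texture toy image: vertical stripes on the left, flat on the right.
img = np.zeros((32, 32))
img[:, :16] = np.sin(2 * np.pi * 0.25 * np.arange(16))[None, :]
energy = texture_energy(img, freq=0.25, theta=0.0)
```

Stacking such energy maps over several frequencies and orientations gives the per-pixel feature vectors that a clustering algorithm (e.g., k-means) would then segment.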
Title:
GAP FILLING IN 3D VESSEL LIKE PATTERNS WITH TENSOR FIELDS

Author(s):
Laurent Risser, Franck Plouraboué and Xavier Descombes
Abstract:
We present an algorithm for merging discontinuities in three-dimensional (3D) images of tubular structures. The proposed method is aimed at large 3D images presenting undesirable discontinuities. In order to recover the real network topology, we need to fill the gap between the closest discontinuous tubular segments. We present a new algorithm to achieve this goal based on a tensor voting method. This algorithm is robust, relatively fast, and requires neither numerous parameters nor manual intervention. Representative results are illustrated on real 3D micro-vascular networks.

Title:
A Self-Calibrating Chrominance Model Applied to Skin Color Detection

Author(s):
Jeroen Lichtenauer, Emile Hendriks and Marcel Reinders
Abstract:
In the absence of a calibration procedure, or when there is a color difference between direct and ambient light, standard chrominance models are not completely brightness invariant. Therefore, they cannot provide the best space for robust color modeling. Instead of using a fixed chrominance model, our method estimates the actual dependency between color appearance and brightness. This is done by fitting a linear function to a small set of color samples. In the resulting self-calibrated chromatic space, orthogonal to this line, the color distribution is modeled as a 2D Gaussian. The method is applied to skin detection, where the face provides the initialization samples to detect the skin of hands and arms. A comparison with fixed chrominance models shows an overall improvement, as well as increased reliability of detection performance across different environments.

Title:
Hue Variance Prediction

Author(s):
Robert Grant, Richard Green and Adrian Clark
Abstract:
In the area of vision-based local environment mapping, inconsistent lighting can interfere with a robust system. The HLS colour model can be useful when working with varying illumination, as it tries to separate illumination levels from hue. This means that using hue information can result in an image invariant to illumination, which is valuable when determining object boundaries, identifying objects, and establishing image correspondence. The problem is that noise is greater at lower illumination levels: while separating out hue removes the illumination effects on the image, the noise effects of non-optimal illumination remain. This paper looks at how the known illumination information of pixels can be used to accurately predict and reduce noise in the hue obtained from video captured by a colour digital camera.

Title:
IMAGE AND VIDEO NOISE

Author(s):
Adrian Clark, Richard Green and Robert Grant
Abstract:
Despite the steady advancement of digital camera technology, noise is an ever present problem with image processing. Low light levels, fast camera motion, and even sources of electromagnetic fields such as electric motors can degrade image quality and increase noise levels. Many approaches to remove this noise from images concentrate on a single image, although more data relevant to noise removal can be obtained from video streams. This paper discusses the advantages of using multiple images over an individual image when removing both local noise, such as salt and pepper noise, and global noise, such as motion blur.

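The paper's specific multi-frame technique is not given in the abstract; the simplest illustration of the advantage it describes is a per-pixel temporal median over registered frames, which removes impulse ("salt and pepper") noise that a single-image filter must instead estimate away. A minimal sketch (the function name and frame setup are ours):

```python
import numpy as np

def temporal_median(frames):
    """Per-pixel median across a stack of registered video frames.
    An impulse rarely corrupts the same pixel in most frames, so the
    median recovers the underlying value without blurring detail."""
    return np.median(np.stack(frames, axis=0), axis=0)

# Static scene observed in 5 frames, each with an impulse at a different site.
scene = np.full((8, 8), 100.0)
frames = []
for i in range(5):
    f = scene.copy()
    f[i, i] = 255.0   # salt noise at a different pixel per frame
    frames.append(f)
clean = temporal_median(frames)
```

Since each pixel is corrupted in at most one of the five frames, the median restores the scene exactly; global noise such as motion blur needs more elaborate multi-frame models.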
Title:
A robust image watermarking technique based on spectrum analysis and pseudorandom sequences

Author(s):
Anastasios Kesidis and Basilios Gatos
Abstract:
In this paper a watermarking scheme is presented that embeds the watermark message in randomly chosen coefficients along a ring in the frequency domain using non-maximal pseudorandom sequences. The proposed method determines the longest possible sequence that corresponds to each watermark bit for a given number of available coefficients. Furthermore, an extra parameter is introduced that controls the robustness-versus-security trade-off of the encoding process. This parameter defines the size of a subset of available coefficients in the transform domain that are used for watermark embedding. Experimental results show that the method is robust to a variety of image processing operations and geometric transformations.

Title:
COLOR CALIBRATION OF AN ACQUISITION DEVICE

Author(s):
LEGRAND Anne-Claire, TREMAU Alain and VURPILLOT Virginie
Abstract:
Color-calibrated acquisition is of strategic importance when high-quality imaging is required, such as for imaging works of art. The aim of calibration is to correct the raw acquired image for the various acquisition device signal deformations, such as noise, lighting non-uniformity, white balance and color deformation, due in great part to the camera's spectral sensitivities. We first present reference color data computation obtained from the camera's spectral sensitivities and the reflectance of reference patches taken from the Gretag MacBeth Color Chart DC. Then we give a color calibration method based on linear regression. We finally evaluate the quality of the applied calibration and present some resulting calibrated images.

Title:
IMAGE BASED STEGANOGRAPHY AND CRYPTOGRAPHY

Author(s):
Luca Iocchi and Domenico Bloisi
Abstract:
In this paper we describe a method for integrating Cryptography and Steganography through image processing. In particular, we present a system able to perform steganography and cryptography at the same time, using images as cover objects for steganography and as keys for cryptography. We will show that such a system is an effective steganographic one (making a comparison with the well-known F5 algorithm) and is also a theoretically unbreakable cryptographic one (demonstrating that our system is equivalent to the Vernam Cipher).

Title:
Methods used in increased resolution processing

Author(s):
Stefan van der Walt and Ben Herbst
Abstract:
A polygon-based interpolation algorithm is presented for use in stacking RAW CCD images. The algorithm improves on linear interpolation in this scenario by closely describing the underlying geometry. In a comparison, 25 frames are stacked. When stacking images, the images must be accurately aligned. We present a novel implementation of the log-polar transform that overcomes its prohibitively expensive computation, resulting in fast, robust image registration. This is demonstrated by registering and stacking CCD frames of stars taken by a telescope.

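The authors' fast implementation is not described in the abstract; for reference, a straightforward (unoptimized) log-polar resampling can be sketched as below. Its value for registration is that a rotation of the input becomes a circular shift along the angular axis and a scaling becomes a shift along the radial axis. Grid sizes and nearest-neighbour sampling are illustrative assumptions:

```python
import numpy as np

def log_polar(img, n_rho=32, n_theta=64):
    """Sample `img` on a log-polar grid centred on the image centre,
    using nearest-neighbour lookup. Rows index log-radius, columns
    index angle."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))   # radii 1 .. r_max
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    y = np.clip(np.round(cy + rho[:, None] * np.sin(theta)), 0, h - 1).astype(int)
    x = np.clip(np.round(cx + rho[:, None] * np.cos(theta)), 0, w - 1).astype(int)
    return img[y, x]

# Radially symmetric test image: value = distance from the centre.
img = np.fromfunction(lambda y, x: np.hypot(y - 32, x - 32), (65, 65))
lp = log_polar(img)
```

Correlating two log-polar magnitude spectra then recovers rotation and scale as simple translations, which is the usual registration use of this transform.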
Title:
ROBUST CAMERA CALIBRATION

Author(s):
Stephan Rupp and Matthias Elter
Abstract:
The estimation of camera parameters is a fundamental step for many applications in the industrial and medical fields, especially when the extraction of 3d information from 2d intensity images is the focus of a particular application. Usually, the estimation process is called camera calibration, and it is performed by taking images of a special calibration object. From these shots the image coordinates of the projected calibration marks are extracted, and the mapping from the 3d world coordinates to the 2d image coordinates is calculated. To attain a well-suited mapping, the calibration images must satisfy certain constraints in order to ensure that the underlying mathematical algorithms are well-posed. Thus, the quality of the estimation severely depends on the choice of the input images. In this paper we propose a generic calibration framework that is robust against ill-posed images, as it determines the subset of images yielding the optimal model fit error with respect to a certain quality measure.

Title:
CIRCULAR PROCESSING OF THE HUE VARIABLE

Author(s):
Alfredo Restrepo, Carlos Rodríguez and Camilo Vejarano
Abstract:
Unlike many magnitudes dealt with in engineering, the hue variable of a colour image is circular and requires special treatment. Special techniques have been advanced in statistics for the analysis of data from angular variables, and likewise in image processing for the processing of the hue variable. We give definitions of the median and of the range of angular data and apply their running versions to images in order to smooth them and to detect hue edges. We also give definitions of hue morphology: one based on the topological concept of lifting and on grey-level morphology; another wholly given in a circular context.

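The authors' exact definitions are not reproduced in the abstract; one standard definition of the median for angular data, which a running hue filter could use, picks the sample minimizing the summed circular (arc) distance to all other samples. A minimal sketch under that assumption:

```python
import numpy as np

def circ_dist(a, b):
    """Shortest angular distance between angles a and b (radians)."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def circular_median(hues):
    """Sample hue minimizing the sum of arc distances to all others --
    a common definition of the median for circular data. A linear
    median would fail for hues straddling the 0/2*pi wrap-point."""
    hues = np.asarray(hues, dtype=float)
    costs = circ_dist(hues[:, None], hues[None, :]).sum(axis=1)
    return hues[np.argmin(costs)]

# Hues clustered around the red wrap-point: 6.1, 6.2, 0.1 radians.
m = circular_median([6.1, 6.2, 0.1])
```

On this example the circular median is 6.2, the middle of the wrapped arc, whereas sorting the raw numbers would wrongly report 6.1; the circular range would be defined analogously on arc distances.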
Title:
Spherical Image Denoising and its Application to Omnidirectional Imaging

Author(s):
KACHI DJEMAA, STEPHANIE BIGOT, SYLVAIN DURAND and EL MUSTAPHA MOUADDIB
Abstract:
This paper addresses the problem of spherical image processing. Thanks to projective geometry, an omnidirectional image can be represented as a function on the sphere S². The target application is omnidirectional image smoothing. We describe a new smoothing method for spherical images: we introduce a suitable Wiener filter and apply the Tikhonov method to these images. In order to compare their performances, we present the most commonly used classical spherical kernels. We present several examples of filtering real and synthetic spherical images.

Title:
DEMOSAICING LOW RESOLUTION QVGA BAYER PATTERN

Author(s):
Emanuele Menegatti and Tommaso Guseo
Abstract:
In this paper, we present a solution for the interpolation of low resolution digital images. Many digital cameras can function in two resolution modes: VGA (i.e., 640×480) and QVGA (i.e., 320×240). These cameras use a single sensor covered with a Color Filter Array (CFA). The CFA allows only one color component to be measured at each pixel; the remaining color components must be interpolated, an operation called demosaicing. There is no standard way to interpolate the QVGA Bayer pattern, and most of the known demosaicing algorithms are not suitable. In this paper, we propose a new solution for the interpolation of the QVGA Bayer pattern. Experimental results using digital images and an evaluation function confirm the effectiveness of the interpolation method. The use of the QVGA resolution is important in low-cost and low-power embedded hardware. As an application, we chose the RoboCup domain, and in particular our Robovie-M humanoid robot competing in the RoboCup Kid-Size Humanoids League.

Area 2 - Image Analysis

Title:
A NEW METHOD FOR VIDEO SOCCER SHOT CLASSIFICATION

Author(s):
Youness TABII, Mohamed OULD DJIBRIL, Youssef HADI and Rachid OULAD HAJ THAMI
Abstract:
A shot is often used as the basic unit for both video analysis and indexing. In this paper we present a new method for soccer shot classification on the basis of playfield segmentation. First, we detect the dominant color component, by assuming that playfield pixels are green (the dominant color). Second, the segmentation process begins by dividing frames into a 3:5:3 format and then classifying them. The experimental results of our method are very promising and improve the performance of shot detection.

Title:
Fully-Automatic Improvement of the Geometry of a Vessel Graph

Author(s):
Jan Bruijns, Frans Peters, Robert-Paul Berretty and Bart Barenbrug
Abstract:
Volume representations of blood vessels acquired by 3D rotational angiography are very suitable for diagnosing a stenosis or an aneurysm. For optimal treatment, physicians need to know the shape parameters of the diseased vessel parts. Therefore, we developed a method for semi-automatic extraction of these parameters from a surface model of the vessel boundaries. To facilitate fully-automatic shape extraction along the vessels, we developed a method to generate a vessel graph. This vessel graph represents the topology faithfully. However, the nodes and the branches are not always located close to the center lines of the vessels. Nodes and branches outside the center region decrease the accuracy of the extracted shape parameters. In this paper we present a method to improve the geometry of a vessel graph.

Title:
Robust Skyline Extraction Algorithm for Mountainous Images

Author(s):
Sung Woo Yang, Ihn Cheol Kim and Jin Soo Kim
Abstract:
Skyline extraction in mountainous images, which has been used for the navigation of vehicles or micro unmanned air vehicles, is very hard to implement because of the complexity of skyline shapes, occlusions by the environment, the difficulty of detecting precise edges, and noise in an image. In spite of these difficulties, skyline extraction is a very important topic that can be applied to various fields of unmanned vehicle applications. In this paper, we develop a robust skyline extraction algorithm using two-scale Canny edge images, topological information and the location of the skyline in an image. The two-scale Canny edge images are composed of a High Scale Canny edge image, which satisfies the good localization criterion, and a Low Scale Canny edge image, which satisfies the good detection criterion. By applying each image at the proper steps of the algorithm, we obtain good performance in extracting skylines from images of complex environments. The performance of the proposed algorithm is demonstrated by experimental results using various images and compared with an existing method.

Title:
IMPROVED ADAPTIVE BINARIZATION TECHNIQUE FOR DOCUMENT IMAGE ANALYSIS

Author(s):
Puja Lal, Lal Chandra, Raju Gupta, Arun Tayal and Dinesh Ganotra
Abstract:
The technology of image capturing devices has graduated from Black & White (B&W) to Color; still, the majority of document image analysis and extraction functionalities work on B&W documents only. The quality of document images directly scanned as B&W is not good enough for further analysis. Moreover, nowadays documents are getting more and more complex, with a variety of background schemes, color combinations, light text on dark background (reverse video), etc. Hence an efficient binarization algorithm becomes an integral step of the preprocessing stage. In our proposed algorithm we have modified the adaptive Niblack method [1] of thresholding to make it more efficient and to handle reverse video cases as well. The proposed algorithm is fast and invariant to factors involved in the thresholding of document images such as ambient illumination, contrast stretch and shading effects. We have also used gamma correction before applying the proposed binarization algorithm. This gamma correction is adaptive to the brightness of the document image and is found from a predetermined equation of brightness versus gamma. Based upon the results of experiments, an optimal window size for the local binarization scheme is also proposed.

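For context, the classic Niblack method that the paper builds on thresholds each pixel against its local mean plus k times the local standard deviation. The sketch below shows only that baseline; the paper's enhancements (reverse-video handling, adaptive gamma correction) are not reproduced, and the window size and k value are conventional defaults, not the authors' optimal choices:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_binarize(img, window=15, k=-0.2):
    """Classic Niblack local thresholding: a pixel is background (True)
    when it exceeds T = local_mean + k * local_std over a sliding window.
    Negative k pushes the threshold below the mean, so dark ink on a
    light page falls below T."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size=window)
    sq_mean = uniform_filter(img * img, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return img > mean + k * std

# Dark "text" block on a light page.
page = np.full((20, 20), 200.0)
page[8:12, 8:12] = 0.0
bw = niblack_binarize(page)
```

Because the threshold is computed per window, the method adapts to shading and illumination gradients that defeat a single global threshold.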
Title:
Color Models of Shadow Detection in Video Scenes

Author(s):
Csaba Benedek and Tamás Szirányi
Abstract:
In this paper we address the problem of appropriately modelling shadows in color images. While some previous works have compared the different approaches regarding their model structure, a comparative study of color models is still missing. This paper addresses the continuing need to define the appropriate color space for this central surveillance problem. We introduce a statistical and parametric shadow model-framework which can work with different color spaces, and perform a detailed comparison with it. We show experimental results regarding the following questions: (1) What is the gain of using color images instead of grayscale ones? (2) What is the gain of using uncorrelated spaces instead of the standard RGB? (3) Are chrominance (illumination invariant), luminance, or "mixed" spaces more effective? (4) In which scenes are the differences significant? We evaluated the metrics both in color based clustering of the individual pixels and in Bayesian foreground-background-shadow segmentation. Experimental results on real-life videos show that the CIE L*u*v* color space is the most efficient.

Title:
Fast spot hypothesizer for 2-DE research

Author(s):
Peter Peer and Luis Galo Corzo
Abstract:
Two-dimensional gel electrophoresis (2-DE) images show the expression levels of several hundred proteins, where each protein is represented as a blob-shaped spot of grey level values. The spot detection, i.e. segmentation, process has to be efficient, as it is the first step in gel processing. Such extraction of information is a very complex task. In this paper we propose a real-time spot detector that is basically a morphology-based method using seeded region growing as its central paradigm and relying on spot correlation information. The method is tested on gels with human samples from the SWISS-2DPAGE (two-dimensional polyacrylamide gel electrophoresis) database. The average time to process an image is less than a second, while the results are very intuitive for human perception, and as such they help the user focus on important parts of the gel in subsequent processing. In gels from the mentioned database with fewer than 50 spots identified as proteins (proteins that compose a proteome), the algorithm detects all obvious spots.

Title: |
Log-Unbiased Large-Deformation Image Registration
|
Author(s): |
Igor Yanovsky, Stanley Osher, Paul M. Thompson and Alex D. Leow |
Abstract: |
In the past decade, information theory has been studied extensively in medical imaging. In particular, image matching by maximizing mutual information has been shown to yield good results in multi-modal image registration. However, there have been few rigorous studies to date investigating the statistical aspects of the resulting deformation fields. Different regularization techniques have been proposed, sometimes generating deformations very different from one another. In this paper, we apply information theory to quantify the magnitude of deformations. We examine the statistical distributions of Jacobian maps in the logarithmic space, and develop a new framework for constructing log-unbiased image registration methods. The proposed framework yields both theoretically and intuitively correct deformation maps, and is compatible with large-deformation models. In the results section, we test the proposed method using pairs of synthetic binary images, two-dimensional serial MRI images, and three-dimensional serial MRI volumes. We compare our results to those computed with the viscous fluid registration method, and demonstrate that the proposed method is advantageous in recovering voxel-wise local tissue change. |
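The log-space view of Jacobian maps can be illustrated with a small sketch: for a discrete 2D deformation field, the Jacobian determinant is computed by finite differences and examined in the logarithmic domain, where expansion and shrinkage become symmetric (an illustrative toy, not the proposed registration framework):

```python
import numpy as np

def log_jacobian_map(phi):
    """phi: (H, W, 2) deformation field giving mapped (y, x)
    coordinates per pixel. Returns the log of the Jacobian
    determinant, computed with finite differences."""
    dphiy_dy = np.gradient(phi[..., 0], axis=0)
    dphiy_dx = np.gradient(phi[..., 0], axis=1)
    dphix_dy = np.gradient(phi[..., 1], axis=0)
    dphix_dx = np.gradient(phi[..., 1], axis=1)
    det = dphiy_dy * dphix_dx - dphiy_dx * dphix_dy
    return np.log(det)

# Identity mapping: Jacobian determinant is 1 everywhere, log = 0,
# i.e. a deformation with no tissue change is "unbiased" in log space.
H, W = 8, 8
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
phi = np.stack([ys, xs], axis=-1).astype(float)
print(np.allclose(log_jacobian_map(phi), 0.0))  # → True
```

In log space a local doubling of volume (log 2) and a local halving (log 1/2 = -log 2) are equal and opposite, which is the intuition behind penalizing the statistics of the log-Jacobian rather than the Jacobian itself.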
|
Title: |
Compact (and Accurate) Early Vision Processing in the Harmonic Space
|
Author(s): |
Silvio P. Sabatini, Giulia Gastaldi, Fabio Solari, Karl Pauwels, Marc M. Van Hulle, Javier Diaz, Eduardo Ros, Nicolas Pugeault and Norbert Krueger |
Abstract: |
The efficacy of anisotropic versus isotropic filtering is analyzed with respect to general phase-based metrics for early vision attributes. We verified that the spectral information content gathered through oriented frequency channels is characterized by high compactness and flexibility, since a wide range of visual attributes emerge from different hierarchical combinations of the same channels. We observed that it is preferable to construct a multichannel, multiorientation representation, rather than using a more compact representation based on an isotropic generalization of the analytic signal. The complete harmonic content is then combined in the phase-orientation space only at the final stage, to come up with the ultimate perceptual decisions, thus avoiding an "early condensation" of basic features. The resulting algorithmic solutions reach high performance in real-world situations at an affordable computational cost.
|
|
Title: |
COMPARATIVE STUDY OF CONTOUR FITTING METHODS IN SPECKLED IMAGES
|
Author(s): |
Juliana Gambini, María Elena Buemi, Alejandro Frery, Julio Jacobo Berllés and Marta Mejail |
Abstract: |
Images obtained with coherent illumination are affected by a noise called speckle, which is inherent to this type of imaging system. In this work, speckled data are statistically treated with a multiplicative model using the family of G distributions. One of the parameters of these distributions can be used to characterize
the different degrees of roughness found in speckled data, and we use this information to find boundaries between different regions within the image.
Two region contour detection methods for speckled imagery are presented and compared. The first maximizes a likelihood function over the speckled data, and the second applies anisotropic diffusion to roughness estimates. Detected contours are represented with B-spline curves.
In order to compare the behaviour of the two methods we performed a Monte Carlo experiment, consisting of the generation of a set of test images with a randomly shaped region, which is considered in the literature a difficult contour to fit. The mean square error was then calculated for each test image, for both methods. |
|
Title: |
Improving junction detection by semantic interpretation
|
Author(s): |
Shi Yan, Florian Pilz, Norbert Krueger and Sinan Kalkan |
Abstract: |
Every junction detector has a set of thresholds for deciding on the
junctionness of image points. Low-contrast junctions may fail to pass such
thresholds and thus go undetected, while lowering the thresholds to find such
junctions leads to spurious junction detections at other image points.
In this paper, we implement a junction-regularity measure to improve the localization of junctions, and
we develop a method to create semantic interpretations of arbitrary junction configurations at
the improved junction positions.
We propose to utilize such a semantic interpretation as a "feedback mechanism"
to filter false-positive junctions. We test our proposals
on natural images using the Harris and SUSAN operators as well as a continuous concept of intrinsic dimensionality. |
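For reference, the thresholding trade-off described above can be reproduced with a plain Harris cornerness sketch (the standard operator only; the regularity measure and semantic interpretation are beyond this toy, and the box window is a simplification of the usual Gaussian):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Cornerness R = det(M) - k * trace(M)^2, where M sums the
    gradient outer products over a 3x3 window."""
    Iy, Ix = np.gradient(img.astype(float))

    def box3(a):  # 3x3 neighbourhood sum via zero padding
        p = np.pad(a, 1)
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ix * Ix), box3(Iy * Iy), box3(Ix * Iy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# A step "corner": bright quadrant in a dark image.
img = np.zeros((11, 11))
img[5:, 5:] = 1.0
R = harris_response(img)
# The corner outscores both a flat patch and a straight edge;
# a junctionness threshold on R is what separates them.
print(R[5, 5] > R[0, 0] and R[5, 5] > R[8, 5])  # → True
```

Lowering the threshold on R admits weaker corners but also admits noise-induced maxima, which is exactly the false-positive problem the semantic feedback mechanism is meant to filter.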
|
Title: |
VIDEO MODELING USING 3-D HIDDEN MARKOV MODEL
|
Author(s): |
Joakim Jitén and Bernard Merialdo |
Abstract: |
Statistical modeling methods have become critical for many image processing problems, such as segmentation, compression and classification. In this paper we propose and experiment with a computationally efficient simplification of three-dimensional Hidden Markov Models. Our proposed model relaxes the dependencies between neighboring state nodes to a random uni-directional dependency by introducing a three-dimensional dependency tree (3D-DT HMM). To demonstrate the potential of the model we apply it to the problem of tracking objects in a video sequence, and we explore the effects of the random tree and of smoothing techniques. Experiments demonstrate the potential of the model as a tool for tracking video objects at an efficient computational cost. |
|
Title: |
Human Eye Localization Using Edge Projections
|
Author(s): |
Mehmet Turkan, Montse Pardas and A. Enis Cetin |
Abstract: |
In this paper, a human eye localization algorithm for images and video is presented for faces in frontal pose and upright orientation. A given face region is filtered with the high-pass filter of a wavelet transform. In this way, the edges of the region are highlighted and a caricature-like representation is obtained. After analyzing the horizontal projections and profiles of edge regions in the high-pass filtered image, candidate points for each eye are detected. All candidate points are then classified using a support vector machine based classifier. The location of each eye is estimated from the most probable of the candidate points. It is experimentally observed that our eye localization method provides promising results for both image and video processing applications. |
|
Title: |
Optimal Spanning Trees Mixture based probability approximation for Skin Detection
|
Author(s): |
Sanaa EL FKIHI, Mohamed Daoudi and Driss Aboutajdine |
Abstract: |
In this paper we develop a machine-learning approach to skin detection in color images. Our contribution is based on optimal spanning tree distributions, which are widely used in many optimization areas. Under some assumptions, we propose a mixture of optimal spanning trees to approximate the true skin (or non-skin) class probability in a supervised algorithm.
A theoretical proof of the optimal spanning trees mixture is given. Furthermore, the performance of our method is assessed on the Compaq database by measuring the Receiver Operating Characteristic curve and the area under it. These measures show that the proposed model outperforms both a random optimal spanning tree model and the baseline one. |
|
Title: |
3D Volume watermarking using 3D Krawtchouk moments
|
Author(s): |
Petros Daras, Athanasios Mademlis, Dimitrios Tzovaras and Michael Strintzis |
Abstract: |
In this paper a novel blind watermarking method for 3D volumes based on Weighted 3D Krawtchouk Moments is proposed. The watermark is created by a pseudo-random number generator and is embedded in low-order Weighted 3D Krawtchouk Moments. Watermark detection is blind, requiring only the user's key: the watermark bit sequence is created from the key and cross-correlated with the Weighted 3D Krawtchouk Moments of the possibly watermarked volume. The proposed method is imperceptible to the
user, and robust to geometric transformations (translation, rotation) and to cropping attacks. |
|
Title: |
INTELLIGENT TOPOLOGY PRESERVING GEOMETRIC DEFORMABLE MODEL
|
Author(s): |
Renato Dedic and Madjid Allili |
Abstract: |
Geometric deformable models (GDM) using the level set method provide a very efficient framework for image
segmentation. However, the segmentation results provided by these models depend on the contour
initialization. Moreover, it is sometimes necessary to prevent the contours from splitting and merging in order
to preserve topology. In this work, we propose a new method that can detect the correct boundary information
of segmented objects while preserving topology when needed. We adapt the stopping function g in a way that
allows us to control the contours' topology: by analyzing the regions where the edges of the contours are close,
we decide whether the contours should merge, split or remain as they are. This new formulation maintains the
advantages of standard GDM. Moreover, the topology-preserving constraint is enforced efficiently, so
the new algorithm is only slightly slower computationally than standard GDM. |
|
Title: |
Parametrization, Alignment and Shape of Spherical Surfaces
|
Author(s): |
Washington Mio, Xiuwen Liu and John Bowers |
Abstract: |
We develop parametrization and alignment techniques for shapes of spherical surfaces in 3D space with the goals of quantifying shape similarities and dissimilarities and modeling shape variations observed within a class of objects. The parametrization techniques are refinements of methods due to Praun and Hoppe and yield parametric mesh
representations of spherical surfaces. The main new element is an automated technique to align parametric meshes for shape interpolation and comparison. We sample aligned surfaces at the vertices of a dense common mesh structure to obtain a representation of the shapes as organized point-clouds. We apply Kendall's shape theory
to these dense point clouds to define geodesic shape distance, to obtain geodesic interpolations, and to study statistical properties of shapes that are relevant to problems in computer vision. Applications to the construction of compatible texture maps for a family of surfaces are also discussed. |
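The alignment-and-compare step on corresponded point clouds can be sketched with a full Procrustes distance, a standard construction in Kendall's shape theory (an illustrative sketch only; the paper's geodesic distances and parametrization pipeline are more involved):

```python
import numpy as np

def procrustes_distance(X, Y):
    """Shape distance between two point clouds with one-to-one
    correspondences: remove translation and scale, then optimally
    rotate Y onto X (orthogonal Procrustes) and compare."""
    def normalize(P):
        P = P - P.mean(axis=0)          # remove translation
        return P / np.linalg.norm(P)    # remove scale
    X, Y = normalize(X), normalize(Y)
    U, _, Vt = np.linalg.svd(Y.T @ X)   # optimal orthogonal alignment
    return np.linalg.norm(Y @ (U @ Vt) - X)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
# A rotated, scaled, shifted copy has the same *shape*,
# so its Procrustes distance to X is (numerically) zero.
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Y = 2.5 * X @ Rz.T + 1.0
print(procrustes_distance(X, Y) < 1e-8)  # → True
```

Applied to the densely resampled, aligned surfaces described above, this residual (or its geodesic counterpart) is what quantifies shape similarity across a class of objects.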
|
Title: |
Automated Image Analysis of Noisy Microarrays
|
Author(s): |
Sharon Greenblum, Max Krucoff, Jacob Furst and Daniela Raicu |
Abstract: |
A recent extension of DNA microarray technology has been its use in DNA fingerprinting. Our research involved developing an algorithm that automatically analyzes microarray images by extracting useful information while ignoring the large amounts of noise. Our data set consisted of slides generated from DNA strands of 24 different cultures of anthrax from isolated locations (all the same strain that differ only in origin-specific neutral mutations). The data set was provided by Argonne National Laboratories in Illinois. Here we present a fully automated method that classifies these isolates at least as well as the published AMIA (Automated Microarray Image Analysis) Toolbox for MATLAB with virtually no required user interaction or external information, greatly increasing efficiency of the image analysis. |
|
Title: |
Two-Level Method for 3D Non-Rigid Registration
|
Author(s): |
Chenyu Wu, Patrica E. Murtha, Andy B. Mor and Branko Jaramaz |
Abstract: |
We propose a two-level method for 3D
non-rigid registration and apply the method to the problem of
building statistical atlases of 3D anatomical structures. 3D
registration is an important problem in computer vision and a
challenging topic in the medical imaging field due to the geometrical
complexity of anatomical shapes and the size of medical image data. In
this work we adopt a two-level strategy to deal with these
problems. Compared with a general multi-resolution framework, we
use an interpolation to propagate the matching instead of
repeating the registration scheme at each resolution.
divided into two main parts: a low-resolution solution to the
correspondences and mapping of surface models using Chui and
Rangarajan's robust point matching algorithm, followed by an
interpolation to achieve high-resolution correspondences.
Experimental results demonstrate that our simple approach can
efficiently and accurately solve the non-rigid registration and
correspondences within complicated 3D data sets. In this paper we
present an example of this method in the construction of a
statistical atlas of the femur. |
|
Title: |
AN ADAPTIVE REGION GROWING SEGMENTATION FOR BLOOD VESSEL DETECTION FROM RETINAL IMAGES
|
Author(s): |
Md. Alauddin Bhuiyan, Baikunth Nath and Joselito Chua |
Abstract: |
Blood vessel segmentation from retinal images is an important issue. Although considerable research has been performed on blood vessel segmentation, significant improvement is still needed, particularly in minor vessel segmentation. In this paper we propose an edge-based vessel segmentation technique that is very efficient for segmenting blood vessels. As the local contrast of a vessel is unstable (i.e., its intensity varies), especially in unhealthy retinal images, it is very complicated to detect vessels in retinal images. We therefore adopt the edge-based segmentation technique to overcome the problem of the large intensity variation between major and minor vessels. Edges are detected using an adaptive gradient threshold within a region growing algorithm, from which parallel edges are computed to select vessels; this shows very good performance in blood vessel detection, including minor vessels. |
|
Title: |
Genetic Algorithm For Summarizing News Stories
|
Author(s): |
Mehdi Ellouze, Hichem Karray and Adel Mohamed Alimi |
Abstract: |
This paper presents a new approach to summarizing broadcast news using genetic algorithms. We propose to segment news programs into stories and then summarize each story by selecting the frames considered important, in order to obtain an informative pictorial abstract. The summaries can help viewers estimate the importance of the news video: by consulting story summaries we can determine whether the news video contains the desired topics. |
|
Title: |
A Loopy Belief Propagation approach to the Shape from Shading problem
|
Author(s): |
Markus Louw, Fred Nicolls and Dee Bradshaw |
Abstract: |
This paper describes a new approach to the shape from shading problem, using loopy belief propagation,
which is simple and intuitive. The algorithm is called Loopy Belief Propagation Shape-From-Shading (LBP-SFS).
It produces reasonable results on real and synthetic data, and surface information from sources other
than the image (e.g. range or stereo data) can readily be incorporated in this framework as prior information
about the surface elevation at any point. In addition, this algorithm demonstrates the use of linear interpolation
at the message-passing level within a loopy Bayesian network, which to the authors' knowledge has not been
previously explored. |
|
Title: |
A COMPARISON OF MODEL-BASED METHODS FOR KNEE CARTILAGE SEGMENTATION
|
Author(s): |
James Cheong, Nathan Faggian, Georg Langs, David Suter and Flavia Cicuttini |
Abstract: |
Osteoarthritis is a chronic and crippling disease affecting an increasing number of people each year. With no known cure, it is expected to reach epidemic proportions in the near future. Accurate segmentation of knee cartilage from magnetic resonance imaging (MRI) scans facilitates the measurement of cartilage volume present in a patient's knee,
thus enabling medical clinicians to detect the onset of osteoarthritis and, crucially, to study its effects. This paper compares four model-based segmentation methods popular for medical data segmentation, namely Active Shape Models (ASM), Active Appearance Models (AAM), Patch-based Active Appearance Models (PAAM), and Active Feature Models (AFM). A comprehensive analysis of how accurately these methods segment human tibial cartilage is presented. The results obtained were benchmarked against the current "gold standard" (cartilage segmented manually by trained clinicians) and indicate that modeling local texture features around each landmark provides the best results for segmenting human tibial cartilage. |
|
Title: |
PROBABILISTIC MODELLING AND FUSION FOR IMAGE FEATURE EXTRACTION WITH APPLICATIONS TO LICENCE PLATE DETECTION
|
Author(s): |
Rami Al-Hmouz, Subhash Challa and Duc Vo |
Abstract: |
The paper proposes a novel feature fusion concept for object extraction. The image feature extraction process
is modeled as a feature detection problem in noise. The geometric features are probabilistically modeled
and detected under various detection thresholds. These detection results are then fused within the Bayesian
framework to obtain the final features for further processing. The performance of this approach is compared
with traditional image feature extraction approaches in the context of the automatic license plate detection
problem. |
|
Title: |
Distinguishing Liquid and Viscous Black Inks using RGB Colour Space
|
Author(s): |
Haritha Dasari and Chakravarthy Bhagvati |
Abstract: |
Analysis of inks on questioned documents is often required
in the field of document examination. This paper provides
a novel approach to ink-type recognition for black inks using
image processing techniques. The ink type, liquid ink or
viscous ink, is derived from the colour properties of the
ink by extracting its amount of blackness. This classification
helps in distinguishing gel and roller pens from ballpoint
pens. Different types of inks exhibit different absorption
characteristics, which cause the colour and the distribution of
colour pixels to change. We have therefore carried out a detailed
analysis of colour spaces, and in particular the RGB colour space,
as it is the most widely used. We use multiple linear
regression to fit a plane to the RGB data points of the
writings in the RGB colour cube. The distance from
the pure black point to the fitted plane reveals the difference
in colour characteristics of the inks. Distance measures in the
RGB and HSV colour spaces are used to identify different
inks. The accuracy of identification is analysed using Type I and Type II errors. |
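The plane-fitting idea can be sketched as follows: a multiple linear regression of B on R and G defines a plane in the RGB cube, and the distance from pure black is read off from the fitted coefficients (the synthetic plane coefficients below are invented for illustration and carry no forensic meaning):

```python
import numpy as np

def plane_distance_from_black(rgb):
    """Multiple linear regression B ≈ a*R + b*G + c. The fitted
    plane is a*R + b*G - B + c = 0, so the distance from pure
    black (0, 0, 0) to it is |c| / sqrt(a^2 + b^2 + 1)."""
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    A = np.column_stack([R, G, np.ones_like(R)])
    (a, b, c), *_ = np.linalg.lstsq(A, B, rcond=None)
    return abs(c) / np.sqrt(a * a + b * b + 1.0)

# Synthetic "ink" pixels lying on the plane B = 0.5*R + 0.2*G + 30,
# whose exact distance from black is 30 / sqrt(0.25 + 0.04 + 1).
rng = np.random.default_rng(1)
RG = rng.uniform(0, 60, size=(200, 2))
B = 0.5 * RG[:, 0] + 0.2 * RG[:, 1] + 30.0
pts = np.column_stack([RG, B])
d = plane_distance_from_black(pts)
print(abs(d - 30.0 / np.sqrt(1.29)) < 1e-6)  # → True
```

Two inks with different absorption characteristics scatter their dark pixels on differently offset planes, so this single scalar distance already separates them.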
|
Title: |
REMOVING THE TEXTURE FEATURE RESPONSE TO OBJECT BOUNDARIES
|
Author(s): |
Padraig Corcoran and Adam Winstanley |
Abstract: |
Texture is a spatial property, and thus any features used to describe it must be calculated within a
neighbourhood. This process of integrating information over a neighbourhood leads to what we will refer to as
the texture boundary response problem, where an unwanted response is observed at object boundaries. This
response is due to features being extracted from a mixture of textures and/or an intensity edge between
objects. If segmentation is performed using these raw features, unwanted classes are generated along object
boundaries. To overcome this, post-processing of the feature images must be performed to
remove this response before a classification algorithm can be applied. To date this problem has received
little attention, with no evaluation of the alternative solutions available in the literature of which we are
aware. In this work we evaluate the known solutions to the boundary response problem and
find the separable median filter to be the current best choice. An in-depth evaluation of the separable median
filtering approach shows that it fails to remove certain parts or types of object boundary response. To
overcome this failing we propose two alternative techniques, which involve either post-processing of the
separable median filtered result or an alternative filtering technique. |
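The separable median filter evaluated above replaces the full 2-D median with two 1-D passes, one along rows and one along columns; a minimal sketch (illustrative, with edge padding as an assumed boundary rule):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def med1d(a, k, axis):
    """1-D median of length k along `axis`, edge-padded so the
    output has the same shape as the input."""
    pad_width = [(0, 0)] * a.ndim
    pad_width[axis] = (k // 2, k // 2)
    p = np.pad(a, pad_width, mode="edge")
    return np.median(sliding_window_view(p, k, axis=axis), axis=-1)

def separable_median(img, k=3):
    """Row pass then column pass: a cheap approximation of the
    full k*k 2-D median filter."""
    return med1d(med1d(img, k, axis=1), k, axis=0)

# An isolated boundary-response spike on a flat feature image
# is removed entirely by the separable filter.
img = np.full((7, 7), 10.0)
img[3, 3] = 100.0
out = separable_median(img)
print(np.allclose(out, 10.0))  # → True
```

Each 1-D pass costs O(k) per pixel rather than O(k^2), which is why the separable variant is attractive for post-processing dense feature images; the failure cases noted above arise because the two passes are not equivalent to a true 2-D median on elongated boundary responses.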
|
Title: |
Towards Objective Quality Assessment of Image Registration Results
|
Author(s): |
Birgit Moeller, Rafael Garcia and Stefan Posch |
Abstract: |
Geometric registration of visual images is a fundamental intermediate processing step in a wide variety of computer vision applications dealing with image sequence analysis. 2D motion recovery and mosaicing, 3D scene reconstruction and also motion detection approaches strongly rely on accurate registration results. However, automatically assessing the overall quality of a registration is a challenging task. In particular, optimization criteria used in registration are not necessarily deeply linked to the final quality of the result and often show a lack of local sensitivity. Within this paper we present a new approach for an objective quality metric in 2D image registration. The proposed method is based on local structure analysis and facilitates voting-techniques for error pooling leading to an objective measure that correlates well with the visual appearance of registered images. Since also a more detailed classification of observed differences according to various underlying error sources is provided the new measure not only yields a suitable base for objective quality assessment, but also opens perspectives towards an automatic and optimally adjusted correction of errors. |
|
Title: |
MULTIDIMENSIONAL WAVELET ANALYSIS FOR RECOGNITION OF LESIONS IN COLPOSCOPY TEST
|
Author(s): |
Diana Ivone Tapia López, Aldrin Barreto Flores and Leopoldo Altamirano Robles |
Abstract: |
Cervical cancer is an important worldwide disease due to its high rate of incidence in the population. Colposcopy is one of the diagnostic tests employed in the recognition of lesions; it performs a visual examination of the cervix based on the temporal reaction of the surface stained with acetic acid. In this paper we propose to evaluate the temporal texture changes produced by the acetic acid based on the concept of the wavelet-aggregated signal in order to identify lesions. An aggregated signal is a scalar signal providing maximum information on the most general variations present in all the processes analyzed, while at the same time suppressing components that are characteristic of individual processes. Texture metrics based on spatial information are used to temporally analyze the acetic acid response and deduce appropriate signatures. The temporal information is then analyzed using multidimensional wavelet analysis for the identification of lesions. |
|
Title: |
Image Matting using SVM and Neighboring Information
|
Author(s): |
Tadaaki Hosaka, Takumi Kobayashi and Nobuyuki Otsu |
Abstract: |
Image matting consists of extracting a foreground object from a static image by estimating the opacity of each pixel of the foreground image layer. This problem has recently been studied in the framework of optimizing a cost function. The common drawback of previous approaches is the decrease in performance when a foreground and its background have similar colors. To deal with this difficulty, we propose a cost function that considers not only a single pixel but also its neighboring pixels, and utilizes a support vector machine to enhance the discrimination between foreground and background. Optimization of the cost function can be performed by belief propagation. Experimental results show favorable matting performance in many cases. |
|
Title: |
Left Ventricle Image Landmarks Extraction Using Support Vector Machines
|
Author(s): |
Miguel Vera and Miguel Vera |
Abstract: |
This paper introduces an approach for efficient myocardial landmark detection in angiograms. Several anatomical landmarks located on the left ventricle are obtained by means of a support vector machine. The training set consists of a dataset of landmark and non-landmark 31x31-pixel patterns. Our support vector machine uses the structural risk minimization principle as its inference rule and a radial basis function kernel. In the training phase no false positives were registered, and in the detection phase 100% recognition was obtained. |
|
Title: |
ARRAY PROCESSING FOR PECTORAL MUSCLE SEGMENTATION in Mammographic Images
|
Author(s): |
Marot Julien, Adel Mouloud and Bourennane Salah |
Abstract: |
Thanks to a specific formalism for signal generation, it is possible to transpose an image processing problem into an array processing problem. The existing straight-line characterization method called Subspace-based LIne
DEtection (SLIDE) leads to models with the orientations and offsets of straight lines as the desired parameters.
We propose to extend this method to the case of distorted contour detection. For this purpose we
develop an automatic global thresholding method that provides a binary image containing the expected contour.
We apply our method to the detection of the pectoral muscle in mammographic images. |
|
Title: |
Geometric and Information Constraints for Automatic Landmark Selection in Colposcopy Sequences
|
Author(s): |
Juan David Garcia-Arteaga, Jan Kybic, Jia Gu and Wenjing Li |
Abstract: |
Colposcopy is a diagnostic method to visually detect cancerous and pre-cancerous tissue regions in the uterine cervix. A typical result is a sequence of cervical images captured at different times after the application of a contrast agent that must be spatially registered to compensate for patient, camera and tissue movement and on which progressive color and texture changes may be seen.
We present a method to automatically select correct landmarks for non-consecutive sequence frames captured at long time intervals from a group of candidate matches. Candidate matches are extracted by detecting and matching feature points in consecutive images. Selection is based on geometric constraints and a local rigid registration using mutual information. The results show that these landmarks may subsequently be used to either guide or evaluate the registration of these types of images.
|
|
Title: |
AN ACCURATE ALGORITHM FOR AUTOMATIC IMAGE STITCHING IN ONE DIMENSION
|
Author(s): |
Hitesh Ganjoo, Venkateswarlu Karnati, Pramod Kumar and Raju Gupta |
Abstract: |
Automatic stitching to generate panoramic or composite images finds use in a number of application areas. This paper addresses the accuracy of various image-stitching algorithms used in industry today on different types of real-world images shot under different conditions, and proposes an algorithm for stitching images in one dimension. The most robust image-stitching algorithms make use of feature descriptors to achieve invariance to image zoom, rotation and exposure change. The use of invariant feature descriptors in image matching and alignment makes them more accurate and reliable for a variety of images under different real-world conditions. We assess the accuracy of one such industrial tool, [AUTOSTICH], and of its underlying Scale Invariant Feature Transform (SIFT) descriptors on our dataset. The tool's performance is low in certain scenarios: many false matching points arise when the K-d tree (Friedman et al., 1977) matching algorithm is used to match corresponding descriptors. In this paper we propose refinements of the K-d tree results and use them to register the images. Our automatic stitching process can be broadly divided into three stages: feature point extraction, point refinement, and image transformation and blending. Our approach builds on the way a casual end-user captures images with a camera for panoramic stitching. We have tested the proposed approach on a variety of images, and the results show that the algorithm performs well in all scenarios. |
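One common refinement applied on top of k-d tree descriptor matching is Lowe's ratio test, which discards ambiguous nearest-neighbour matches; the sketch below uses brute-force search for clarity and is only an assumed stand-in for the specific refinements proposed in the paper:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with the ratio test:
    keep a match only if the best distance is clearly smaller
    than the second best (the usual filter on top of a k-d tree
    search)."""
    # Pairwise Euclidean distances between descriptor sets.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    rows = np.arange(len(desc_a))
    keep = d[rows, best] < ratio * d[rows, second]
    return [(int(i), int(best[i])) for i in np.flatnonzero(keep)]

rng = np.random.default_rng(2)
desc_b = rng.standard_normal((20, 8))
# Descriptors 0..4 of A are near-copies of 0..4 of B; the last
# descriptor is random and should rarely survive the ratio test.
desc_a = np.vstack([desc_b[:5] + 0.01 * rng.standard_normal((5, 8)),
                    rng.standard_normal((1, 8))])
matches = ratio_test_matches(desc_a, desc_b)
print(all((i, i) in matches for i in range(5)))  # → True
```

False matches typically have a best and second-best neighbour at similar distances, so the ratio test removes exactly the spurious correspondences that corrupt the subsequent image transformation stage.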
|
Title: |
Motion Blur Estimation At Corners
|
Author(s): |
Giacomo Boracchi and Vincenzo Caglioti |
Abstract: |
In this paper we propose a novel
algorithm to estimate motion parameters from a single blurred
image, exploiting geometrical relations between image intensities
at pixels of a region that contains a corner. Corners are
significant both for scene and motion understanding since they
permit a univocal interpretation of motion parameters.
Motion parameters are estimated locally in image regions, without
assuming uniform blur on the image, so that the algorithm also works
with blur produced by camera rotation and, more generally, with
space-variant blur. |
|
Title: |
COLOR AND TEXTURE BASED SEGMENTATION ALGORITHM FOR MULTICOLOR TEXTURED IMAGES
|
Author(s): |
Irene Fondón, Carmen Serrano and Begoña Acha |
Abstract: |
We propose a color-texture image segmentation algorithm based on multistep region growing that is able to deal with multicolored textures. Each of the colors in the texture to be segmented is taken as a reference color. Color and texture information are extracted from the image by constructing color distance images, one for each reference color, and a texture energy image. The color distance images are formed by calculating the CIEDE2000 distance in the L*a*b* color space to the colors that compose the multicolored texture. The texture energy image is extracted from statistical moments. The method segments the color information by means of an adaptive N-dimensional region growing, where N is the number of reference colors. The tolerance parameter is increased iteratively until an optimum is found, and its growth is determined by a step size that depends on the variance in each distance image for the currently grown region. The criterion for deciding the optimum value of the tolerance parameter depends on the contrast along the edge of the grown region, choosing the value that provides the region with the highest mean contrast with respect to the background. Additionally, this color multistep region growing is texture-controlled, in the sense that an extra condition is demanded for including a particular pixel in a region: the pixel needs to have the same texture as the rest of the pixels within the region. Results prove that the proposed method works very well on general-purpose images and significantly improves the results obtained with a previously published algorithm (author's work 1). |
|
Title: |
SHAPE COMPARISON OF FLEXIBLE OBJECTS. Similarity of Palm Silhouettes
|
Author(s): |
Leonid Mestetskiy |
Abstract: |
We consider the problem of comparing the shapes of elastic objects represented by binary bitmaps. Our approach to constructing a measure of similarity of such objects is based on the concept of a flexible object. A flexible object is defined as a family of circles of different sizes with centers on a planar graph with a tree-type structure. A set of admissible deformations, described as a group of transforms of this family that change the positions of the circles, is associated with each flexible object. The problem of estimating the similarity of flexible objects is solved by aligning them within the group of admissible transforms. We present a method for representing the shape of a binary bitmap as a flexible object, based on a continuous skeleton of the binary bitmap. We consider an application to the problem of biometric identification of a person by the shape of the palm. |
|
Title: |
Simultaneous Robust Fitting of Multiple Curves
|
Author(s): |
Jean-Philippe Tarel, Sio-Song Ieng and Pierre Charbonnier |
Abstract: |
In this paper, we address the problem of robustly recovering several
instances of a curve model from a single noisy data set with
outliers. Using M-estimators revisited in a Lagrangian formalism, we
derive an algorithm that we call SMRF, which extends the classical
Iterative Reweighted Least Squares algorithm (IRLS). Compared to the
IRLS, it features an extra probability ratio, which is classical in
clustering algorithms, in the expression of the weights. Potential
numerical issues are tackled by banning zero probabilities in the
computation of the weights and by introducing a Gaussian prior on
curve coefficients. Applications to camera calibration and
lane-markings tracking show the effectiveness of the SMRF algorithm,
which outperforms classical mixture model algorithms in the presence
of outliers. |
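The classical IRLS baseline that SMRF extends can be sketched roughly as follows. The Cauchy-style weight w = 1/(1 + (r/c)^2), the scale c, the iteration count, and the straight-line model are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def irls_line(x, y, iters=20, c=1.0):
    """Classical Iterative Reweighted Least Squares fit of a line.

    Sketch of the baseline SMRF extends (single curve, no mixture term).
    """
    A = np.stack([x, np.ones_like(x)], axis=1)
    theta = np.linalg.lstsq(A, y, rcond=None)[0]           # ordinary LS start
    for _ in range(iters):
        r = y - A @ theta                                  # residuals
        w = 1.0 / (1.0 + (r / c) ** 2)                     # robust weights
        W = np.diag(w)
        theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted LS step
    return theta

x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[7] += 50.0                                               # one gross outlier
slope, intercept = irls_line(x, y)
```

The SMRF extension described above additionally multiplies each weight by a per-curve probability ratio so that several curves can be recovered at once.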
|
Title: |
Robust Appearance Matching with Filtered Component Analysis
|
Author(s): |
|
Abstract: |
Appearance Models (AM) are commonly used
to model appearance and shape variation of objects in images. In
particular, they have proven useful for the detection, tracking, and
synthesis of people's faces from video. While AM have numerous
advantages relative to alternative approaches, they have at least
two important drawbacks. First, they are especially prone to local
minima in fitting; this problem becomes increasingly problematic as
the number of parameters to estimate grows. Second, often few, if any,
of the local minima correspond to the correct location of the
model. To address these problems, we propose Filtered Component
Analysis (FCA),
an extension of traditional Principal Component Analysis (PCA). FCA learns an optimal set of filters
with which to build a multi-band representation of the object. FCA
representations were found to be more robust than either grayscale
or Gabor-filter representations to problems of local minima. The effectiveness and
robustness of the proposed algorithm are demonstrated on both
synthetic and real data.
|
|
Title: |
Modeling Non-Gaussian Noise for Robust Image Analysis
|
Author(s): |
Jean-Philippe Tarel, Sio-Song Ieng and Pierre Charbonnier |
Abstract: |
Accurate noise models are important to perform reliable robust
image analysis. Indeed, many vision problems can be seen as
parameter estimation problems. In this paper, two noise models are
presented and we show that these models are convenient to
approximate observation noise in different contexts related to image
analysis. In spite of the numerous results on M-estimators, their
robustness is not always clearly addressed in the image analysis
field. Based on Mizera and M\"{u}ller's recent fundamental work, we
study the robustness of M-estimators for the two presented
noise models, in the fixed design setting. To illustrate the
interest of these noise models, we present three important image
analysis applications that can be solved within this framework: curve
fitting, clustering, and edge-preserving image smoothing.
|
|
Title: |
A Harris Corner Label Enhanced MMI Algorithm for Multi-Modal Airborne Image Registration
|
Author(s): |
Xiaofeng Fan, Harvey Rhody and Eli Saber |
Abstract: |
Maximization of Mutual information (MMI) is a method that is used widely for multi-modal image registration. However, classical MMI techniques utilize only regional and/or global statistical information and do not make use of spatial features. Several techniques have been proposed to extend MMI to use spatial information, but have proven to be computationally demanding. In this paper, a new approach is proposed to combine spatial information with MMI by using the Harris Corner Label (HCL) algorithm.
We use the HCL based MMI algorithm to accelerate the computation and improve the
registration over noisy images. Our results indicate that the HCL based registration technique yields superior performance on multimodal imagery when compared to its classical MMI based counterpart. |
|
Title: |
Accurate Image registration by combining feature-based matching and GLS-based motion estimation
|
Author(s): |
Raul Montoliu Colas and Filiberto Pla Bañon |
Abstract: |
In this paper, an accurate Image Registration method is presented. It combines a feature-based method, which can recover large motion magnitudes between images, with a Generalized Least-Squares (GLS) motion estimation technique that estimates motion parameters accurately. The feature-based method gives an initial
estimation of the motion parameters, which will be refined using the GLS motion estimator. The proposed formulation of the motion estimation problem provides an additional constraint that helps to
match the pixels located in the edges of the images. That is
achieved thanks to the use of a weight for each observation. The
proposed method provides high weight values to the observations
considered as inliers, i.e. the ones that support the motion
model, and low values to the ones considered as outliers. Our
approach has been tested on challenging real images using both
affine and projective motion models. |
|
Title: |
Brain Volumetry Estimation Based on Statistical and Morphological Techniques
|
Author(s): |
Juan M. Molina, Augusto Silva, João P. Cunha and Miguel Angel Guevara |
Abstract: |
Magnetic Resonance Imaging has become one of the most important tools for anatomic and functional assessment of complex brain structures. In neurology, various indicators related to brain volume measurements find application in several fields, such as diagnosis, surgical planning, the study of pathologies, disease prevention, and tracking the evolution of diseases with or without medical treatment. In this work we propose a new method for automatic brain volumetry based on a suitable combination of histogram analysis, optimal thresholding, prior geometric information and mathematical morphology techniques. Our results were compared with three methods for brain volumetry well established in the neuroscience community: Brain Extraction Tool, Brain Suite and Statistical Parametric Mapping. Finally, we evaluated a dataset of 25 patient studies with respect to precision, resolution and inter-examination features, and it was statistically demonstrated that our method presents results competitive with the others. |
|
Title: |
Branches Filtering Approach for Max-Tree
|
Author(s): |
I Ketut Eddy Purnama, Michael H.F. Wilkinson, Albert G. Veldhuizen, Peter M.A. van Ooijen, Jaap Lubbers, Tri A. Sardjono and Gijbertus J. Verkerke |
Abstract: |
A new filtering approach called branches filtering is presented. It is applied to the Max-Tree representation of an image. Instead of applying filtering criteria to all nodes of the tree, this approach evaluates only the leaf nodes. The expected objects can be found by collecting a number of parent nodes of the selected leaf nodes: the more parent nodes involved, the wider the area of the expected objects. The maximum number of parents (PLmax) can be determined by inspecting the output image before unexpected structures appear. Different images were found to have different PLmax values. The branches filtering approach is suitable for extracting objects from a noisy image as long as these objects can be recognised from their prominent information, such as intensity, shape, or other scalar or vector values. Furthermore, the optimum result is achieved when the areas carrying the prominent information are present in the leaf nodes. Experiments on extracting bacteria from a noisy image, localizing bony parts in a speckled ultrasound image, and acquiring certain features from a natural image gave the expected results. Applying the branches filtering approach to a 3D MRA image of a human brain to extract the blood vessels also gave the expected result. The results show that branches filtering can be used as an alternative to the original filtering approach of the Max-Tree. |
|
|
Area 3 - Image Understanding
|
Title: |
Traffic Sign Classification using Error Correcting Techniques
|
Author(s): |
Sergio Escalera, Oriol Pujol and Petia Radeva |
Abstract: |
Traffic sign classification is a challenging problem in Computer Vision due to the high variability of sign appearance in uncontrolled environments. Lack of visibility, illumination changes, and partial occlusions are just a few problems. In this paper, we introduce a classification technique for traffic sign recognition by means of Error Correcting Output Codes. Recently, new proposals of coding and decoding strategies for the Error Correcting Output Codes framework have been shown to be very effective for multiclass problems. We review the state-of-the-art ECOC strategies and combinations of problem-dependent coding designs and decoding techniques, and apply these approaches to the Mobile Mapping problem. We detect the sign regions by means of AdaBoost: an attentional cascade with the extended set of Haar-like features estimated on the integral image shows great performance at the detection step. Then, a spatial normalization using the Hough transform and fast radial symmetry is performed; the model fitting improves the final classification performance by normalizing the sign content. Finally, we classify a wide set of traffic sign types, obtaining high success rates in adverse conditions. |
|
Title: |
MODIFIED DISTANCE SIGNATURE AS AN ENHANCIVE DESCRIPTOR OF COMPLEX PLANAR SHAPES
|
Author(s): |
Andrzej Florek and Tomasz Piascik |
Abstract: |
In this paper, a simple and efficient approach to recognizing and classifying planar shapes is proposed, based on comparing the areas of dynamically sampled classic signatures. The presented approach is dedicated to the recognition of convex and non-convex planar shapes containing openings in the area enclosed by the boundary. A way to calculate a discrete representation of classic distance-versus-angle signatures, together with a reduction of memory requirements and the number of calculations, is presented. Analysis of classification experiments on images of real objects (car-engine collector seals) indicates good properties of the dissimilarity coefficients determined using the modified signature taken as an object descriptor. |
|
Title: |
Motion information combination for fast human action recognition
|
Author(s): |
|
Abstract: |
In this paper, we study the human action recognition problem based on motion features directly extracted from video. In order to implement a fast human action recognition system, we select simple features that can be obtained with non-intensive computation. We propose to use the motion history image (MHI) as our fundamental representation of the motion. This is then further processed to give a histogram of the MHI and the Haar wavelet transform of the MHI. The processed MHI thus allows a combined feature vector to be computed cheaply, and this has a lower dimension than the original MHI. Finally, this feature vector is used in an SVM-based human action recognition system. Experimental results demonstrate the method to be efficient, allowing it to be used in real-time human action classification systems. |
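The motion history image underlying this representation can be sketched by its standard update recurrence; the τ and decay values here are assumptions, not the paper's settings.

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=255, delta=32):
    """One motion-history-image update step.

    Pixels where motion_mask is True jump to tau; all others decay by
    delta towards zero, so recent motion stays bright and old motion fades.
    """
    return np.where(motion_mask, float(tau), np.maximum(mhi - delta, 0.0))

h = np.zeros((3, 3))
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
h = update_mhi(h, mask)                           # moving pixel set to tau
h = update_mhi(h, np.zeros((3, 3), dtype=bool))   # then decays by delta
```

The histogram and Haar-wavelet features described in the abstract would then be computed from such an MHI array.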
|
Title: |
Active Object Detection
|
Author(s): |
Guido de Croon |
Abstract: |
We investigate an object-detection method that employs active image scanning. The method extracts a local sample at the current scanning position and maps it to a shifting vector indicating the next scanning position. The goal of the method is to move the scanning position to an object location, skipping regions in the image that are unlikely to contain an object. We apply the active object-detection method (AOD-method) to a real-world task of face detection and compare it with existing window-sliding object-detection methods. These methods employ passive scanning, since they extract local samples at all points of a predefined grid. From the empirical results, we conclude that the AOD-method performs on par with the window-sliding object-detection methods, while being computationally less expensive. In a conservative estimate, the AOD-method extracts 45 times fewer local samples than an object detector of the window-sliding approach, roughly halving the computational effort. This saving of computational effort is obtained at the expense of application generality. |
|
Title: |
An approximate reasoning technique for segmentation on compressed MPEG video
|
Author(s): |
|
Abstract: |
In this work we present a system that linguistically describes the position of a moving object in each frame of a video stream. This description is obtained directly from MPEG motion vectors using the theory of fuzzy sets and approximate reasoning. The lack of information and noisy data in the compressed domain justify the use of fuzzy logic, and the use of linguistic labels is necessary since the system's output is a semantic description of trajectories and positions. Several methods for extracting motion information from MPEG motion vectors can be found in the reviewed literature; as no numerical results are reported for these methods, we present a statistical study of the input motion information and compare the output of the system depending on the selected extraction technique.
For system performance evaluation it would be necessary to determine the error between the semantic output and the desired object's description.
This comparison is carried out between the (x,y) pixel coordinates of the center position of the object and the resulting value of a defuzzification method applied to the description labels. The system has been evaluated using three different video samples of the standard datasets provided by several PETS (Performance Evaluation of Tracking and Surveillance) workshops. |
|
Title: |
Cued Speech hand shape recognition
|
Author(s): |
Thomas Burger, Oya Aran, Alexandra Urankar, Akarun Lale and Alice Caplier |
Abstract: |
As part of our work on hand gesture interpretation, we present our results on hand shape recognition. Our method is based on attribute extraction and multiple binary SVM classification. The novelty lies in the way the fusion of the partial classification results is performed. This fusion is (1) more efficient in terms of information theory and leads to more accurate results, and (2) general enough to allow other sources of information to be taken into account: each SVM output is transformed into a belief function, and all the corresponding functions are fused together with other external belief-based sources of information. |
|
Title: |
DOCUMENT IMAGE ZONE CLASSIFICATION - A Simple High-Performance Approach
|
Author(s): |
Daniel Keysers, Faisal Shafait and Thomas Breuel |
Abstract: |
We describe a simple, fast, and accurate system for document image zone classification — an important subproblem
of document image analysis — that results from a detailed analysis of different features. Using
a novel combination of known algorithms, we achieve a very competitive error rate of 1.46% (n = 13811)
in comparison to (Wang et al., 2006) who report an error rate of 1.55% (n = 24177) using more complicated
techniques. The experiments were performed on zones extracted from the widely used UW-III database, which
is representative of images of scanned journal pages and contains ground-truthed real-world data. |
|
Title: |
Statistical Analysis of Second-order Relations of 3D Structures
|
Author(s): |
Sinan Kalkan, Florentin Woergoetter and Norbert Krueger |
Abstract: |
Algorithmic 3D reconstruction methods like stereopsis or structure from motion fail to extract
depth at homogeneous image structures where the human visual system succeeds and is able to estimate
depth.
In this paper, using chromatic 3D range data, we analyze in which way depth in homogeneous structures
is related to the depth at the bounding edges. For this, we first extract the local 3D structure of
regularly sampled points, and then, analyze the coplanarity relation between these
local 3D structures. We can
statistically show that the likelihood to find a certain depth at a homogeneous image patch depends
on the distance between the image patch and its edges. Furthermore, we find that this prediction is higher
when there is a second edge which is proximate to and coplanar with the first edge. These results
allow deriving statistically based prediction models for depth extrapolation into homogeneous image structures.
We present initial results of a model that predicts depth based on these statistics. |
|
Title: |
Parallel Gabor PCA with Fusion of SVM Scores for Face Verification
|
Author(s): |
Ángel Serrano, Cristina Conde, Isaac Martín de Diego, Enrique Cabello, Li Bai and Linlin Shen |
Abstract: |
Here we present a novel fusion technique for Support Vector Machine scores, obtained after a dimension reduction with a Principal Component Analysis algorithm (PCA) for Gabor features applied to face verification. A total of 40 wavelets (5 frequencies, 8 orientations) have been convolved with public domain FRAV2D Face Database (109 subjects), with 4 frontal images with neutral expression per person for the SVM training and 4 different kinds of tests, each with 4 images, considering frontal views with neutral expression, gestures, occlusions and changes of illumination. Each set of wavelet-convolved images is considered in parallel or independently for the PCA and the SVM classification. A final fusion is performed taking into account all the SVM scores for the 40 wavelets. The proposed algorithm improves the Equal Error Rate for the occlusion experiment compared to a Downsampled Gabor PCA method and obtains similar EERs in the other experiments with fewer coefficients after the PCA dimension reduction stage. |
|
Title: |
Automated Star/Galaxy Discrimination in Multispectral Wide-Field Images
|
Author(s): |
Olac Fuentes and Jorge de la Calleja |
Abstract: |
In this paper we present an automated method for classifying
astronomical objects in multi-spectral wide-field images. The
classification method is divided into three main stages. The first
one consists of locating and matching the astronomical objects in
the multi-spectral images. In the second stage we create a compact
representation of each object applying principal component analysis
to the images. In the last stage we classify the astronomical
objects using locally weighted linear regression and a novel
oversampling algorithm to deal with the class imbalance that is inherent
to this class of problems. Our experimental results show that our
method performs accurate classification using small training sets
and in the presence of significant class imbalance.
|
|
Title: |
INTEGRATED GLOBAL AND OBJECT-BASED IMAGE RETRIEVAL USING A MULTIPLE EXAMPLE QUERY SCHEME
|
Author(s): |
Gustavo B. Borba, Humberto R. Gamba, Liam M. Mayron and Oge Marques |
Abstract: |
Conventional content-based image
retrieval (CBIR) systems typically do not consider the limitations
of the feature extraction-distance measurement paradigm when
capturing a user's query. This issue is compounded by the
complicated interfaces that are featured by many CBIR systems. The
framework proposed in this work embodies new concepts that help
mitigate such limitations. The front-end includes an intuitive user
interface that allows for fast image organization through spatial
placement and scaling. Additionally, a multiple-image query is
combined with a region-of-interest extraction algorithm to
automatically trigger global or object-based image analysis. The
relative scale of the example images are considered to be indicative
of image relevant and are also considered during the retrieval
process. Experimental results demonstrate promising results.
|
|
Title: |
MULTIPLE CLASSIFIERS ERROR RATE OPTIMIZATION APPROACHES OF AN AUTOMATIC SIGNATURE VERIFICATION SYSTEM
|
Author(s): |
Sharifah Syed Ahmad |
Abstract: |
Decision level management is a crucial aspect of an Automatic Signature Verification (ASV) system, as it is the centre of decision making that decides on the validity or otherwise of an input signature sample. Here, investigations are carried out to improve the performance of an ASV system by applying multiple classifier approaches, where the features of the system are grouped into two sub-sets, namely static and dynamic, giving two different classifiers. In this work, three decision fusion methods, namely Majority Voting, Borda Count and multi-stage cascaded classifiers, are analyzed for their effectiveness in improving the error rate performance of the ASV system. The performance analysis is based upon a database that reflects an actual user population in a real application environment, whereas the system performance improvement is calculated with respect to the initial system Equal Error Rate (EER), where multiple classifier approaches were not adopted. |
|
Title: |
An Image Based Feature Space and Mapping for Linking Regions and Words
|
Author(s): |
Jiayu Tang and Paul Lewis |
Abstract: |
We propose an image based feature space and define a mapping of both image regions and textual labels into that space. We believe the embedding of both image regions and labels into the same space in this way is novel, and makes object recognition more straightforward. Each dimension of the space corresponds to an image from the database. The coordinates of an image segment (region) are calculated based on its distance to the closest segment within each of the images, while the coordinates of a label are generated based on its association with the images. As a result, similar image segments associated with the same objects are clustered together in this feature space, and should also be close to the labels representing the object. The link between image regions and words can be discovered from their separation in the feature space. The algorithm is applied to an image collection and preliminary results are encouraging. |
|
Title: |
Face Verification Sharing Knowledge from Different Subjects
|
Author(s): |
David Masip Rodo, Agata Lapedriza Garcia and Jordi Vitria Marca |
Abstract: |
In face verification problems the number of training samples from each class is usually small, making the estimation of the classifier parameters difficult. In this paper we propose a new method for face verification in which we simultaneously train different face verification tasks, sharing the model parameter space. We use a multi-task extended logistic regression classifier to perform the classification. Our approach allows information to be shared across the different classification tasks (transfer of knowledge), mitigating the effects of the small sample size problem. Our experiments, performed using the publicly available AR Face Database, show lower error rates when multiple tasks are jointly trained with shared information, which confirms the theoretical approximations in the related literature. |
|
Title: |
Human visual perception, gestalt principles and duality region-contour. Application to Computer Image Analysis of Human Cornea Endothelium
|
Author(s): |
Yann Gavet, Jean-Charles Pinoli, Gilles Thuret and Philippe Gain |
Abstract: |
The human visual system is far more efficient than a computer at analyzing images, especially when noise or a poor acquisition process makes the analysis impossible through lack of information.
To mimic the human visual system, we develop algorithms based on the gestalt theory principles of proximity and good continuation. We also introduce the notion of a mosaic, which we reconstruct using those principles. Mosaics can be defined as geometric figures (squares, triangles), or derived from a contour detection system or a skeletonization process.
The application presented here is the detection of corneal endothelial cells. They present a very geometric structure that gives enough information for a non-expert to perform the same analysis as the ophthalmologist, which mainly consists of counting the cells and evaluating the cell density.
|
|
Title: |
DETECTION OF FACIAL CHARACTERISTICS BASED ON EDGE INFORMATION
|
Author(s): |
Nikolaos Nikolaidis, Stylianos Asteriadis, Ioannis Pitas and Montse Pardàs |
Abstract: |
In this paper, a novel method for eye and mouth detection and eye center and mouth corner localization, based
on geometrical information is presented. First, a face detector is applied to detect the facial region, and the
edge map of this region is extracted. A vector pointing to the closest edge pixel is then assigned to every
pixel. x and y components of these vectors are used to detect the eyes and mouth. For eye center localization,
intensity information is used, after removing unwanted effects, such as light reflections. For the detection
of the mouth corners, the hue channel of the lip area is used. The proposed method can work efficiently on
low-resolution images and has been tested on the XM2VTS database with very good results. |
|
Title: |
INFORMATION FUSION TECHNIQUES FOR AUTOMATIC IMAGE ANNOTATION
|
Author(s): |
Filippo Vella and Chin-Hui Lee |
Abstract: |
Many recent techniques in Automatic Image Annotation use a description of image content based on visual terms and associate textual labels to visual terms through symbolic connection techniques. These symbolic visual elements are obtained by a tokenization process having as input the entire set of features extracted from the training images data set. An interesting issue for this approach is to exploit, through information fusion, the different representations of visual content coming from the characterization through different image features. We show different techniques for the integration of visual information from different image features and compare the results achieved by them. |
|
Title: |
Practical single view metrology for cuboids.
|
Author(s): |
Nick Pears, Paul Wright and Chris Bailey |
Abstract: |
Generally it is impossible to determine the size of an object from a single
image due to the depth-scale ambiguity problem. However, with knowledge of the
geometry of the scene and the existence of known reference distances in the
image, it is possible to infer the real world dimensions of objects with
only a single image. In this paper, we investigate different methods of automatically
determining the dimensions of cuboids (rectangular boxes) from a single image. In particular, two approaches will be considered: the first will use the cross-ratio projective invariant and the other will use the planar homography. The accuracy of the measurements will be evaluated and compared in both the presence and absence of noise in
the feature points. The effects of lens distortions on the accuracy of the measurements will be investigated. Feature detection and object recognition techniques for detecting the box automatically will also be considered. |
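The cross-ratio invariant mentioned above can be illustrated as follows; the point coordinates and the projective map are arbitrary examples, not data from the paper.

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points given by 1-D coordinates.

    The invariance of this quantity under perspective projection is what
    lets a known reference length on a box edge fix an unknown one. The
    (c-a)(d-b) / (c-b)(d-a) convention is one of several equivalent choices.
    """
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

# The ratio is preserved by any projective map of the line, e.g. x -> (2x+1)/(x+3).
pts = [0.0, 1.0, 2.0, 4.0]
proj = [(2 * x + 1) / (x + 3) for x in pts]
before = cross_ratio(*pts)
after = cross_ratio(*proj)
```

In the metrology setting, three of the four coordinates come from measured image points and the equality of the two cross-ratios is solved for the unknown world distance.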
|
Title: |
AN ATTENTION-BASED METHOD FOR EXTRACTING SALIENT REGIONS OF INTEREST FROM STEREO IMAGES
|
Author(s): |
Oge Marques, Liam Mayron, Daniel Socek, Gustavo Borba and Humberto Gamba |
Abstract: |
The fundamental problem of computer vision is caused by the
translation of a three-dimensional world onto one or more
two-dimensional planes. As a result, methods for extracting regions
of interest (ROIs) have certain limitations that cannot be overcome
with traditional techniques that only utilize a single projection of
the image. For example, while it is difficult to distinguish two
overlapping, homogeneous regions with a single intensity or color
image, depth information can usually easily be used to separate the
regions. In this paper we present an extension to an existing
saliency-based ROI extraction method. By adding depth information to
the existing method many previously difficult scenarios can now be
handled. Experimental results show consistently improved ROI
segmentation.
|
|
Title: |
ADAPTIVE DOCUMENT BINARIZATION: A human vision approach
|
Author(s): |
Ioannis Andreadis, Nikos Papamarkos and Antonios Gasteratos |
Abstract: |
This paper presents a new approach to adaptive document binarization, inspired by attributes of the Human Visual System (HVS). The proposed algorithm combines the characteristics of the OFF ganglion cells of the HVS with the classic Otsu binarization technique. Ganglion cells with four receptive field sizes tuned to different spatial frequencies are employed which, by adopting a new activation function, are independent of gradual illumination changes such as shadows. The Otsu technique is then used to threshold the outputs of the ganglion cells, resulting in the final segmentation between the characters and the background. The proposed method was quantitatively and qualitatively tested against other contemporary adaptive binarization techniques at various shadow levels and noise densities, and was found to outperform them. |
|
Title: |
Detecting and Classifying Frontal, Back and Profile Views of Humans
|
Author(s): |
Narayanan Chatapuram Krishnan, Baoxin Li and Sethuraman Panchanathan |
Abstract: |
Detecting and estimating the presence and pose of a person in an image is a challenging problem, and the literature has dealt with it as two separate problems. In this paper, we propose a system that introduces novel steps to segment the foreground object from the background and classifies the pose of the detected human as a frontal, profile or back view. We use this as a front end to an intelligent environment we are developing to assist individuals who are blind in office spaces. Traditional background subtraction often results in silhouettes that are discontinuous and contain holes. We have incorporated the graph cut algorithm on top of the background subtraction result and have observed a significant improvement in segmentation performance, yielding continuous silhouettes without holes. We then extract shape context features from the silhouette to train a classifier to distinguish between profile and non-profile (frontal or back) views. Our system has shown promising results, achieving an accuracy of 87.5% for classifying profile and non-profile views using an SVM on the real data sets we collected for our experiments. |
|
Title: |
EXTERIOR ORIENTATION USING LINE-BASED ROTATIONAL MOTION ANALYSIS
|
Author(s): |
Agustin Navarro, Edgar Villarraga and Joan Aranda |
Abstract: |
3D scene information obtained from a sequence of images is very useful in a variety of action-perception applications, most of which require the perceptual orientation of specific objects in order to interact with their environment. In the case of moving objects, the relation between changes in image features induced by 3D transformations can be used to estimate their orientation with respect to a fixed camera. Our purpose is to describe some properties of movement analysis over projected features of rigid objects represented by lines, and to introduce a line-based orientation estimation algorithm based on rotational motion analysis. Experimental results showed some advantages of this new algorithm, such as simplicity and real-time performance. The algorithm demonstrates that it is possible to estimate the orientation from only two different rotations, given knowledge of the transformations applied to the object. |
|
Title: |
OBJECT RECOGNITION AND POSE ESTIMATION ACROSS ILLUMINATION CHANGES
|
Author(s): |
Damien MUSELET, Brian FUNT, Lilong Shi and Ludovic MACAIRE |
Abstract: |
Starting with Swain and Ballard's color indexing, color has proved to be a very important clue for object recognition. Following in this tradition, we present a new algorithm for color-based object recognition that detects objects and estimates their pose (position and orientation) in cluttered scenes under uncontrolled illumination conditions. As with so many other color-based object-recognition algorithms, color histograms are fundamental to our approach; however, we use histograms obtained from overlapping subwindows rather than the entire image. Furthermore, each local histogram is normalized using greyworld normalization. An object from a database of prototype objects is identified and located in an input image by matching the subwindow contents. The prototype is detected in the input whenever many good histogram matches are found between the subwindows of the input image and those of the prototype. In essence, normalized color histograms of subwindows are the local features being matched. Once an object has been recognized, its 2D pose is found by approximating the geometrical transformation that most consistently maps the locations of the prototype's subwindows to their matching subwindow locations in the input image. |
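The per-subwindow normalization and matching steps can be sketched as follows; the bin count, value ranges, patch sizes, and the use of histogram intersection as the match score are illustrative assumptions rather than the authors' exact pipeline.

```python
import numpy as np

def greyworld(img):
    """Grey-world normalization: divide each channel by its mean.

    Cancels a global (per-channel) illuminant scaling of the patch.
    img: H x W x 3 float array.
    """
    means = img.reshape(-1, 3).mean(axis=0)
    return img / np.maximum(means, 1e-9)

def hist_intersection(a, b, bins=8):
    """Normalized 3-D histogram intersection between two patches."""
    ha, _ = np.histogramdd(a.reshape(-1, 3), bins=bins, range=[(0, 3)] * 3)
    hb, _ = np.histogramdd(b.reshape(-1, 3), bins=bins, range=[(0, 3)] * 3)
    ha, hb = ha / ha.sum(), hb / hb.sum()
    return np.minimum(ha, hb).sum()

rng = np.random.default_rng(0)
patch = rng.random((16, 16, 3))
dimmed = 0.4 * patch                      # uniform illuminant change
score = hist_intersection(greyworld(patch), greyworld(dimmed))
```

Because grey-world normalization cancels the uniform scaling exactly, the two normalized patches produce (near-)identical histograms and a match score close to 1.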
|
Title: |
A Learning Approach to Content-Based Image Categorization and Retrieval
|
Author(s): |
Washington Mio, Yuhua Zhu and Xiuwen Liu |
Abstract: |
We develop a machine learning approach to content-based image categorization and retrieval. We represent images by histograms of their spectral components associated with a bank of filters and assume that a training database of labeled images - one that contains representative samples from each class - is available. We employ a linear dimension reduction technique, referred to as Optimal Factor Analysis, to identify and split off "optimal" low-dimensional factors of the features to solve a given semantic classification or indexing problem. This content-based categorization technique is used to structure databases of images for retrieval according to the likelihood of each class given a query image. |
|
Title: |
An Interpolation Method for the Reconstruction and Recognition of Face Images
|
Author(s): |
Ngoc Cuong Nguyen and Jaume Peraire |
Abstract: |
An interpolation method is presented for the reconstruction and recognition of human face images. Basic ingredients include an optimal basis set defining a low-dimensional face space and a set of "best interpolation pixels" capturing the most relevant characteristics of known faces. The best interpolation pixels are chosen as points of the pixel grid so as to best interpolate the set of known face images. These pixels are then used in a least-squares interpolation procedure to determine the interpolant components of a face image very inexpensively, thereby providing efficient reconstruction of faces. In addition, the method allows a fully automatic computer system to be developed for the real-time recognition of faces. Two significant advantages of this method are: (1) the computational cost of recognizing a new face is independent of the size of the pixel grid; and (2) it allows for the reconstruction and recognition of incomplete images.
|
|
Title: |
Human Identification Using Facial Curves With Extensions to Joint
|
Author(s): |
Chafik Samir, Anuj Srivastava and Mohamed Daoudi |
Abstract: |
Recognition of human beings using shapes
of their full facial surfaces is a difficult problem. Our approach
is to approximate a facial surface using a collection of (closed)
facial curves, and to compare surfaces by comparing their
corresponding curves. The differences between shapes of curves are
quantified using lengths of geodesic paths between them on a
pre-defined curve shape space. Here these facial curves are chosen
to be the level sets of the depth function, although other such
functions can also be used. The method is further strengthened by
the use of texture maps (video images) associated with these faces.
Using the common spectral representation of a texture image,
i.e., filtering images with Gabor filters and computing histograms as
image representations, we can compare texture images by comparing
their corresponding histograms using the chi-squared distance. A
combination of shape and texture metrics provides a method to
compare textured, facial surfaces, and we demonstrate its
application in face recognition using 240 facial scans of 40
subjects. |
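The chi-squared histogram distance mentioned above for comparing Gabor-filter texture histograms can be sketched as follows; the `eps` guard against empty bins is our choice, not the authors':

```python
import numpy as np

def chi_squared_distance(h1, h2, eps=1e-10):
    """Chi-squared distance between two (unit-mass) histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Identical histograms are at distance 0; fully disjoint unit-mass
# histograms are at distance 1.
a = np.array([0.5, 0.5, 0.0])
b = np.array([0.0, 0.0, 1.0])
```

The shape metric and this texture metric are then combined into the joint comparison the abstract describes.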
|
Title: |
2DOF Pose Estimation of Textured Objects with Angular Color Cooccurrence Histograms
|
Author(s): |
Thomas Nierobisch and Frank Hoffmann |
Abstract: |
Robust techniques for pose estimation are essential for robotic
manipulation and grasping tasks. We present a novel approach for
2DOF pose estimation based on angular color cooccurrence
histograms and its application to object grasping. The
representation of objects is based on pixel cooccurrence
histograms extracted from the color segmented image. The
confidence in the 2DOF pose estimate is predicted by a
probabilistic neural network based on the ambiguity of the
underlying match-value curve. In an experimental evaluation the
estimated pose is used as input for the open-loop control of a
robotic grasp. For more complex manipulation tasks the 2DOF
estimate provides the basis for the initialization of real-time
6DOF geometry-based object tracking. |
|
Title: |
FACE ALIGNMENT USING ACTIVE APPEARANCE MODEL OPTIMIZED BY SIMPLEX
|
Author(s): |
Yasser AIDAROUS, Sylvain LE GALLOU, Abdul SATTAR and Renaud SEGUIER |
Abstract: |
Active appearance models (AAM) are robust in face alignment. We use this method to analyze movements and motions of faces in Human Machine Interfaces (HMI) for embedded systems (mobile phone, game console, PDA: Personal Digital Assistant). However, these models are not only heavy memory consumers but also inefficient when the objects to be aligned are imperfectly represented in the learning database that generates the model. We propose a new optimization method based on the Nelder-Mead Simplex (NELDER and MEAD, 1965). The Simplex reduces the memory requirement by 73% and improves the efficiency of the AAM at the same time. Tests carried out on unknown faces (from the BioID database) show that our proposition provides accurate alignment where the classical AAM is unable to align the object. |
|
Title: |
MULTIRESOLUTION TEXT DETECTION IN VIDEO FRAMES
|
Author(s): |
Marios Anthimopoulos, Basilis Gatos and Ioannis Pratikakis |
Abstract: |
This paper proposes an algorithm for detecting artificial text in video frames using edge information. First, an edge map is created using the Canny edge detector. Then, morphological dilation and opening are used in order to connect the vertical edges and eliminate false alarms. Bounding boxes are determined for every non-zero valued connected component, constituting the initial candidate text areas. Finally, an edge projection analysis is applied, refining the result and splitting text areas into text lines. The whole algorithm is applied at different resolutions to ensure text detection under size variability. Experimental results prove that the method is highly effective and efficient for artificial text detection. |
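The edge projection analysis that splits candidate areas into text lines might look like this minimal sketch; the threshold `min_edges` and the helper name are illustrative assumptions:

```python
import numpy as np

def split_text_lines(edge_box, min_edges=1):
    """Split a candidate text box into lines via horizontal edge projection.

    `edge_box` is a binary (H, W) array; rows whose edge count falls
    below `min_edges` separate consecutive text lines. Returns a list
    of (start, end) row spans, end exclusive.
    """
    profile = edge_box.sum(axis=1)          # edge count per row
    lines, start = [], None
    for row, count in enumerate(profile):
        if count >= min_edges and start is None:
            start = row                      # a line begins
        elif count < min_edges and start is not None:
            lines.append((start, row))       # the line ends
            start = None
    if start is not None:
        lines.append((start, edge_box.shape[0]))
    return lines

# Two "text lines" separated by an empty row band.
box = np.zeros((7, 10), dtype=int)
box[1:3] = 1
box[5:6] = 1
```

Running the same analysis column-wise refines the horizontal extent of each detected line.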
|
Title: |
Fast and Robust Image Matching using Contextual Information and Relaxation
|
Author(s): |
Desire Sidibe, Philippe Montesinos and Stefan Janaqi |
Abstract: |
This paper tackles the difficult but fundamental problem of image matching under significant projective transformations. Recently, several algorithms capable of handling large changes of viewpoint as well as large scale changes have been proposed. They are based on the comparison of local, invariant descriptors which are robust to these transformations. However, since no image descriptor is robust enough to avoid mismatches, an additional step of outlier rejection is often needed, the accuracy of which strongly depends on the number of mismatches.
In this paper, we show that the matching process can be made robust, ensuring very few mismatches, by means of a relaxation labeling technique. The main contribution of this work is in providing an efficient and fast implementation of a relaxation method which can deal with large sets of features. Furthermore, we show how contextual information can be obtained and used in this robust and fast algorithm.
Experiments with real data and comparisons with other matching methods clearly show the improvements in the matching results. |
|
Title: |
Extraction of wheat ears with statistical methods based on texture analysis
|
Author(s): |
Frédéric Cointault and Pierre Gouton |
Abstract: |
In the agronomic domain, the simplification of crop counting, necessary for yield prediction, is a very important step for technical institutes such as Arvalis. The latter proposed that we use image processing to detect the number of wheat ears in images acquired directly in a field. Texture image segmentation techniques based on feature extraction by first and higher order statistical methods have been developed. The extracted features are used for unsupervised pixel classification to extract the different classes in the image: the K-Means algorithm is applied before the choice of a threshold to highlight the ears. Three methods have been tested, with very heterogeneous results except for the run-length technique, for which the results are close to the visual counting, with an average error of 6%. Although the evaluation of the detection quality is currently done visually, automatic evaluation algorithms are being implemented. Moreover, other higher order statistical methods must be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering. |
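A minimal sketch of the unsupervised pixel classification step (plain Lloyd's K-Means on per-pixel texture features); the synthetic 1-D "texture energy" values below stand in for the run-length features the abstract extracts:

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: cluster feature vectors into k classes."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each feature vector to its nearest center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = features[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated texture-feature populations (ear vs. background).
feats = np.array([[0.1], [0.2], [0.15], [5.0], [5.1], [4.9]])
labels, centers = kmeans(feats, k=2)
```

A threshold on the resulting class map would then highlight the ear pixels, as in the abstract.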
|
Title: |
An Efficient Fusion Strategy for Multimodal Biometric System
|
Author(s): |
C. Jinshong Hwang, Hunny Mehrotra , Nitin Agrawal and Phalguni Gupta |
Abstract: |
This paper proposes an efficient multi-step fusion strategy for a multimodal biometric system. Fusion is done at two stages, i.e., the algorithm level and the modality level. At the algorithm level the important steps involved are normalization, data elimination and the assignment of static and dynamic weights. Further, the individual recognizers are combined using the sum-of-scores technique. Finally, the integrated scores from the individual traits are passed to the decision module. Fusion at the decision level is done using Support Vector Machines (SVM). The SVM is trained on the set of matching scores and classifies the data into two known classes, i.e., genuine and impostors. The system is tested on a database collected for 200 individuals and shows a considerable increase in accuracy (overall accuracy 98.42%) compared to the individual accuracies (maximum accuracy 92.46%). |
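The sum-of-scores step can be sketched as below; min-max normalization and equal weights are our assumptions, since the abstract does not specify the normalization scheme or the weight values:

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores to [0, 1] (one common normalization choice)."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def weighted_sum_fusion(score_sets, weights):
    """Fuse normalized scores from several matchers by a weighted sum."""
    normalized = [min_max_normalize(s) for s in score_sets]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * ni for wi, ni in zip(w, normalized))

# Two matchers scoring the same three candidates on different scales.
face_scores = [10, 50, 90]
iris_scores = [0.2, 0.9, 0.4]
fused = weighted_sum_fusion([face_scores, iris_scores], weights=[1, 1])
```

The fused scores (here the second candidate wins) are what the decision-level SVM would then classify as genuine or impostor.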
|
Title: |
A NOVEL RELEVANCE FEEDBACK PROCEDURE BASED ON LOGISTIC REGRESSION AND OWA OPERATOR FOR CONTENT-BASED IMAGE RETRIEVAL SYSTEMS
|
Author(s): |
Pedro Zuccarello, Esther de Ves, Teresa León, Guillermo Ayala and Juan Domingo |
Abstract: |
This paper presents a new algorithm for
content-based retrieval systems in large databases. The objective
of these systems is to find the images which are as similar as
possible to a user query from those contained in the global image
database, without using textual annotations attached to the images.
The procedure proposed here to address this problem is based on a
logistic regression model: the algorithm considers the probability
of an image belonging to the set of those desired by the user. In
this work a relevance probability $\pi (I)$ is a quantity which
reflects the estimate of the relevance of the image $I$ with respect
to the user's preferences. The problem of the small sample size with
respect to the number of features is solved by adjusting several
partial linear models and combining their relevance probabilities by
means of an ordered weighted averaging (OWA) operator. Experimental results
are shown to evaluate the method on a large image database in terms
of the average number of iterations needed to find a target image. |
|
Title: |
Detection of Perfect and Approximate Reflective Symmetry in Arbitrary Dimension
|
Author(s): |
Darko Dimitrov and Klaus Kriegel |
Abstract: |
Symmetry detection is an important problem with many applications in pattern recognition, computer vision and computational geometry. In this paper, we propose a novel algorithm for computing a hyperplane of reflective symmetry of a point set in arbitrary dimension with approximate symmetry. The algorithm is based on the geometric hashing technique. In addition, we consider a relation between perfect reflective symmetry and the principal components of shapes, a relation that has already been the basis of a few heuristic approaches tackling the symmetry problem in 2D and 3D. From mechanics, it is known that if $H$ is a plane of reflective symmetry of a 3D rigid body, then a principal component of the body is orthogonal to $H$. Here we extend that result to any point set (continuous or discrete) in arbitrary dimension. |
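The mechanics result the abstract builds on, that a principal component of a reflectively symmetric body is orthogonal to the symmetry plane, can be checked numerically on a synthetic mirror-symmetric point set:

```python
import numpy as np

def principal_axes(points):
    """Eigenvectors of the covariance matrix of a centered point set."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs  # columns are the principal axes

# A point set symmetric about the plane x = 0: each point has its mirror.
rng = np.random.default_rng(1)
half = rng.normal(size=(50, 3)) + np.array([2.0, 0.0, 0.0])
points = np.vstack([half, half * np.array([-1.0, 1.0, 1.0])])
axes = principal_axes(points)
# One principal axis should align with the plane normal (1, 0, 0).
alignment = np.abs(axes.T @ np.array([1.0, 0.0, 0.0]))
```

For this mirrored set the cross-covariances with the x-axis vanish, so (1, 0, 0) is exactly an eigenvector of the covariance matrix, which is the relation the paper generalizes to arbitrary dimension.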
|
Title: |
AUTOMATIC LIP LOCALIZATION AND FEATURE EXTRACTION FOR LIP-READING
|
Author(s): |
Salah WERDA, Walid MAHDI and Abdelmajid BEN HAMADOU |
Abstract: |
In recent years, lip-reading systems have received great attention, since they play an important role in human communication with computers, especially for hearing-impaired or elderly people. The need for an automatic lip-reading system is ever increasing. In fact, the extraction and reliable analysis of facial movements today make up an important part of many multimedia systems, such as videoconferencing, low bit-rate communication systems and lip-reading systems. We can imagine, for example, a dependent person commanding a machine with a simple lip movement or the pronunciation of a single syllable. We present in this paper a new approach for lip localization and feature extraction in a speaker's face. The extracted visual information is then classified in order to recognize the uttered viseme (visual phoneme). To check our system's performance we have developed our Automatic Lip Feature Extraction prototype (ALiFE). Experiments include different digits articulated by different native speakers (male and female). Experiments revealed that our system recognizes 70.95% of French digits uttered under natural conditions. |
|
Title: |
Biased Manifold Embedding for Person-Independent Head Pose Estimation
|
Author(s): |
Vineeth Nallure Balasubramanian and Sethuraman Panchanathan |
Abstract: |
Head pose estimation is an integral component of face recognition systems and human computer interfaces. To determine the head pose, face images with varying pose angles can be considered to lie on a smooth low-dimensional manifold in a high-dimensional feature space. In this paper, we propose a novel supervised approach to manifold-based non-linear dimensionality reduction for head pose estimation. The Biased Manifold Embedding method is built on the idea of using the pose angle information of the face images to compute a biased geodesic distance matrix before determining the low-dimensional embedding. A Generalized Regression Neural Network (GRNN) is used to learn the non-linear mapping, and linear multivariate regression is finally applied in the low-dimensional space to obtain the pose angle. We tested this approach on face images of 24 individuals with pose angles varying from -90 to +90 degrees at a granularity of 2 degrees. The results showed a significant reduction in the error of pose angle estimation, and robustness to variations in feature spaces, dimensionality of embedding and other parameters. |
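The biasing of the distance matrix by pose labels can be illustrated with this simplified sketch; the linear inflation factor is our stand-in for the method's actual functional form on geodesic distances:

```python
import numpy as np

def biased_distances(dist, pose_angles, alpha=1.0):
    """Inflate pairwise distances between images with dissimilar pose labels.

    `dist` is a symmetric pairwise (feature or geodesic) distance matrix
    and `pose_angles` the per-image pose labels. Pairs with similar pose
    keep small distances; dissimilar-pose pairs are pushed apart, which
    biases the subsequent embedding to organize itself by pose.
    """
    pose = np.asarray(pose_angles, dtype=float)
    pose_gap = np.abs(pose[:, None] - pose[None, :])
    return dist * (1.0 + alpha * pose_gap)

feature_dist = np.array([[0.0, 1.0, 1.0],
                         [1.0, 0.0, 1.0],
                         [1.0, 1.0, 0.0]])
angles = [-30.0, 0.0, 30.0]
biased = biased_distances(feature_dist, angles, alpha=0.1)
```

Three images equidistant in feature space become ordered by pose after biasing, so an Isomap-style embedding of the biased matrix recovers the pose dimension.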
|
Title: |
PERFORMANCE OF A COMPACT FEATURE VECTOR IN CONTENT-BASED IMAGE RETRIEVAL
|
Author(s): |
Gita Das and Sid Ray |
Abstract: |
In this paper, we consider image retrieval as a dichotomous classification problem and study the effect of sample size and dimensionality on the retrieval accuracy.
The finite sample size has always been a problem in Content-Based Image Retrieval (CBIR) systems, and it is more severe when the feature dimension is high. Here, we discuss feature vectors having different dimensions and their performance with real and synthetic data, with varying sample sizes. We report experimental results and analysis with two different image databases of size 1000, each with 10 semantic categories. |
|
Title: |
CATEGORY LEVEL OBJECT SEGMENTATION
|
Author(s): |
Diane Larlus and Frédéric Jurie |
Abstract: |
We propose a new method for learning to segment objects in
images. This method is based on a latent variable model used for
representing images and objects, inspired by the LDA model. Like the
LDA model, our model is capable of automatically discovering which
visual information comes from which object. We extend LDA by
considering here that images are made of multiple overlapping regions,
treated as distinct documents, giving small objects a better chance
of being discovered as the main topic of at least one image
sub-region. This model is extremely well suited for assigning image
patches to objects (even if they are small), and therefore for
segmenting objects. We apply this method on objects belonging to
categories with high intra-class variations and strong viewpoint
changes, such as those of the Graz-02 dataset and obtain impressive
results. |
|
Title: |
Informative Visualization for Browsing and Retrieval of Large-Scale News Video Collections
|
Author(s): |
Hangzai Luo, Jianping Fan, Shin'ichi Satoh, William Ribarsky and Mohand-Said Hacid |
Abstract: |
In this paper, we have developed a novel framework to enable more effective visual analysis and retrieval of large-scale news videos via interactive visualization, so that audiences can find news stories of interest at first glance. Keyframes and keywords are automatically extracted from news video clips and visually represented according to their interestingness and informativeness measurements. A computational approach is also developed to quantify the interestingness of video clips. Our experimental results have shown that our techniques for intelligent news video analysis enable more effective visualization and retrieval of large-scale news videos. Our visualization-based news video analysis and retrieval system is very useful for security applications and for general audiences to quickly find the news stories of interest from large-scale news videos among many channels. |
|
Title: |
PARTS-BASED FACE DETECTION AT MULTIPLE VIEWS
|
Author(s): |
Andreas Savakis |
Abstract: |
This paper presents a parts-based approach to face detection that is suitable for multiple views. Parts detectors for eyes, mouth and nose were implemented using artificial neural networks, which were trained using the bootstrapping method. Bayesian networks were utilized to incorporate the experimental performance of the detectors into a final decision. The results are comparable with other state-of the art face detection methods, while providing the additional benefits of support for different view angles and robustness for partial occlusions. |
|
Title: |
FACIAL POSE ESTIMATION FOR IMAGE RETRIEVAL
|
Author(s): |
Andreas Savakis |
Abstract: |
Face detection is a prominent semantic feature which, along with low-level features, is often used for content-based image retrieval. In this paper we present a human facial pose estimation method that can be used to generate additional metadata for more effective image retrieval when a face is already detected. Our computationally efficient pose estimation approach is based on a simplified geometric head model and combines artificial neural network (ANN) detectors with template matching. Testing at various poses demonstrated that the proposed method achieves pose estimation within 4.28 degrees on average, when the facial features are accurately detected. |
|
Title: |
Image retrieval with binary Hamming distance
|
Author(s): |
Jerome LANDRE and Frederic TRUCHETET |
Abstract: |
This article describes a content-based image indexing and retrieval (CBIR) system based on hierarchical binary signatures. Binary signatures are obtained through a binarization process applied to classical features (color, texture and shape). The binary Hamming distance (based on the binary XOR operation) is used during retrieval. This technique was tested on a real image collection containing 7200 images and on a virtual collection of one million images. Results are very good both in terms of speed and accuracy, allowing real-time image retrieval in very large image collections.
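The XOR-based binary Hamming distance at the core of the retrieval step can be sketched on integer-packed signatures; the toy 8-bit signatures are illustrative:

```python
def hamming_distance(sig_a, sig_b):
    """Number of differing bits between two integer-packed signatures."""
    return bin(sig_a ^ sig_b).count("1")

# Toy 8-bit signatures; the query is closest to the first database entry.
query = 0b10110100
database = [0b10110101, 0b01001011, 0b11110000]
ranked = sorted(database, key=lambda sig: hamming_distance(query, sig))
```

Because XOR and bit counting are single-instruction operations on packed words, this distance is what makes real-time ranking of very large collections feasible.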
|
|
Title: |
Simultaneous registration and clustering for temporal segmentation of facial gestures from video
|
Author(s): |
Fernando De la Torre, Joan Campoy, Jeff Cohn and Takeo Kanade |
Abstract: |
Temporal segmentation of facial gestures
from video sequences is an important unsolved problem in
automatic facial analysis. Recovering temporal gesture structure
from a set of tracked 2D facial feature points is challenging
because of the difficulty of factorizing rigid and non-rigid
motion and the large variability in the temporal scale of facial
gestures. In this paper, we propose a two-step approach for temporal
segmentation of facial gestures. The first step consists of
clustering shape and appearance features into a number of clusters,
and the second step involves temporally grouping these clusters.
Clustering results largely depend on the registration process. To
improve the clustering/registration, we propose Parameterized
Cluster Analysis (PaCA), which jointly performs registration and
clustering. Besides the joint clustering/registration, PaCA mitigates
the rounding-off problem of existing spectral graph methods for
clustering. After the clustering is performed, we group sets of
clusters into facial gestures. Several toy and real examples show
the benefits of our approach for temporal facial gesture
segmentation.
|
|
Title: |
Parameterized Kernels for Support Vector Machine Classification
|
Author(s): |
Fernando De la Torre and Oriol Vinyals |
Abstract: |
Kernel machines (e.g. SVM, KLDA) have
shown state-of-the-art performance in several visual classification
tasks. The classification performance of kernel machines depends
mostly on the choice of kernels and their parameters. In this paper,
we propose a method to search over the space of parameterized
kernels using a linear-time gradient-based method. Our method
effectively learns a non-linear representation of the data useful
for classification. Moreover, we introduce a new matrix formulation
that simplifies and unifies previous approaches. The effectiveness
and robustness of the proposed algorithm are demonstrated in both
synthetic and real examples of pedestrian and mouth detection from
images. |
|
Title: |
Coral Reef Texture Classification Using Support Vector Machines
|
Author(s): |
Eraldo Ribeiro, Anand Mehta, Jessica Gilner and Robert van Woesik |
Abstract: |
The development of tools to examine the ecological parameters of coral reefs is seriously lagging behind available computer-based technology. Until recently the use of images in environmental and ecological data gathering has been limited to terrestrial analysis because of
difficulties in underwater image capture and data analysis.
In this paper, we propose the application of computer
vision to address the problem of monitoring and classifying coral
reef colonies. More specifically, we present a method to classify
coral reef images based on their textural appearance using support
vector machines (SVM). Our algorithm does not require feature
extraction as a preprocessing stage, but instead uses raw pixel color values
directly as sample vectors. We show promising results on region
classification of three coral types for low quality
underwater images. This will allow for more timely
analysis of coral reef images and broaden the
capabilities of underwater data interpretation.
|
|
|
Area 4 - Motion, Tracking and Stereo Vision
|
Title: |
Disparity Contour Grouping for Multi-object Segmentation in Dynamically Textured Scenes
|
Author(s): |
Wei Sun and Stephen Spackman |
Abstract: |
A fast multi-object segmentation algorithm based on disparity contour grouping is described. It segments multiple objects at a wide range of depths from backgrounds of known geometry in a manner insensitive to changing lighting and the dynamic texture of, for example, display surfaces. Not relying on stereo reconstruction or prior knowledge of foreground objects, it is fast enough on commodity hardware for some real-time applications. Experimental results demonstrate its ability to extract object contour from a complex scene and distinguish multiple objects even when they are close together or partially occluded. |
|
Title: |
Differential techniques for motion computation
|
Author(s): |
BOUDEN Toufik and DOGHMANNE Noureddine |
Abstract: |
Optical flow computation is an important and challenging problem in the motion analysis of image sequences. It is a difficult and computationally expensive task, and an ill-posed problem that expresses itself as the aperture problem. However, optical flow vectors, or motion, can be estimated by differential techniques using regularization methods, in which additional constraint functions are introduced [6, 7]. In this work we propose to improve differential methods for optical flow estimation by including colour information as constraint functions in the optimization process, using a simple matrix inversion. The proposed technique has shown encouraging results. |
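A common differential formulation solved by "a simple matrix inversion" is the least-squares (Lucas-Kanade style) estimate sketched below; the synthetic derivatives are constructed to satisfy the brightness-constancy constraint, and this sketch omits the colour constraints the paper adds:

```python
import numpy as np

def least_squares_flow(Ix, Iy, It):
    """Least-squares optical flow for one window.

    Ix, Iy, It are spatial and temporal image derivatives sampled over a
    window. Stacking the constraints Ix*u + Iy*v + It = 0 gives an
    overdetermined system A v = -It, solved via the 2x2 normal equations.
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    return np.linalg.solve(A.T @ A, A.T @ b)  # the 2x2 matrix inversion

# A synthetic 5x5 window translating by (1.0, 0.5); the temporal
# derivative is built to be consistent with that motion.
rng = np.random.default_rng(0)
Ix = rng.normal(size=(5, 5))
Iy = rng.normal(size=(5, 5))
true_uv = np.array([1.0, 0.5])
It = -(Ix * true_uv[0] + Iy * true_uv[1])
flow = least_squares_flow(Ix, Iy, It)
```

Adding colour channels simply contributes extra rows to A and b, which is one way extra constraint functions can enter the same matrix inversion.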
|
Title: |
A Real-Time Tracking System Combining Template-Based and Feature-Based Approaches
|
Author(s): |
Alexander Ladikos, Selim Benhimane and Nassir Navab |
Abstract: |
In this paper we propose a complete real-time model-based tracking system for piecewise-planar objects which combines template-based and feature-based approaches. Our contributions are an extension to the ESM algorithm used for template-based tracking and the formulation of a feature-based tracking approach, which is specifically tailored for use in a real-time setting. In order to cope with highly dynamic scenarios, such as illumination changes, partial occlusions and fast object movement, the system adaptively switches between the template-based tracking, the feature-based tracking and a global initialization phase. Our tracking system achieves real-time performance by applying a coarse-to-fine optimization approach and includes means to detect a loss of track. |
|
Title: |
Using photometric stereo to refine the geometry of a 3D surface model
|
Author(s): |
Zsolt Jankó |
Abstract: |
In this paper we aim at refining the geometry of 3D models of real objects by adding surface bumpiness to them. 3D scanners are usually not accurate enough to measure fine
details, such as surface roughness. Photometric stereo is an appropriate technique to recover bumpiness. We use a number of images taken from the same viewpoint under varying illumination and an initial sparse 3D mesh obtained by a 3D scanner. We assume the surface
to be Lambertian, but the lighting properties are unknown. The novelty of our method is that the initial sparse 3D mesh is exploited to calibrate light sources and then to recover surface normals. The importance of refining the geometry of a bumpy surface is demonstrated by applying the method to synthetic and real data. |
|
Title: |
Generating Optimized Marker-based Rigid Bodies for Optical Tracking
|
Author(s): |
Frank Steinicke, Christian Jansen, Klaus Hinrichs, Jan Vahrenhold and Bernd Schwald |
Abstract: |
Marker-based optical tracking systems are often used to track objects that are equipped with a certain number of passive or active markers. Fixed configurations of these markers, so-called rigid bodies, can be detected by, for example, infrared stereo-based camera systems, and their position and orientation can be reconstructed by corresponding tracking algorithms.
The main issue in designing the geometrical constellation of these markers and their 3D positions is to allow robust identification and tracking of multiple objects, and this design process is considered to be an essential and challenging task.
At present, the design process is based on trial-and-error: the designer constructs a configuration, evaluates it in a given setup, and rearranges the marker positions within the configuration if necessary.
Even though single ready-made rigid bodies permit sufficiently good tracking, it is not ensured that the corresponding arrangements of markers meet any quality criteria in terms of reliability and robustness. Furthermore, it is unclear whether it is possible to add further rigid bodies to the setup which are sufficiently distinguishable from the given ones.
In this paper, we present an approach to semi-automatically generate
point-based rigid bodies which are optimal with respect to the properties of the corresponding tracking system, e.g., granularity, accuracy, or jitter. Our procedure, which is aimed at supporting the design process as well as improving the tracking process, generates configurations for several devices associated with an arbitrary set of markers.
We discuss both the technical background of our approach and the results of an evaluation comparing the tracking quality of commercially available devices to the rigid bodies generated by our approach. |
|
Title: |
Structural ICP Algorithm for Pose Estimation based on Local Features
|
Author(s): |
Marco Antonio Chavarria Fabila and Gerald Sommer |
Abstract: |
In this paper we present a new variant
of the ICP (iterative closest point) algorithm for finding
correspondences between image and model points. This new variant
uses structural information from the model points and contour
segments detected in images to find better-conditioned
correspondence sets and to use them to compute the 3D pose. A
local representation of 3D free-form contours is used to obtain the
structural information in 3D space and in the image plane.
Furthermore, the local structure of free-form contours is combined
with orientation and phase as local features obtained from the
monogenic signal. With this combination, we achieve a more robust
correspondence search. Our approach was tested on synthetic and
real data to compare its convergence and performance
against the classical ICP approach. |
|
Title: |
Real Time Smart Surveillance using Motion Analysis
|
Author(s): |
Marco Leo, P. Spagnolo, T. D’Orazio, P. L. Mazzeo and A. Distante |
Abstract: |
Smart Surveillance is the use of automatic video analysis technologies for surveillance purposes and it is currently one of the most active research topics in computer vision because of the wide spectrum of promising applications. In general, the processing framework for smart surveillance consists of a preliminary and fundamental motion detection step in combination with higher level algorithms that are able to properly manage motion information. In this paper a reliable motion analysis approach is coupled with homographic transformations and a contour comparison procedure to achieve the automatic real-time monitoring of forbidden areas and the detection of abandoned or removed objects. Experimental tests were performed on real image sequences acquired from the Messapic museum of Egnathia (south of Italy). |
|
Title: |
Model-Based Shape From Silhouette: A Solution Involving a Small Number of Views
|
Author(s): |
Jean-François Menudet, Jean-Marie Becker, Thierry Fournel and Catherine Mennessier |
Abstract: |
This article presents a model-based approach to Shape From Silhouette
reconstruction. It is formulated as a problem of 3D-2D non-rigid registration:
a surface model is deformed until it correctly matches the silhouettes detected
in the images. An efficient and reliable solution is proposed, based on a
Radial Basis Function deformation driven by control points located
on the contour generators of the 3D model. Unlike previous methods
relying on non-linear optimization techniques, the proposed method
only requires solving a linear system. Another advantage of this model-based
approach is that it produces a surface representation of
the visual hull. Moreover, the introduction of shape priors
dramatically reduces the number of views required to obtain a realistic
reconstruction. An application to human body modeling is given. |
|
Title: |
Fusion of GPS and visual motion estimates for robust outdoor open field localization
|
Author(s): |
Hans Jørgen Andersen and Thomas Bak |
Abstract: |
Localization is an essential part of autonomous vehicles or robots navigating in an outdoor environment. In the absence of an ideal sensor for localization, it is necessary to use sensors in combination in order to achieve acceptable results. In the present study we present a combination of GPS and visual motion estimation, which have complementary strengths. The visual motion estimation is based on the tracking of points in an image sequence. In an open-field outdoor environment the points being tracked are typically distributed in one dimension (on a line), which allows the ego motion to be determined by a new method based on simple analysis of the covariance structure of the image point set. Visual motion estimates are fused with GPS data in a Kalman filter. Since the filter tracks the state estimate over time, the prior state estimate can be used to remove errors in the landmark matching, simplifying the matching and increasing the robustness. The proposed algorithm is evaluated against ground truth in a realistic outdoor experimental setup. |
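The GPS/visual fusion described above can be sketched with a textbook Kalman filter. The 1D constant-velocity model and all noise parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

class KalmanFusion:
    """Minimal 1D constant-velocity Kalman filter fusing a GPS position
    measurement with a visual motion (velocity) estimate."""
    def __init__(self, q=0.01, r_gps=4.0, r_vis=0.25):
        self.x = np.zeros(2)              # state: [position, velocity]
        self.P = np.eye(2) * 10.0         # state covariance
        self.q, self.r_gps, self.r_vis = q, r_gps, r_vis

    def predict(self, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.q * np.eye(2)

    def _update(self, z, H, r):
        S = H @ self.P @ H + r            # innovation variance (scalar)
        K = self.P @ H / S                # Kalman gain
        self.x = self.x + K * (z - H @ self.x)
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P

    def update_gps(self, pos):
        self._update(pos, np.array([1.0, 0.0]), self.r_gps)

    def update_visual(self, vel):
        self._update(vel, np.array([0.0, 1.0]), self.r_vis)
```

The prior state kept by the filter is also what the abstract uses to gate landmark matches; that gating step is not reproduced in this sketch.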
|
Title: |
CAMERA BASED HEAD-MOUSE: Optimization of Template-Based Cross-Correlation Matching
|
Author(s): |
Grigori Evreinov, Tatiana V. Evreinova and Roope Raisamo |
Abstract: |
There is a challenge to employ video-based input in mobile applications for access control, games and entertainment computing. However, owing to computational complexity, most algorithms have low performance and high CPU usage. This paper presents the experimental results of testing a reduced spiral search with sparse angular sampling and an adaptive search radius. We demonstrate that reliable tracking can be provided in a wide range of lighting conditions using the relative brightness of only 16 pixels composing a grid-like template (with a grid step of 10-15 pixels). Cross-correlation matching of the template was implemented in eight directions with a shift of one pixel and an adaptive search radius. The algorithm was thoroughly tested and then used in a text entry application. The mean typing speed achieved with the head tracker and on-screen keyboard was about 6.2 wpm without prediction after 2 hours of practice. |
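The eight-direction local search can be illustrated roughly as follows. An SSD score stands in for the paper's cross-correlation measure, and the fixed radius sweep is a simplification of its adaptive radius; window sizes are assumptions:

```python
import numpy as np

def eight_dir_search(frame, template, start, max_radius=15):
    """Probe eight directions around the previous position with growing
    radius and keep the lowest-SSD placement (start position included)."""
    th, tw = template.shape
    def ssd(r, c):
        d = frame[r:r + th, c:c + tw] - template
        return float((d * d).sum())
    best_pos, best = start, ssd(*start)
    dirs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    for radius in range(1, max_radius + 1):
        for dr, dc in dirs:
            r, c = start[0] + dr * radius, start[1] + dc * radius
            if 0 <= r <= frame.shape[0] - th and 0 <= c <= frame.shape[1] - tw:
                s = ssd(r, c)
                if s < best:
                    best, best_pos = s, (r, c)
    return best_pos
```

Probing only eight rays instead of a full window is what keeps the CPU cost low enough for the mobile setting the abstract targets.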
|
Title: |
Occlusions and Active Appearance Models
|
Author(s): |
McElory Hoffmann, Ben Herbst and Karin Hunter |
Abstract: |
The deterministic Active Appearance Model (AAM) tracker fails to track objects under occlusion. In this paper, we discuss two approaches to improve this tracker's robustness and tracking results under occlusion. The first approach initialises the AAM tracker with a shape estimate obtained from an active contour, incorporating shape history into the tracker. The second approach combines AAMs and the particle filter, thereby incorporating both shape and texture history into the tracker. For each approach, a simple occlusion detection method is suggested, enabling us to address occlusion. Experimental results indicate the effectiveness of these techniques. |
|
Title: |
Detecting coplanar feature points in handheld image sequences
|
Author(s): |
Olaf Kähler and Joachim Denzler |
Abstract: |
3D reconstruction applications can benefit greatly from knowledge about coplanar feature points. Extracting this knowledge from images alone is a difficult task, however. The typical approach to this problem is to search for homographies in a set of point correspondences using the RANSAC algorithm. In this work we focus on two open issues with such a blind random search. First, we enforce that the detected planes represent physically present scene planes. Second, we propose methods to identify cases in which a homography does not imply coplanarity of feature points. Experiments show the applicability of the presented plane detection algorithms to handheld image sequences. |
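The blind random search the abstract refers to is the standard RANSAC loop over four-point homographies. A minimal sketch with a plain DLT fit, without the physical-plane and degeneracy checks the paper adds:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: 3x3 H with dst ~ H * src in homogeneous terms."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def reproj_error(H, src, dst):
    p = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    return np.linalg.norm(p[:, :2] / p[:, 2:3] - dst, axis=1)

def ransac_homography(src, dst, n_iter=500, thresh=2.0, rng=None):
    """Blind random search: sample 4 correspondences, fit, count inliers."""
    rng = np.random.default_rng(rng)
    best_H, best_in = None, np.zeros(len(src), bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        inliers = reproj_error(H, src, dst) < thresh
        if inliers.sum() > best_in.sum():
            best_H, best_in = H, inliers
    if best_in.sum() >= 4:                     # refit on all inliers
        best_H = fit_homography(src[best_in], dst[best_in])
    return best_H, best_in
```

Note that a high inlier count alone does not guarantee the inliers are coplanar scene points, which is exactly the issue the paper addresses.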
|
Title: |
Real-Time Template Based Tracking with Global Motion Compensation in UAV Video
|
Author(s): |
Yuriy Luzanov, Todd Howlett and Mark Robertson |
Abstract: |
In this paper we describe a combination of a Kalman filter with global motion estimation between consecutive frames, implemented to improve target tracking in the presence of the rapid camera motions encountered in human-operated UAV-based video surveillance systems. The global motion estimation makes it possible to retain the localization of the tracked targets provided by the Kalman filter. The original target template is selected by the operator. An SSD error measure is used to find the best match for the template in video frames. |
|
Title: |
A stereophotogrammetric system to position patients for Proton Therapy
|
Author(s): |
Neil Muller, Evan de Kock, Ruby van Rooyen and Chris Trauernicht |
Abstract: |
Proton therapy is a successful treatment for many lesions that are hard to treat using conventional radiotherapy, as the radiation dose to nearby critical structures can be tightly controlled. To realise these advantages, the patient needs to be accurately positioned with respect to the beam, and monitored during treatment to ensure that no motion occurs. Due to the high cost of a gantry system for proton therapy, iThemba LABS uses a fixed beam-line and moves the patient using a suitable positioning device. In this paper, we discuss several aspects of the stereo vision based system used both to determine the position of the patient in the room and to monitor the patient during treatment. |
|
Title: |
Combination of video-based camera trackers using a dynamically adapted particle filter
|
Author(s): |
David Marimon and Touradj Ebrahimi |
Abstract: |
This paper presents a video-based camera tracker that combines marker-based and feature point-based cues in a particle filter framework. The framework relies on their complementary performance. Marker-based trackers can robustly recover camera position and orientation when a reference (marker) is available, but fail once the reference becomes unavailable. On the other hand, feature point tracking can still provide estimates
given a limited number of feature points. However, these tend to drift and usually fail to recover when the reference reappears. Therefore, we propose a combination where the estimate of the filter is updated from the individual measurements of each cue. More precisely, the marker-based cue is selected when the marker is available whereas the feature point-based cue is selected otherwise. The feature points tracked are the corners of the marker. Evaluations on real cases show that the fusion of these two approaches outperforms the individual tracking results.
Filtering techniques often suffer from the difficulty of modeling the motion with precision. A second related topic presented is an adaptation method for the particle filter, which achieves tolerance to fast motion manoeuvres. |
|
Title: |
A Passive 3D Scanner
|
Author(s): |
Matthias Elter, Andreas Ernst and Christian Küblbeck |
Abstract: |
We present a low-cost, passive 3d scanning system using an off-the-shelf consumer digital camera for image acquisition. We have developed a state of the art structure from motion algorithm for camera pose estimation and a fast shape from stereo approach for shape reconstruction. We use a volumetric approach to fuse partial shape reconstructions and a texture mapping technique for appearance recovery. We extend the state of the art by applying modifications of standard computer vision techniques to images of very high resolution to generate high quality textured 3d models. Our reconstruction results are robust and visually convincing. |
|
Title: |
A spatial sampling mechanism for effective background subtraction
|
Author(s): |
Marco Cristani and Vittorio Murino |
Abstract: |
In the video surveillance literature, background (BG) subtraction is an important and fundamental issue. In this context, a consistent group of methods operates at region level, evaluating pixel value statistics in fixed zones of interest so that a per-pixel foreground (FG) labeling can be performed. In this paper, we propose a novel hybrid pixel/region approach for background subtraction. The method, named Spatial-Time Adaptive Per Pixel Mixture Of Gaussian (S-TAPPMOG), evaluates pixel statistics considering zones of interest that change continuously over time, adopting a sampling mechanism. In this way, numerous classical BG issues can be faced efficiently: the background information can be modeled more accurately in chromatically uniform regions exhibiting stable behavior, thus minimizing foreground camouflage, and regions of similar color corrupted by heavy noise can be modeled successfully, minimizing false FG detections. The approach, which outperforms state-of-the-art methods, runs in quasi-real time and can be used as a basis for more structured background subtraction algorithms. |
|
Title: |
3D Human Tracking by Gaussian Process Annealing Particle Filter
|
Author(s): |
Michael Rudzsky, Leonid Raskin and Ehud Rivlin |
Abstract: |
We present an approach for tracking human body parts with pre-learned motion models in 3D using multiple cameras. We use an annealing particle filter to track the body parts and a Gaussian Process Dynamical Model to reduce the dimensionality of the problem, increase the tracker's stability and learn the motion models. We also present an improvement of the weighting function that makes it usable for occluded scenes. We compare our results to those achieved by a tracker based on a regular annealing particle filter and show that our algorithm tracks well even for low frame rate sequences. |
|
Title: |
AUTOMATIC KERNEL WIDTH SELECTION FOR NEURAL NETWORK BASED VIDEO OBJECT SEGMENTATION
|
Author(s): |
Dubravko Culibrk, Daniel Socek, Oge Marques and Borko Furht |
Abstract: |
Background modelling Neural Networks (BNNs) represent an approach to motion-based object segmentation in video sequences. BNNs are probabilistic classifiers with nonparametric, kernel-based estimation of the underlying probability density functions. The selection of a kernel width appropriate for the features used for segmentation is critical to achieving good segmentation results. The paper presents an enhancement of the methodology, introducing automatic estimation and adaptation of the kernel width. The proposed enhancement eliminates the need to determine the kernel width empirically, makes the methodology easier to use and more adaptive, and facilitates the evaluation of the approach.
|
|
Title: |
DETECTION AND TRACKING OF MULTIPLE MOVING OBJECTS IN VIDEO
|
Author(s): |
Wei Huang and Jonathan Wu |
Abstract: |
This paper presents a method for detecting and tracking multiple moving objects in both outdoor and indoor environments. The proposed method measures the change of a combined color-texture feature vector in each image block to detect moving objects. The texture feature is extracted from the DCT frequency domain. An attributed relational graph (ARG) is used to represent each object, in which vertices are associated with an object's sub-regions and edges represent spatial relations among the sub-regions. Object tracking and identification are accomplished by matching the input graph to the model graph. The notion of inexact graph matching enables us to track partially occluded objects. The experimental results demonstrate the efficiency of the proposed method. |
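A blockwise DCT texture descriptor of the kind mentioned can be sketched as follows. The 8x8 block size and the choice of the 3x3 low-frequency coefficients (minus the DC term) are assumptions for illustration, not the paper's exact feature:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II of a square block (the JPEG-style transform)."""
    n = block.shape[0]
    k = np.arange(n)[:, None]
    c = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0] /= np.sqrt(2.0)
    return c @ block @ c.T

def block_texture_features(gray, bs=8):
    """One feature vector per bs x bs block: low-frequency DCT coefficients
    with the DC (mean brightness) term zeroed out."""
    feats = {}
    for r in range(0, gray.shape[0] - bs + 1, bs):
        for c in range(0, gray.shape[1] - bs + 1, bs):
            F = dct2(gray[r:r + bs, c:c + bs])
            v = F[:3, :3].ravel().copy()
            v[0] = 0.0          # drop the DC term; keep texture only
            feats[(r, c)] = v
    return feats
```

Dropping the DC coefficient makes the descriptor insensitive to uniform brightness changes, which is one reason DCT-domain texture features pair well with color features.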
|
Title: |
Face Tracking Using Canonical Correlation Analysis
|
Author(s): |
José Alonso Ybanez Zepeda, Franck Davoine and Maurice Charbit |
Abstract: |
This paper presents an approach that incorporates canonical correlation analysis (CCA) for monocular 3D tracking of a face as a rigid object. It also provides a comparison between the linear and the non-linear (kernel) versions of CCA. The 3D pose of the face is estimated from observed raw-brightness shape-free 2D image patches. A parameterized geometric face model is adopted to crop out and normalize the shape of patches of interest from video frames. Starting from a face model fitted to an observed human face, the relation between a set of perturbed pose parameters of the face model and the associated image patches is learned using CCA or KCCA. This knowledge is then used to estimate the correction to be added to the pose of the face from an observed patch in the current frame. Experimental results on tracking faces in long video sequences show the effectiveness of the two proposed methods. |
|
Title: |
Hierarchical Multi-Resolution Model for Fast Energy Minimization of Virtual Cloth
|
Author(s): |
Le Thanh Tung and André Gagalowicz |
Abstract: |
In this paper we present a method for the fast energy minimization of virtual cloth. Our method is based on the idea of a multi-resolution particle system. Once the cloth garments are approximately positioned around a virtual character, their spring energy is still high, which causes a long execution time for the cloth simulation. An energy minimization algorithm is therefore needed; at a given cloth resolution it takes many iterations, each of which aims to move the particles (while handling collisions with the character) in order to reduce the energy. Even though the complexity of each iteration is O(n), with a high-resolution mass-spring system this minimization process can take a whole day. The hierarchical method presented in this paper significantly reduces the execution time of the minimization process. The garments are first discretized at several resolutions, from the lowest to the highest resolution accepted by the cloth simulator. Once the lowest-resolution particle system has been minimized, which takes little time, the next higher resolution is reconstructed and then minimized starting from the low energy of the previous resolution. We reach the highest resolution with the cloth energy significantly reduced within a reasonable time. |
|
Title: |
Hybrid dynamic sensor calibration from camera-to-camera mapping: an automatic approach
|
Author(s): |
Julie Badri, Christophe Tilmant, Jean-Marc Lavest, Quoc-Cong Pham and Patrick Sayd |
Abstract: |
Video surveillance is becoming more and more widespread in industry and often requires an automatic calibration system to remain efficient.
In this paper, a video-surveillance system is presented that uses stationary-dynamic camera devices. The static camera is used to monitor a global scene. When it detects a moving object, the dynamic camera is controlled so as to be centered on this object. We describe a method of camera-to-camera calibration used to command the dynamic camera. This method takes into account the intrinsic camera parameters, the 3D scene geometry and the fact that the mechanism of an inexpensive dynamic camera does not fit the classical geometrical model. Finally, experimental results attest to the accuracy of the proposed solution. |
|
Title: |
ENERGY MINIMIZATION APPROACH FOR ONLINE DATA ASSOCIATION WITH MISSING DATA
|
Author(s): |
Abir El Abed, Séverine Dubuisson and Dominique Bereziat |
Abstract: |
The data association problem is of crucial importance for improving online target tracking performance in many difficult visual environments. Usually, association effectiveness relies on prior information and on the observation category. However, problems can arise when targets are quite similar: neither color nor shape is then helpful information for achieving data association. Likewise, problems can also arise when tracking deformable targets with complex motions under the constraint of missing data. Such restrictions, i.e. the lack of prior information, limit association performance. To remedy this, we propose a novel method for data association, inspired by the evolution of the target dynamic model and based on a global minimization of an energy vector. The main idea is to measure the absolute geometric accuracy between features. Being parameterless is the main advantage of our energy minimization approach: only one piece of information, the position, is used as input to our algorithm. We have tested our approach on several sequences to show its effectiveness. |
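As a point of reference for position-only association, a baseline step in the spirit described (position as the sole input) might look like the following greedy nearest-neighbour sketch; the gating distance is an assumed parameter, and the paper's energy-vector minimization is not reproduced here:

```python
import numpy as np

def associate(predicted, detections, gate=50.0):
    """Greedy position-only association: repeatedly link the closest
    remaining (track, detection) pair, ignoring pairs beyond the gate."""
    # pairwise Euclidean distances between predicted positions and detections
    d = np.linalg.norm(predicted[:, None, :] - detections[None, :, :], axis=2)
    pairs, used_t, used_d = [], set(), set()
    for flat in np.argsort(d, axis=None):      # visit pairs by growing distance
        ti, di = divmod(int(flat), d.shape[1])
        if ti not in used_t and di not in used_d and d[ti, di] <= gate:
            pairs.append((ti, di))
            used_t.add(ti); used_d.add(di)
    return pairs
```

Greedy matching is locally optimal only; a global assignment (e.g. Hungarian method) or the paper's energy minimization avoids the cascading errors a greedy pass can make when targets are close together.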
|
Title: |
Automatic augmented video creation for markerless environments
|
Author(s): |
Jairo Sanchez and Diego Borro |
Abstract: |
In this paper we present a step-by-step algorithm to calculate the camera motion in a video sequence. Our method can search and track feature points along the video sequence, calibrate pinhole cameras and estimate the camera motion automatically. In the first step, a 2D feature tracker finds and tracks points in the video. Using this information, in a second step outliers are detected using epipolar geometry robust estimation techniques. Finally, the geometry is refined using non-linear optimization techniques, obtaining the camera's intrinsic and extrinsic parameters. Our approach does not need markers, and there are no geometrical constraints on the scene either. Thanks to the calculated camera pose it is possible to add virtual objects to the video sequence in a realistic manner. |
|
Title: |
Football Player Tracking from Multiple Views
|
Author(s): |
Alexandra Koutsia, Nikos Grammalidis, Kosmas Dimitropoulos, Mustafa Karaman and Lutz Goldmann |
Abstract: |
In this work, our aim is to develop an automated system which provides data useful for football game analysis. Information from multiple cameras is used to perform player recognition and tracking. A background segmentation approach, which operates with the invariant Gaussian colour model and uses temporal information, is used to achieve more accurate results. This way the unsteady regions surrounding the players are eliminated. Information derived and matched from all cameras is then used to perform tracking, using an advanced Multiple Hypothesis Tracking algorithm. |
|
Title: |
RECONSTRUCTING WAFER SURFACES WITH MODEL BASED SHAPE FROM SHADING
|
Author(s): |
Alexander Nisenboim and Alfred Bruckstein |
Abstract: |
Model-based Shape From Shading (SFS) is a promising paradigm introduced by J. Atick for solving such inverse problems when some prior information on the depth profiles to be recovered is available. In the present work we adopt this approach to address the problem of recovering wafer profiles from images taken by a Scanning Electron Microscope (SEM). This problem arises naturally in the microelectronics inspection industry. A low-dimensional model based on our prior knowledge of the types of depth profiles of wafer surfaces has been developed, and based on it the SFS problem becomes an optimal parameter estimation problem. Wavelet techniques are then employed to calculate a good initial guess for the Levenberg-Marquardt (LM) minimization process that yields the desired profile parametrization. The proposed algorithm has been tested under both Lambertian and SEM imaging models. |
|
Title: |
Reliable Detection of Camera Motion Based on Weighted Optical Flow Fitting
|
Author(s): |
Rodrigo Minetto, Neucimar Leite and Jorge Stolfi |
Abstract: |
Our goal in this paper is the reliable detection of camera motion (pan/zoom/tilt) in video records. We propose an algorithm based on weighted least-squares fitting of the optical flow, where an iterative procedure is used to improve the corresponding weights. For the optical flow computation we use the Kanade-Lucas-Tomasi feature tracker. Besides detecting camera motion, our algorithm provides a precise and reliable quantitative analysis of the movements. It also provides a rough segmentation of each frame into ``foreground'' and ``background'' regions, corresponding to the moving and stationary parts of the scene, respectively. Tests with two real videos show that the algorithm is fast and efficient, even in the presence of large object movements.
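The weighted least-squares fit with iteratively improved weights can be sketched as below. The pan/zoom flow model and the particular weighting function are simplifications chosen for illustration, not the authors' exact formulation:

```python
import numpy as np

def fit_camera_motion(pts, flow, n_iter=10):
    """Iteratively reweighted least-squares fit of a pan/zoom flow model
    u = tx + s*x, v = ty + s*y; flow vectors far from the model (moving
    foreground objects) are progressively downweighted."""
    x, y = pts[:, 0], pts[:, 1]
    u, v = flow[:, 0], flow[:, 1]
    n = len(pts)
    A = np.zeros((2 * n, 3))
    A[0::2, 0] = 1.0; A[0::2, 2] = x          # u rows: tx + s*x
    A[1::2, 1] = 1.0; A[1::2, 2] = y          # v rows: ty + s*y
    b = np.empty(2 * n); b[0::2] = u; b[1::2] = v
    w = np.ones(n)
    for _ in range(n_iter):
        ww = np.repeat(w, 2)                  # same weight for u and v rows
        params, *_ = np.linalg.lstsq(A * ww[:, None], b * ww, rcond=None)
        tx, ty, s = params
        res = np.hypot(u - (tx + s * x), v - (ty + s * y))
        w = 1.0 / (1.0 + res ** 2)            # soft inlier weight
    return params, res
```

The final residuals double as the rough foreground/background segmentation the abstract mentions: points that still disagree with the fitted model belong to moving objects.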
|
|
Title: |
SMARTCAM FOR REAL-TIME STEREO VISION: address-event based embedded system
|
Author(s): |
Nenad Milosevic, Stephan Schraml and Peter Schön |
Abstract: |
We present a novel real-time stereo smart camera for sparse disparity (depth) map estimation of moving objects at up to 200 frames/sec. It is based on a 128x128 pixel asynchronous optical transient sensor, using address-event representation (AER) protocol. An address-event based algorithm for stereo depth calculation including calibration, correspondence and reconstruction processing steps is also presented. Due to the on-chip data pre-processing the algorithm can be implemented on a single low-power digital signal processor. |
|
Title: |
Estimating Large Local Motion in Live-Cell Imaging Using Variational Optical Flow
|
Author(s): |
Jan Hubený, Vladimír Ulman and Pavel Matula |
Abstract: |
The paper studies state-of-the-art variational optical flow methods for motion tracking of fluorescently labeled targets in living cells. Variational optical flow methods have not previously been applied to living-cell experiments, for two likely reasons. First, classical optical flow methods can reliably estimate the motion vector field only if the pixel displacement is small (up to one pixel/voxel); unfortunately, this assumption is typically not fulfilled in time-lapse sequences of living cells. Second, the computation of the flow field used to be relatively time consuming, making these methods practically unusable, especially when processing 3D image sequences acquired with modern confocal microscopes. In this paper, we show that both limitations can be overcome when the latest variational optical flow methods and an appropriate numerical solution are employed. |
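For reference, the classical variational iteration (Horn-Schunck) on which such methods build can be sketched as follows. Large-displacement variants of the kind the paper needs embed this in a coarse-to-fine pyramid, which is omitted here:

```python
import numpy as np

def horn_schunck(I1, I2, alpha=1.0, n_iter=300):
    """Classical Horn-Schunck optical flow between two grayscale frames:
    alternate between neighbourhood averaging (smoothness term) and a
    brightness-constancy correction. Valid for small displacements only."""
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def avg(f):                      # 4-neighbour average with edge padding
        p = np.pad(f, 1, mode='edge')
        return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0

    for _ in range(n_iter):
        ua, va = avg(u), avg(v)
        t = (Ix * ua + Iy * va + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ua - Ix * t
        v = va - Iy * t
    return u, v
```

The update fails once displacements exceed about a pixel, which is exactly the first limitation the abstract identifies for live-cell sequences.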
|
Title: |
PLANNING OF A MULTI STEREO VISUAL SENSOR SYSTEM FOR A HUMAN ACTIVITIES SPACE
|
Author(s): |
Jiandan Chen, Siamak Khatibi and Wlodek Kulesza |
Abstract: |
The paper presents a method for planning the positions of multiple stereo sensors in an indoor environment. This is a component of an Intelligent Vision Agent System. We propose a new approach to optimizing multiple stereo visual sensor configurations in 3D space in order to obtain efficient visibility for surveillance, tracking and 3D reconstruction. The paper introduces a constraint method for modelling a Field of View in spherical coordinates, a tetrahedron model for target objects, and a stereo view constraint for the relationship between paired cameras. The constraints were analyzed and the minimum number of stereo pairs necessary to cover the entire target space was found by integer linear programming. 3D simulations of human body and activity space coverage in Matlab illustrate the problem. |
|
Title: |
Streaming Clustering Algorithms for Foreground Detection in Color Videos
|
Author(s): |
Zoran Duric, Dana Richards and Wallace Lawson |
Abstract: |
A new method is given for locating foreground objects in color videos. This is an essential task in many applications such as surveillance. The algorithm uses clustering techniques to permit flexibility and adaptability in the description of the background. The approach is an example of the streaming-data paradigm of algorithm design, which only permits limited information to be retained about previous video frames. Experimental results show that it is an effective and robust technique.
|
|
Title: |
A DISTRIBUTED VISION SYSTEM FOR BOAT TRAFFIC MONITORING IN THE VENICE GRAND CANAL
|
Author(s): |
Luca Iocchi, Domenico Bloisi, Riccardo Leone, Roberta Pigliacampo, Luigi Tombolini and Luca Novelli |
Abstract: |
In this paper we describe a system for boat traffic monitoring that has been realized for analyzing and computing statistics of traffic in the Grand Canal in Venice. The system is based on a set of survey cells monitoring about 6 km of canal. Each survey cell contains three cameras oriented in three directions and covering about 250-300 meters of the canal. This paper presents the segmentation and tracking phases that are used to detect and track boats in the canal, and an experimental evaluation of the system showing the effectiveness of the approach in the required tasks. |
|
Title: |
A Double Layer Background Model to Detect Unusual Events
|
Author(s): |
Sandra Canchola, Joaquin Salas, Hugo Jimenez, Joel Gonzalez-Barbosa and Juan Hurtado-Ramos |
Abstract: |
We propose a double-layer background representation to detect novelty in image sequences, capable of handling non-stationary scenarios such as vehicle intersections. In the first layer, an adaptive per-pixel appearance background model is computed; subtracting it from the current image results in a blob description of the moving objects. In the second layer, motion analysis is performed on the blobs with a Mixture of Gaussians over motion direction. We use the two layers to create the usual-activity space representation and detect unusual activity. Our experiments give clear indication that the proposed scheme is sound for detecting activities such as vehicles running red lights and making forbidden turns.
|
|
Title: |
Autonomous Tracking System for Airport Lighting Quality Control
|
Author(s): |
James Niblock, Jian-xun Peng and Karen McMenemy |
Abstract: |
The central aim of this research is to develop an autonomous measurement system for assessing the performance of an airport lighting pattern. The system improves safety with regard to aircraft landing procedures by ensuring the airport lighting is properly maintained and conforms to current standards and recommendations laid down by the International Civil Aviation Organisation (ICAO).
A vision system, mounted in the cockpit of an aircraft, is capable of capturing sequences of airport lighting images during a normal approach to an aerodrome. These images are post-processed to determine the grey level of the approach lighting pattern (ALP). In this paper, two tracking algorithms are presented which can detect and track individual luminaires throughout the complete image sequence. The effective tracking of the luminaires is central to the long-term goal of this research, which is to assess the performance of the luminaires from the recorded grey level data extracted for each detected luminaire. The first algorithm is the NM* feature tracker, which has been optimised for the specific task of airport lighting; to assess its effectiveness it is compared to the Kanade-Lucas-Tomasi (KLT) feature tracker. In order to validate both algorithms a synthetic 3D model of the ALP is presented. To further assess the robustness of the algorithms, results from an actual approach to a UK aerodrome are presented.
The results show that although the KLT and NM feature trackers are both effective in tracking airport lighting, the NM algorithm is better suited to the task due to its reliable grey level information. Limitations of the KLT algorithm, such as the static window size, result in lossy grey level data and hence lead to inaccurate results.
*Removed due to containing author information. |
|
Title: |
Background Subtraction for Realtime Tracking of a Tennis Ball
|
Author(s): |
David Mould, Jinzi Mao and Sriram Subramanian |
Abstract: |
In this paper we investigate real-time tracking of a tennis ball using various image differencing techniques. First, we considered a simple background subtraction method with subsequent ball verification (BS). We then implemented two variants of our initial background subtraction method. The first is an image differencing technique that considers the difference in ball position between the current and previous frames along with a background model that uses a single Gaussian distribution for each pixel. The second uses a mixture of Gaussians to accurately model the background image. Each of these three techniques constitutes a complete solution to the tennis ball tracking problem. In a detailed evaluation of the techniques in different lighting conditions we found that the mixture of Gaussians model produces the best quality tracking. Our contribution in this paper is the observation that simple background subtraction can outperform more sophisticated techniques on difficult problems, and we provide a detailed evaluation and comparison of the performance of our techniques, including a breakdown of the sources of error.
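The per-pixel single-Gaussian background model mentioned above can be sketched as follows; the learning rate and detection threshold are assumed values, not those of the paper:

```python
import numpy as np

def update_background(bg_mean, bg_var, frame, alpha=0.05, k=2.5):
    """Per-pixel single-Gaussian background model with running updates.
    A pixel is foreground when it lies more than k standard deviations
    from its background mean. Returns the mask and the updated model."""
    diff = frame - bg_mean
    fg = diff ** 2 > (k ** 2) * bg_var
    m = ~fg                                   # update only background pixels
    bg_mean = np.where(m, (1 - alpha) * bg_mean + alpha * frame, bg_mean)
    bg_var = np.where(m, (1 - alpha) * bg_var + alpha * diff ** 2, bg_var)
    return fg, bg_mean, bg_var
```

A mixture-of-Gaussians model generalizes this by keeping several (mean, variance, weight) triples per pixel, which handles multi-modal backgrounds such as flickering lights at the cost of extra computation.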
|
|
Title: |
GENERIC OBJECT TRACKING FOR FAST VIDEO ANNOTATION
|
Author(s): |
Remi Trichet and Bernard Merialdo |
Abstract: |
This article describes a method for fast video annotation using an object tracking technique. This work is part of the development of a system for interactive television, where video objects have to be identified in the video program. This environment puts specific requirements on the object tracking technique. We propose to use a generic technique based on keypoints. We describe three contributions in order to best satisfy those requirements: a model for a broader temporal use of the keypoints, an ambient color adaptation pre-treatment enhancing the keypoint detector performance, and a motion based bounding box repositioning algorithm. Finally, we present experimental results to validate those contributions. |
|
|
Human Presence Detection for Context-aware Systems
|
Title: |
Fast Adaptable Skin Colour Detection In RGB Space
|
Author(s): |
|
Abstract: |
This paper presents a novel skin colour classifier that uses a linear container in order to confine a volume of the RGB space where skin colour is likely to appear. The container can be adapted, using a single training image, to maximize the detection of a particular skin tonality. The classifier has minimum storage requirements, it is very fast to evaluate, and despite operating in the RGB space, provides equivalent illumination (brightness) independence to that of classifiers that work in the rg-plane. The performance of the proposed classifier is evaluated and compared with other classifiers. Finally, conclusions are drawn. |
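A linear container in RGB space can be illustrated with the well-known rule-of-thumb thresholds below (after Peer et al.); these fixed constants are a generic stand-in for illustration, not the adapted container the paper trains from a single image:

```python
def is_skin(r, g, b):
    """Crude RGB skin test: a conjunction of linear half-space constraints
    forming a container in RGB space (daylight-illumination variant)."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and   # skin is not grayish
            abs(r - g) > 15 and r > g and r > b)   # red dominates
```

Because every constraint is a comparison of linear combinations of R, G and B, evaluating the classifier costs only a handful of operations per pixel, which is the speed/storage advantage the abstract claims.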
|
Title: |
STATISTICAL-BASED SKIN CLASSIFIER FOR OMNI-DIRECTIONAL IMAGES
|
Author(s): |
Bill Kapralos, Miguel Vargas Martin and Asaf Shupo |
Abstract: |
This paper describes work in progress on the development of a simple, video-based system capable of efficiently detecting human skin in images captured with a
panoramic video sensor. The video sensor is used to provide a view
of the entire visual hemisphere thereby providing multiple dynamic
views of a scene. Color models of both skin and non-skin were
constructed with images obtained with the panoramic video sensor.
Using a stochastic weak estimator coupled with a linear classifier,
preliminary results suggest the system is capable of distinguishing
images that contain human skin from images that do not. The ability
to both obtain an image of the entire scene from a single viewpoint
using the panoramic video sensor and determine whether the image
contains human skin (e.g., one or more humans) is practical for a
number of applications including video surveillance and
teleconferencing. |
|
Title: |
PEDESTRIAN DETECTION BY RANGE IMAGING
|
Author(s): |
Heinz Hügli and Thierry Zamofing |
Abstract: |
Remote detection by camera offers a versatile means for recording people's activities. Relying principally on changes in video images, the method tends to fail in the presence of shadows and illumination changes. This paper explores a possible remedy to these problems: using range cameras instead of conventional video cameras. As range is an intrinsic measure of an object's geometry, it is basically not affected by illumination. The study described in this paper considers range detection by two state-of-the-art cameras, namely a stereo camera and a time-of-flight camera. The investigations performed cover typical pedestrian detection situations. The presented results are analyzed and their performance compared with conventional results. The study shows the effective potential of range cameras to eliminate light-change problems such as shadow effects, but also presents some current limitations of range cameras. |
|
Title: |
DYNAMIC CONTEXT DRIVEN HUMAN DETECTION AND TRACKING IN MEETING SCENARIOS
|
Author(s): |
Peng Dai, Guangyou Xu and Linmi Tao |
Abstract: |
As a significant part of context-aware systems, human-centered visual processing is required to be adaptive and interactive within dynamically changing context in real-life situations. In this paper a novel integrated bottom-up and top-down approach is proposed to solve the problem of dynamic context driven visual processing in meeting scenarios. A set of visual detection, verification and tracking modules is effectively organized to extract rough-level visual information, on which a bottom-up context analysis is performed through a Bayesian network. In reverse, the results of scene analysis are applied as top-down guidance to control detailed-level visual processing. The system has been tested in a real-life meeting environment covering three typical scenarios: presentation, discussion and meeting break. The experiments show the effectiveness and robustness of our approach within continuously changing meeting scenarios.
|
|
|
3D Model Acquisition and Representation
|
Title: |
On Projection Matrix Identification for Camera Calibration
|
Author(s): |
Michał Tomaszewski and Władyslaw Skarbek |
Abstract: |
The projection matrix identification problem is considered with application to the calibration of intrinsic camera parameters. Physical and orthogonal intrinsic camera models in the context of 2D and 3D data are discussed. A novel nonlinear goal function is proposed for the homographic calibration method, exhibiting fast convergence of the Levenberg-Marquardt optimization procedure. Three models (linear, quadratic, and rational) and five optimization procedures for their identification were compared with respect to time complexity, projection accuracy, and intrinsic parameter accuracy. The analysis has been performed for both raw and calibrated pixel data. The recommended technique, with the best performance in all the quality measures used, is the Householder QR decomposition for the linear least squares method (LLSM) applied to the linear form of the projection equations.
|
|
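As an illustrative aside (not the authors' code), the linear least-squares formulation of the projection equations that the abstract's LLSM solver addresses can be sketched in NumPy; `np.linalg.lstsq` stands in for the Householder QR solve of the same overdetermined system:

```python
import numpy as np

def estimate_projection_matrix(X, x):
    """Linear least-squares (DLT-style) estimate of a 3x4 projection
    matrix from n >= 6 correspondences between 3-D points X (n,3) and
    2-D pixels x (n,2).  The scale is fixed by setting P[2,3] = 1, so
    the remaining 11 entries solve an ordinary linear system -- the
    kind of problem a Householder-QR / LLSM solver targets."""
    n = X.shape[0]
    A = np.zeros((2 * n, 11))
    b = np.zeros(2 * n)
    for i, ((Xw, Yw, Zw), (u, v)) in enumerate(zip(X, x)):
        # u*(p31*X + p32*Y + p33*Z + 1) = p11*X + p12*Y + p13*Z + p14
        A[2 * i]     = [Xw, Yw, Zw, 1, 0, 0, 0, 0, -u * Xw, -u * Yw, -u * Zw]
        b[2 * i]     = u
        A[2 * i + 1] = [0, 0, 0, 0, Xw, Yw, Zw, 1, -v * Xw, -v * Yw, -v * Zw]
        b[2 * i + 1] = v
    p, *_ = np.linalg.lstsq(A, b, rcond=None)  # QR or SVD both solve this
    return np.append(p, 1.0).reshape(3, 4)
```

With exact, noise-free correspondences the recovered matrix equals the true one up to the fixed scale.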
Title: |
SKETCH INPUT OF 3D MODELS: CURRENT DIRECTIONS
|
Author(s): |
Peter Varley and Pedro Company |
Abstract: |
In the last few years, there has been considerable interest in sketch input of 3D solid models. This paper summarises recent developments and discusses the directions these developments are taking. We consider three developments in particular: the move away from line labelling as a technique, in recognition of the problem posed by extended vertices; the increasing use of symmetry detection as a tool for reconstruction; and progress towards interpretation of drawings depicting curved objects. |
|
Title: |
EXPERIMENTAL STUDY FOR 3D RECONSTRUCTION BASED ON ROTATIONAL STEREO
|
Author(s): |
Xi-Qing Qi, S.Y. Chen, Sheng Liu and Jianwei Zhang |
Abstract: |
With a traditional stereo vision system, images of the object are acquired from different positions using two cameras and the depth (distance) information is recovered from disparity. This is rather costly and still inconvenient to implement, since the sensor needs to be moved around the object by a manipulator for complete model construction. In this paper, a 3D reconstruction method based on rotational stereo is proposed. The object to be reconstructed is placed on a rotational plate which can be precisely controlled by a computer. Only one camera is needed to capture the object images, which reduces the implementation cost and cuts down the time needed for calibration. A series of images can easily be obtained for recovering the complete model of the object. Results of simulation and real experiments are very satisfactory in both feasibility and accuracy. |
|
Title: |
Cylindrical B-Spline Model For Representation and Fitting of Heart Surfaces
|
Author(s): |
Tingting Jiang, Shenyong Chen, Qiu Guan and Chunyan Yao |
Abstract: |
Heart diseases cause high mortality, while therapy for these diseases is still imperfect. Consequently, reconstruction of the human heart is valuable for clinical diagnosis and treatment. This paper proposes a new approach for three-dimensional (3-D) representation of the external surface of human hearts based on a B-Spline model. The model is represented in both Cartesian and cylindrical coordinates. By comparison, we find that the cylindrical coordinate system is more convenient and much more closely fits the structure of human hearts. The fitting is based on a cloud of points which can be extracted from computed tomography (CT) slices by an edge detection method. Results show that a cylindrical B-Spline with a given number of control points can fit the external surface of a phantom heart well, and can then be further used for quantitative and functional analysis of the heart easily and accurately. |
|
Title: |
Three-Dimensional Monocular Scene Reconstruction For Service-Robots -- An Application
|
Author(s): |
Sascha Jockel, Tim Baier-Löwenstein and Jianwei Zhang |
Abstract: |
This paper presents an image-based three-dimensional reconstruction system for service-robot applications in everyday table scenarios. Image-driven environment perception is one of the main research topics in the field of autonomous robot applications and is fundamental for further action planning such as three-dimensional collision detection and prevention in grasping tasks.
Perception is done at two spatially and temporally varying positions by a micro-head camera mounted on a six-degree-of-freedom robot arm of our mobile service-robot TASER. The epipolar geometry and fundamental matrix are computed from corners extracted in both input images by a Harris corner detector. The input images are rectified using the fundamental matrix to align corresponding scanlines on the same vertical image coordinates. Afterwards, stereo correspondence is established by a fast Birchfield algorithm that provides a 2.5-dimensional depth map of the scene. Based on the depth map, a three-dimensional textured point cloud is represented as an interactive OpenGL scene model for further action-planning algorithms in three-dimensional space. |
|
Title: |
ONTOLOGY-DRIVEN 3D RECONSTRUCTION OF ARCHITECTURAL OBJECTS
|
Author(s): |
Christophe CRUZ, Franck Marzani and Frank BOOCHS |
Abstract: |
This paper presents an ontology-driven 3D architectural reconstruction approach based on a survey with a 3D scanner. This solution is powerful in the field of civil engineering projects, saving time during cost estimation. This time is saved by using efficient scanning instruments and a fast reconstruction of a digital mock-up that can be used in specific software. The reconstruction approach considers the three following issues: How to define an ontology to drive the reconstruction process? How to find semantic objects in a cloud of points? How to control an algorithm so as to find all objects in the cloud of points? This paper underlines the solutions found for these questions. |
|
Title: |
AN ACTIVE STEREOSCOPIC SYSTEM FOR ITERATIVE 3D SURFACE RECONSTRUCTION
|
Author(s): |
Franck Marzani, Yvon Voisin, Frank Boochs and Wanjing Li |
Abstract: |
For most traditional active 3D surface reconstruction methods, a common feature is that the object surface is scanned uniformly, so that the final 3D model contains a very large number of points, which requires huge storage space and makes transmission and visualization time-consuming. A post-process is then necessary to reduce the data by decimation. In this paper, we present a new active stereoscopic system based on iterative spot pattern projection. The 3D surface reconstruction process begins with a regular spot pattern, and then the pattern is modified progressively according to the geometry of the object's surface. The adaptation is controlled by the estimation of the local surface curvature of the currently reconstructed 3D surface. The reconstructed 3D model is optimized: it retains all the morphological information about the object with a minimal number of points. Therefore, it requires little storage space, and no further mesh simplification is needed. |
|
Title: |
Theoretical Foundations of 3D Scalar Field Visualization
|
Author(s): |
Mohammed Mostefa Mesmoudi, Leila De Floriani and Paolo Rosso |
Abstract: |
In this paper we introduce two novel techniques that allow a three-dimensional scalar field to be visualized in the three-dimensional space $R^3$. Many applications are possible, especially in medical imaging. New multiresolution models can be built based on our techniques. Moreover, we show that these two visualization techniques allow the extraction of morphological features of the field that may not be captured by classical methods.
|
|
|
Beyond Image Enhancement
|
Title: |
IMAGE ENHANCEMENT BY REGION DETECTION ON CFA DATA IMAGES
|
Author(s): |
Sebastiano Battiato, Silvia Cariolo, Giovanni Gallo and Gianpiero Di Blasi |
Abstract: |
The paper proposes a new method devoted to identifying specific semantic regions in CFA (Color Filtering Array) data images representing natural scenes. Making use of statistics collected over a large dataset of high-quality natural images, the method uses spatial features and Principal Component Analysis (PCA) in the HSL and normalized-RG color spaces. The classes considered, taking into account “visual significance”, are skin, vegetation, blue sky and sea. Semantic information is obtained on a per-pixel basis, leading to meaningful although not necessarily spatially coherent regions. This information is used for automatic color rendition of natural digital images based on adaptive color correction. The overall method outperforms previous results, providing reliable information validated by objective measurements and subjective experiments. |
|
Title: |
Towards Intent Dependent Image Enhancement: State-of-the-art and Recent Attempts
|
Author(s): |
Gabriela Csurka, Marco Bressan and Sebastien Favre |
Abstract: |
Image enhancement is mostly driven by intent. Solutions currently available, based on image degradations, are insufficient, and we need to extend their scope to multiple semantic, aesthetic and contextual dimensions. In this article we detail some recent efforts in these directions, focusing on the particular problem of semantically dependent enhancement. To illustrate our approach, we restrict our experiments to the variations that might be generated from a particular image enhancement approach and learn the mapping between semantic categories and enhancement space from user preference evaluations.
|
|
Title: |
INTEGRATING IMAGING AND VISION FOR CONTENT-SPECIFIC IMAGE ENHANCEMENT
|
Author(s): |
Francesca Gasparini, Gianluigi Ciocca, Claudio Cusano and Raimondo Schettini |
Abstract: |
The quality of real-world photographs can often be considerably improved by digital image processing. In this article we describe our approach, integrating imaging and vision, for content-specific image enhancement. According to our approach, the overall quality of digital photographs is improved by a modular image enhancement procedure driven by the image content. Single processing modules can be considered as autonomous elements. The modules can be combined to improve the overall quality according to image and defect categories. |
|
|
Computer Vision Methods in Medicine
|
Title: |
MRI SEGMENTATION USING MULTIFRACTAL ANALYSIS AND MRF MODELS
|
Author(s): |
Su RUAN and Jonathan Bailleul |
Abstract: |
In this paper, we demonstrate the interest of the multifractal analysis for removing the ambiguities due to the intensity overlap, and we propose a brain tissue segmentation method from Magnetic Resonance Imaging (MRI) images, which is based on Markov Random Field (MRF) models. The brain tissue segmentation consists in separating the encephalon into the three main brain tissues: grey matter, white matter and cerebrospinal fluid (CSF). The classical MRF model uses the intensity and the neighbourhood information, which is not robust enough to solve problems, such as partial volume effects. Therefore, we propose to use the multifractal analysis, which can provide information on the intensity variations of brain tissues. This knowledge is modelled and then incorporated into a MRF model. This technique has been successfully applied to real MRI images. The contribution of the multifractal analysis is proved by comparison with a classical MRF segmentation using simulated data. |
|
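As a hedged illustration of the generic MRF machinery such segmentation methods build on (the authors' model additionally incorporates multifractal features; this sketch is not their implementation), a minimal Iterated Conditional Modes labelling with a Potts smoothness term might look like:

```python
import numpy as np

def icm_segment(img, mu, beta=1.0, n_iter=5):
    """Iterated Conditional Modes for a K-class Potts-MRF labelling.
    Data term: squared distance of a pixel to the class means `mu`;
    smoothness term: `beta` times the number of disagreeing
    4-neighbours.  Greedily lowers the MRF energy pixel by pixel."""
    labels = np.abs(img[..., None] - mu).argmin(-1)  # init: nearest mean
    H, W = img.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best, best_e = labels[i, j], np.inf
                for k in range(len(mu)):
                    e = (img[i, j] - mu[k]) ** 2
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W and labels[ni, nj] != k:
                            e += beta
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels
```

On a noisy two-region image with known class means, a few sweeps remove most isolated misclassifications that a per-pixel nearest-mean rule leaves behind.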
Title: |
Automated tumor segmentation using level set method
|
Author(s): |
stephane lebonvallet, sonia katchadourian and Su Ruan |
Abstract: |
In the framework of detection, diagnosis and treatment planning of tumours, Positron Emission Tomography (PET) and Magnetic Resonance Imaging have become the most efficient techniques for body and brain examination. Radiologists usually take several hours to manually segment the region of interest (ROI) on images to obtain information about the patient's pathology. It is very time-consuming. The aim of our study is to propose an automatic solution to this problem to help the radiologist's work. This paper presents an approach to tumour segmentation based on a fast level set method. The results obtained by the proposed method on both PET and MRI images are encouraging. |
|
Title: |
RECONSTRUCTING IVUS IMAGES FOR AN ACCURATE TISSUE CLASSIFICATION
|
Author(s): |
Karla L Caballero Espinosa, Joel Barajas Zamora, Oriol Pujol, Petia Radeva and Josefina Mauri |
Abstract: |
Plaque rupture in coronary vessels is one of the principal causes of sudden death in western societies. Reliable diagnostic tools that detect and quantify vulnerable plaque are of great interest to physicians for developing effective treatments. To achieve this, tissue classification must be performed. Intravascular Ultrasound (IVUS) is a powerful technique for exploring vessel walls and observing their morphology and histological properties. In this paper, we propose a method to reconstruct IVUS images from the raw Radio Frequency (RF) data coming from the ultrasound catheter. This framework offers a normalization scheme to compare different patient studies accurately. Then, an automatic tissue classification is proposed based on image texture analysis and the Adaptive Boosting (AdaBoost) learning technique combined with Error-Correcting Output Codes (ECOC). In this study, 9 in-vivo cases are reconstructed with 7 different parameter sets. This method improves the image-based classification rate, yielding 91% well-detected tissue with the best parameter set. It also reduces inter-patient variability compared with the analysis of DICOM images, which are obtained from the commercial equipment. |
|
Title: |
PRECISE APPROACH FOR RECOVERING POSES OF DISTAL LOCKING HOLES FROM SINGLE CALIBRATED X-RAY IMAGE FOR COMPUTER-ASSISTED INTRAMEDULLARY NAILING OF FEMORAL SHAFT FRACTURES
|
Author(s): |
Guoyan Zheng and Xuan Zhang |
Abstract: |
One of the most difficult steps of intramedullary nailing of femoral shaft fractures is distal locking – the insertion of distal transverse interlocking screws, for which it is necessary to know the position and orientation of the distal locking holes of the intramedullary nail. This paper presents a precise approach for solving this problem using single calibrated X-ray image via parameter decomposition. The problem is formulated as a model-based optimal fitting process, where the to-be-optimized parameters are decomposed into two sets: (a) the angle between the nail axis and its projection on the imaging plane, and (b) the translation and rotation of the geometrical models of the distal locking holes around the nail axis. By using a hybrid optimization technique coupling an evolutionary strategy and a local search algorithm to find the optimal values of the latter set of parameters for any given value of the former one, we reduce the multiple-dimensional model-based optimal fitting problem to a one-dimensional search along a finite interval. We report the results of our in vitro experiments, which demonstrate that the accuracy of our approach is adequate for successful distal locking of intramedullary nails. |
|
Title: |
Automatic Heart Localization in Ultrasound Fetal Images
|
Author(s): |
Mozart Lemos de Siqueira and Philippe Olivier Alexandre Navaux |
Abstract: |
The research presented here is based on pattern recognition of cardiac structures using a template computed in advance with a probability density function that takes the gray levels of the images as parameters. This function is also used in the search for similar cardiac structures: it is applied over the whole image and then compared with the pattern of the structure of interest using the Bhattacharyya coefficient, whose similarity value determines the choice of the structure of interest.
To improve results and performance, the method uses texture features to isolate the region of interest inside the heart. A prototype was developed to evaluate the proposed method. Several experiments were carried out, and the results are presented in this text. |
|
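The Bhattacharyya coefficient used for such template matching is simple to compute between two gray-level histograms; a minimal sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def bhattacharyya_coefficient(h1, h2):
    """Bhattacharyya coefficient between two gray-level histograms,
    normalised to probability distributions first.  Returns 1.0 for
    identical distributions and 0.0 for disjoint ones."""
    p = np.asarray(h1, dtype=float)
    q = np.asarray(h2, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

In a localization setting, the coefficient would be evaluated between the template histogram and the histogram of each candidate window, keeping the window with the highest score.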
Title: |
Notes3D: Endoscopes learn to see 3-D - Basic Algorithms For A Novel Endoscope
|
Author(s): |
Jochen Penne, Sophie Krüger, Hubertus Feußner and K. Höller |
Abstract: |
TOF chips enable the acquisition of distance information via the phase shift between a reference signal and its reflection from the scene. Transmitting the reference signal via the light conductor of an endoscope and mounting a TOF chip distally enables the acquisition of distance information through an endoscope. The hardware combination of TOF technology and endoscope optics is termed a Multisensor-Time-Of-Flight endoscope (MUSTOF endoscope). Utilizing a MUSTOF endoscope in the context of NOTES procedures enables the direct endoscopic acquisition of 3-D information (NOTES3D). While hardware issues are currently under investigation, an algorithmic framework is proposed here, dealing with the main points of interest: calibration, registration and reconstruction. |
|
|
Bayesian Approach for Inverse Problems in Computer Vision
|
Title: |
Region Restricted EM Algorithm Based Color Image Segmentation
|
Author(s): |
Zhong Li, Jianping Fan and Mohand-Said Hacid |
Abstract: |
Automatic image segmentation is a fundamental and challenging task in image analysis. Region competition is a special case of the seeded region growing (SRG) method in which pixels can be re-labeled to different regions as the front line evolves. The region restricted EM (RREM) algorithm for Gaussian mixtures estimates the Gaussian parameters of each enclosed region by treating each of them as a single Gaussian model. The pixels on the contours are re-labeled in the E-step, and the parameters of the Gaussian mixtures are updated in the M-step. The contour evolution is further associated with a level set technique to maintain efficiency and regularity. The minimum description length (MDL) criterion associated with region competition is guaranteed to converge and achieves a local minimum. MDL also makes segmentation adaptive to the complexity of the image, which handles the overfitting problem nicely. By introducing split and merge operations, not only is convergence reached faster, but the chances of being trapped in local minima are greatly reduced. Experimental evaluation shows good performance of our technique on a relatively large variety of images. |
|
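For readers unfamiliar with the E-step/M-step alternation the abstract refers to, a plain (unrestricted) EM for a two-component 1-D Gaussian mixture can be sketched as follows; the RREM variant restricts re-labelling to contour pixels, which this generic sketch does not attempt:

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    """Plain two-component EM for a 1-D Gaussian mixture over the
    samples x.  Alternates posterior responsibilities (E-step) with
    weight/mean/variance re-estimation (M-step)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out init
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        d = x[:, None] - mu[None, :]
        lik = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```

On well-separated data the estimated means converge close to the true component means.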
Title: |
Bayesian Separation of Document Images with Hidden Markov Model
|
Author(s): |
Feng SU and Ali MOHAMMAD-DJAFARI |
Abstract: |
In this paper we consider the problem of separating document images from noisy linear mixtures in the Bayesian framework. The source image is modeled hierarchically by pixel intensities in multiple color channels and a latent variable representing the common classification of document objects among the different channels. A Potts Markov random field is used to model the local regularity of the classification variable inside object regions. Within the Bayesian approach, all unknowns, including the source, the classification, the mixing coefficients and the distribution parameters of these variables, are estimated from their posterior laws. The corresponding Bayesian computations are carried out by an MCMC sampling method. Results from experiments on synthetic and real data are presented to illustrate the performance of the proposed method. |
|
Title: |
ROBUST VARIATIONAL BAYESIAN KERNEL BASED BLIND IMAGE DECONVOLUTION
|
Author(s): |
Dimitris Tzikas, Aristidis Likas and Nikolaos Galatsanos |
Abstract: |
In this paper we present a new Bayesian model for the blind image deconvolution (BID) problem. This model has two main novelties. First, a sparse kernel-based representation of the point spread function (PSF) that allows, for the first time, estimation of both PSF shape and support. Second, a non-Gaussian heavy-tailed prior for the model noise that makes it robust to the large errors encountered in BID when little prior knowledge is available about both image and PSF. Sparseness and robustness are achieved by introducing Student-t priors for both the PSF and the noise. A variational methodology is proposed to solve this Bayesian model. Numerical experiments are presented with both real and simulated data that demonstrate the advantages of this model as compared to previous Gaussian-based ones.
|
|
Title: |
Variational Posterior Distribution Approximation in Bayesian Emission Tomography Reconstruction Using a Gamma Mixture prior
|
Author(s): |
Rafael Molina, Antonio López, Jose Manuel Martin and Aggelos Katsaggelos |
Abstract: |
Following the Bayesian framework, we propose a method to reconstruct emission tomography images which uses a gamma mixture prior and variational methods to approximate the posterior distribution of the unknown parameters and image, instead of estimating them by using Evidence Analysis or alternating between the estimation of parameters and image (the Iterated Conditional Modes (ICM) approach). By analyzing the posterior distribution approximation we can examine the quality of the proposed estimates. The method is tested on real Single Photon Emission Computed Tomography (SPECT) images. |
|
Title: |
Image Deconvolution using a Stochastic Differential Equation Approach
|
Author(s): |
Xavier Descombes, Marion Lebellego and Elena Zhizhina |
Abstract: |
We consider the problem of image deconvolution. We focus on a Bayesian approach which consists of maximizing an energy obtained by Markov Random Field modeling. MRFs are classically optimized by an MCMC sampler embedded in a simulated annealing scheme. In a previous work, we have shown that, in the context of image denoising, a diffusion process can outperform the MCMC approach in terms of computational time. Herein, we extend this approach to the case of deconvolution. We first study the case where the kernel is known. Then, we address myopic and blind deconvolution.
|
|
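For the known-kernel case with a quadratic (Gaussian) prior, the MAP energy described above has a closed-form minimiser in the Fourier domain; a minimal sketch of that baseline (illustrative only; the paper's diffusion sampler is needed precisely when the prior is not quadratic):

```python
import numpy as np

def deconvolve_known_kernel(y, h, lam=1e-2):
    """Closed-form minimiser of ||h * x - y||^2 + lam * ||x||^2 computed
    in the Fourier domain, assuming circular convolution.  `y` is the
    blurred image, `h` the known kernel, `lam` the regularisation
    weight trading data fit against smoothness."""
    Y = np.fft.fft2(y)
    H = np.fft.fft2(h, s=y.shape)
    X = np.conj(H) * Y / (np.abs(H)**2 + lam)   # regularised inverse filter
    return np.real(np.fft.ifft2(X))
```

With a small `lam`, re-blurring the estimate reproduces the observed image almost exactly; non-quadratic MRF priors have no such closed form, which motivates sampling-based optimization.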
|
Mathematical and Linguistic Techniques for Image Mining
|
Title: |
Meaning and Efficiency in the GESTALT-System
|
Author(s): |
Eckart Michaelsen, Leo Doktorski, Michael Arens, Patrik Ding and Uwe Stilla |
Abstract: |
Knowledge-based image recognition and description systems have to balance soundness against efficiency. For mining purposes in particular, a theory of meaning – i.e. a formal semantics or ontology – has to be given. However, efficiency is also most important for practicability. This contribution uses production systems. The semantics are analysed by means of confluence. Productions are used in an accumulating way instead of a reductive one. The interpretation scheme given allows reasoning to be interrupted at any time with an approximate interpretation as the result. It also allows straightforward parallelization. Two example systems using this interpreter are discussed. |
|
Title: |
Spatial Rank and Approximate Symmetries in sequential Reconstruction of Dense Packings
|
Author(s): |
Alexander Vinogradov |
Abstract: |
The general rotation group manifold is used as a base structure for representing k-point configuration clusters in a Hough-type parametric space. This makes it possible to efficiently introduce spatial ranks inside the k-point trial set and to arrange, in multiple dimensions, Parzen-like windows with properties analogous to those of the linear ones. As a result, asymptotically optimal dense packings of clusters are automatically produced for arbitrary spatial shapes via independent sequential trials. |
|
Title: |
LINGUISTIC SUPPORT OF THE KNOWLEDGE BASE FOR IMAGE ANALYSIS AND UNDERSTANDING SYSTEM
|
Author(s): |
Yulia Trusova, Igor Gurevich, Victor Beloozerov and Dmitri Murashov |
Abstract: |
The problem of lexical and semantic support of the knowledge base for a system for the automation of scientific research in image processing, analysis and understanding is discussed. The main contribution is the image analysis thesaurus, which has been developed as the main tool for solving this problem. The structure of the thesaurus and the functional characteristics of its basic version are described. Lexical categories of terms and relationships between terms in the domain of image processing, analysis and recognition are considered. The thesaurus was implemented as an autonomous program module. A description of the thesaurus module and its use is provided. The developed thesaurus was applied to the automation of early diagnosis of hematological diseases on the basis of cytological specimens. |
|
Title: |
THE DESCRIPTIVE TECHNIQUES FOR IMAGE ANALYSIS AND RECOGNITION
|
Author(s): |
Igor Gurevich |
Abstract: |
The presentation is devoted to research on the mathematical fundamentals of image analysis and recognition procedures. The final goal of this research is automated image mining: a) automated design, testing and adaptation of techniques and algorithms for image recognition, estimation and understanding; b) automated selection of techniques and algorithms for image recognition, estimation and understanding; c) automated testing of raw data quality and suitability for solving the image recognition problem. The main instrument is the Descriptive Approach to Image Analysis, which provides: 1) standardization of the representation of image analysis and recognition problems; 2) standardization of a descriptive language for image analysis and recognition procedures; 3) means to apply a common mathematical apparatus to operations over image analysis and recognition algorithms, and over image models. It is also shown how and where to link theoretical results in the foundations of image analysis with the techniques used to solve application problems. |
|
Title: |
NUCLEI IMAGES ANALYSIS: Technology, Diagnostic Features and Experimental Study
|
Author(s): |
Dmitry Murashov, Igor Gurevich, Ovidio Salvetti and Heinrich Niemann |
Abstract: |
An information technology for the automated morphologic analysis of cytological slides, taken from patients with lymphatic system tumours, was developed. The main contributions of the paper are the technology, the set of features for representing nuclei images in pattern recognition problems (automated diagnostics), and an experimental study of the technology and of the informativeness of the features. The main components of the technology are: acquisition of cytological slides, a method for segmenting nuclei in the cytological slides, synthesis of the feature-based nuclei description for subsequent classification, and nuclei image analysis based on pattern recognition and scale-space techniques. The experiments confirmed the efficiency of the developed technology. A discussion of the obtained results is given. The developed technology is implemented in a software system. |
|
Title: |
AN APPLICATION OF A DESCRIPTIVE IMAGE ALGEBRA FOR DIAGNOSTIC ANALYSIS OF CYTOLOGICAL SPECIMENS. An Algebraic Model and Experimental Study
|
Author(s): |
Vera Yashina, Irina Koryabkina, Heinrich Niemann, Ovidio Salvetti and Igor Gurevich |
Abstract: |
The paper is devoted to the representation of a model of an information technology for the automation of diagnostic analysis of cytological specimens of patients with lymphatic system tumors. The main contribution is the implementation of the model by algebraic means. The theoretical basis of the model is the Descriptive Approach to Image Analysis. The paper demonstrates a practical application of its algebraic tools – it is shown how to construct a model of a technology for the automation of diagnostic analysis of cytological specimens using Descriptive Image Algebras. |
|
Title: |
AUTOMATED COMBINED TECHNIQUE FOR SEGMENTING CYTOLOGICAL SPECIMEN IMAGES
|
Author(s): |
Dmitry Murashov |
Abstract: |
An automated snake-based combined technique for segmenting cytological images is proposed. The main features of the technique are: implementation of the wave propagation model and a modified Gaussian filter based on the heat equation with a heat source, availability of coarse and precise levels of contour approximation, and automated snake initiation. The technique is successfully applied to segmenting cytological specimen images. |
|
Title: |
Texture Based Image Indexing and Retrieval
|
Author(s): |
|
Abstract: |
Content-Based Image Retrieval (CBIR) has been an active research area: given a collection of images, the task is to retrieve images based on a query image, which is specified by content. The present method uses a new technique based on wavelet transforms by which a feature vector characterizing the texture of the images is constructed. Our method derives 10 feature vectors for each image, characterizing the texture of its sub-images, from only three iterations of wavelet transforms. The clustering method ROCK is modified and used to cluster groups of images based on the feature vectors of sub-images in the database, considering the minimum Euclidean distance. This modified ROCK is used to minimize the searching process. Our experiments were conducted on a variety of garment images, and successful matching results were obtained. |
|
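A single level of the Haar wavelet decomposition underlying such texture features can be sketched as follows (illustrative only; the paper uses three iterations and derives 10 feature vectors per image):

```python
import numpy as np

def haar_energy_features(img):
    """One-level 2-D Haar decomposition of a grayscale image; returns
    the mean absolute energy of the LL, LH, HL and HH subbands as a
    4-element texture feature vector."""
    a = np.asarray(img, dtype=float)
    a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]  # crop to even size
    # pairwise averages/differences along columns (horizontal filtering)...
    L = (a[:, 0::2] + a[:, 1::2]) / 2.0
    H = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # ...then along rows (vertical filtering), yielding four subbands
    LL = (L[0::2] + L[1::2]) / 2.0
    LH = (L[0::2] - L[1::2]) / 2.0
    HL = (H[0::2] + H[1::2]) / 2.0
    HH = (H[0::2] - H[1::2]) / 2.0
    return np.array([np.abs(s).mean() for s in (LL, LH, HL, HH)])
```

A flat image produces energy only in the LL band, while vertical stripes excite the HL band; comparing such vectors with a Euclidean distance is the matching step the abstract's clustering relies on.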
Title: |
NEW ADAPTIVE ALGORITHMS FOR EXTRACTING OPTIMAL FEATURES FROM GAUSSIAN DATA
|
Author(s): |
Youness Aliyari and Hamid Abrishami Moghaddam |
Abstract: |
In this paper, we present new adaptive learning algorithms to extract features from multidimensional Gaussian data while preserving class separability. For this purpose, we introduce new adaptive algorithms for the computation of the square root of the inverse covariance matrix. We prove the convergence of the adaptive algorithms by introducing the related cost function and discussing its properties and initial conditions. The adaptive nature of the new feature extraction method makes it appropriate for on-line signal processing and pattern recognition applications. Experimental results using two-class multidimensional Gaussian data demonstrate the effectiveness of the new adaptive feature extraction method. |
|
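The fixed point W C W = I characterises the inverse square root of a covariance matrix C; a batch stand-in for the sample-by-sample adaptive rule described in the abstract (a sketch under that assumption, not the authors' algorithm) might look like:

```python
import numpy as np

def inv_sqrt_covariance(C, eta=0.05, n_iter=2000):
    """Iteratively computes W ~= C^{-1/2} from the fixed point of
    W <- W + eta * (I - W @ C @ W); at convergence W C W = I, so W is
    the symmetric inverse square root used to whiten Gaussian data.
    Starting from a small multiple of I keeps all iterates symmetric
    and commuting with C."""
    d = C.shape[0]
    W = np.eye(d) / np.trace(C)
    I = np.eye(d)
    for _ in range(n_iter):
        W = W + eta * (I - W @ C @ W)
    return W
```

Applying the returned W to zero-mean samples drawn with covariance C yields approximately identity-covariance (whitened) data.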
Title: |
Deformable structures localization and reconstruction in 3D Images
|
Author(s): |
Davide Moroni, Sara Colantonio, Ovidio Salvetti and Mario Salvetti |
Abstract: |
Accurate reconstruction of deformable structures in image sequences is a fundamental task in many applications, ranging from forecasting by remote sensing to sophisticated medical imaging applications.
In this paper we report a novel automatic two-stage method for deformable structure reconstruction from 3D image sequences.
The first stage of the proposed method focuses on the automatic identification and localization of the deformable structures of interest by means of fuzzy clustering and temporal region tracking. The final segmentation is accomplished by a second processing stage, devoted to identifying finer details using a Multilevel Artificial Neural Network.
An application to the segmentation of the heart's left ventricle from MRI sequences is discussed. |
|
Title: |
TIME-ORIENTED MULTI-IMAGE CASE HISTORY – WAY TO THE “DISEASE IMAGE” ANALYSIS
|
Author(s): |
Nikita Shklovskiy-Kordi, Boris Zingerman and Saveli Goldberg |
Abstract: |
An Electronic Patient Record clinical database allows the creation of individual integrative medical record presentations. The system links different types of medical information for an individual patient on a real-time scale, forming a recognizable image, which can be described as a "disease image". Normalisation of parameters opens the future perspective of data processing based on case-to-case and case-to-cluster comparative and multivariate statistical analysis of the patient’s data.
|
|
|
The First International Workshop on Robot Vision
|
Title: |
Case-Based Indoor Navigation
|
Author(s): |
Giuseppe Sansonetti and Alessandro Micarelli |
Abstract: |
The purpose of this paper is to present a novel approach to the problem of autonomous robot navigation in a partially structured environment. The proposed solution is based on the ability of recognizing digital images that have been artificially obtained by applying a sensor fusion algorithm to ultrasonic sensor readings. Such images are classified in different categories using the well known Case-Based Reasoning (CBR) technique, as defined in the Artificial Intelligence domain. The architecture takes advantage of fuzzy theory for the construction of digital images, and wavelet functions for their analysis. |
|
Title: |
On board Camera Perception and Tracking of Vehicles
|
Author(s): |
Arturo de la Escalera, Juan Manuel Collado, Cristina Hilario and Jose Maria Armingol |
Abstract: |
In this paper a visual perception system for Intelligent Vehicles is presented. The goal of the system is to perceive the surroundings of the vehicle, looking for other vehicles. Depending on when and where they need to be detected (overtaking, at long range), the system analyses movement or uses a geometrical vehicle model to perceive them. Later, the vehicles are tracked. The algorithm takes into account the information of the road lanes in order to apply some geometric restrictions. Additionally, a multi-resolution approach is used to speed up the algorithm and work in real-time. Examples with real images are shown to validate the algorithm. |
|
Title: |
Uncalibrated visual odometry for ground plane motion without auto-calibration
|
Author(s): |
Simone Gasparini and Vincenzo Caglioti |
Abstract: |
In this paper we present a technique for visual odometry on the ground plane, based on a single, uncalibrated fixed camera mounted on a mobile robot. The odometric estimate is based on the observation of features (e.g., salient points) on the floor by means of the camera mounted on the mobile robot.
The presented odometric technique produces an estimate of the transformation between the ground plane prior to a displacement and the ground plane after the displacement. In addition, the technique estimates the homographic transformation between the ground plane and the image plane: this allows determining the 2D structure of the observed features on the ground. A method to estimate both transformations from the extracted points of two images is presented.
Preliminary experimental activities show the effectiveness and the accuracy of the proposed method which is able to handle both relatively large and small rotational displacements.
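The ground-plane-to-image homography at the heart of such a technique can be estimated from point correspondences with the standard DLT algorithm. The following sketch is illustrative only, not the authors' implementation; the function names are our own. It recovers the 3×3 homography from four or more matched points via the null vector of the DLT design matrix:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H with dst ~ H @ src (DLT algorithm).
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]
```

With noise-free correspondences the homography is recovered exactly up to scale, which is why the result is normalised by H[2, 2].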
|
|
Title: |
An Unsupervised Approach for Adaptive Color Segmentation
|
Author(s): |
Ulrich Kaufmann, Roland Reichle, Philipp Bear and Christof Hoppe |
Abstract: |
One of the key requirements of robotic vision systems for real-life
application is the ability to deal with varying lighting conditions. Many systems
rely on color-based object or feature detection using color segmentation. A static
approach based on preinitialized calibration data is not likely to perform very well
under natural light. In this paper we present an unsupervised approach for color
segmentation which is able to self-adapt to varying lighting conditions during
run-time. The approach comprises two steps: Initialization and iterative tracking
of color regions. Its applicability has been tested on vision systems of soccer
robots participating in RoboCup tournaments. |
|
Title: |
High Performance Realtime Vision for Mobile Robots on the GPU
|
Author(s): |
Christian Folkers and Wolfgang Ertel |
Abstract: |
We present a real time vision system designed for and implemented on a
graphics processing unit (GPU). After an introduction in GPU
programming we describe the architecture of the
system and software running on the GPU. We show the advantages of
implementing a vision processor on the GPU rather than on a CPU as
well as the shortcomings of this approach. Our performance
measurements show that the GPU-based vision system including color
segmentation, pattern
recognition and edge detection easily meets the
requirements for high resolution (1024×768) color image processing at
a rate of up to 50 frames per second. A CPU-based system on a mobile PC
on board a robot would under these constraints achieve only around twelve
frames per second. |
|
Title: |
A vision-based path planner/follower for an assistive robotics project
|
Author(s): |
Andrea Cherubini, Giuseppe Oriolo, Francesco Macri', Fabio Aloise, Febo Cincotti and Donatella Mattia |
Abstract: |
Assistive technology is an emerging area where robots can be used to
help individuals with motor disabilities achieve independence in
daily living activities. Mobile robots should be able to
autonomously and safely move in the environment (e.g. the user's
apartment), by accurately solving the self-localization problem and
planning efficient paths to the target destination specified by the
user. This paper presents a vision-based navigation scheme designed
for Sony AIBO, in ASPICE, an assistive robotics project. The
navigation scheme is map-based: visual landmarks (white lines and
coded squares) are placed in the environment, and the robot utilizes
visual data to follow the paths composed by these landmarks, and
travel to the required destinations. Performance of this
vision-based scheme is shown by experiments and comparison with the
two other existing ASPICE navigation modes. Finally, the system is
clinically validated, in order to obtain a
definitive assessment through patient feedback. |
|
Title: |
A Cognitive Robot Architecture Based on 3D Simulator of Robot and Environment
|
Author(s): |
Antonio Chella and Irene Macaluso |
Abstract: |
The paper proposes a robot architecture based on a comparison between the effective robot sensations and the expected sensations generated by a 3D robot/environment simulator. The robot perceptions are generated by the simulator driven by this comparison process. The architecture is operating in “Cicerobot”, a museum robot offering guided tours at the Archaeological Museum of Agrigento, Italy.
|
|
Title: |
Extraction of multi-modal object representations in a robot vision system
|
Author(s): |
Nicolas Pugeault, Emre Baseski, Dirk Kraft, Norbert Krueger and Florentin Woergoetter |
Abstract: |
We introduce one model in a cognitive system that learns object representations by active exploration. More specifically, we propose a feature tracking scheme that makes use of known motion from a robotic arm to 1) segment the object currently grasped from the rest of the scene, and 2) learn a representation of its 3D shape without any prior knowledge. The 3D representation is generated via stereo-reconstruction of local multi-modal edge features. The segmentation of the features belonging to the object from the features describing the rest of the scene is achieved using Bayesian inference. We show the extracted shape model for various objects. |
|
Title: |
Stereo Vision for Obstacle Detection: a Region-Based Approach
|
Author(s): |
|
Abstract: |
We propose a new approach to stereo matching for obstacle detection in the autonomous navigation framework. An accurate but slow reconstruction of the 3D scene is not needed; rather, it is more important to have a fast localization of the obstacles in order to avoid them. Methods in the literature based on point-wise stereo matching are ineffective in realistic contexts because they are either computationally too expensive, or unable to deal with uniform patterns or with perturbations between the left and right images. Our idea is to cast the stereo matching problem as a matching between homologous regions. Our method is strongly robust in realistic environments, requires little parameter tuning, and is adequately fast, as experimentally demonstrated in a comparison with the best algorithms in the literature. |
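As a toy illustration of region-based (rather than point-wise) matching, the following sketch matches regions of a rectified stereo pair by comparing simple region descriptors under an epipolar constraint on the region centroids. It is our own simplification, not the paper's algorithm; segmentation is assumed to be given as label images:

```python
import numpy as np

def region_descriptors(labels, image):
    """Per-region descriptor: (centroid_row, centroid_col, area, mean intensity)."""
    descs = {}
    for r in np.unique(labels):
        if r == 0:
            continue  # label 0 = unsegmented background
        ys, xs = np.nonzero(labels == r)
        descs[r] = np.array([ys.mean(), xs.mean(), len(ys), image[ys, xs].mean()])
    return descs

def match_regions(left, right, max_row_shift=2.0):
    """Greedy homologous-region matching; returns {left_id: (right_id, disparity)}.
    Matched regions must lie on nearly the same scanline band (rectified pair)."""
    matches = {}
    for lid, ld in left.items():
        best, best_cost = None, np.inf
        for rid, rd in right.items():
            if abs(ld[0] - rd[0]) > max_row_shift:
                continue  # epipolar constraint on region centroids
            # dissimilarity in area (relative) and mean intensity (normalised)
            cost = abs(ld[2] - rd[2]) / max(ld[2], rd[2]) + abs(ld[3] - rd[3]) / 255.0
            if cost < best_cost:
                best, best_cost = rid, cost
        if best is not None:
            matches[lid] = (best, ld[1] - right[best][1])
    return matches
```

Disparity is recovered per region as the horizontal offset between matched centroids, which is enough to localize obstacles without a dense point-wise reconstruction.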
|
Title: |
Person Following through Appearance Models and Stereo Vision using a Mobile Robot
|
Author(s): |
Luca Iocchi, G. Riccardo Leone and Daniele Calisi |
Abstract: |
Following a person is an important task for mobile service and domestic robots
in applications in which human-robot interaction is a primary requirement.
In this paper we present an approach that integrates appearance models and stereo vision
for efficient people tracking in domestic environments. Stereo vision helps in obtaining a
very good segmentation of the scene to detect a person during the automatic model acquisition
phase, and to determine the position of the target person in the environment.
A navigation module and a high level "person following" behavior are responsible for
performing the task in dynamic and cluttered environments.
Experimental results are provided to demonstrate the effectiveness of the proposed approach.
|
|
Title: |
Adaptive and Fast Scale Invariant Feature Extraction
|
Author(s): |
Primo Zingaretti and Emanuele Frontoni |
Abstract: |
The Scale Invariant Feature Transform, SIFT, has been successfully applied to robot vision, object recognition, motion estimation, etc. Still, its parameter settings are not fully investigated, especially when dealing with variable lighting conditions. In this work, we propose a SIFT improvement that allows feature extraction and matching between images taken under different illumination. An approach to reducing the SIFT computation time is also presented. Finally, results of robot vision based localization experiments using the proposed approach are presented. |
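One standard building block for matching SIFT-like descriptors under illumination change is unit-norm descriptor normalisation (which cancels a global intensity gain) followed by Lowe's nearest/second-nearest ratio test. The sketch below illustrates these two steps on generic descriptor arrays; it is not the modified SIFT proposed in the paper, and the function names are our own:

```python
import numpy as np

def normalize_descriptor(d, clip=0.2):
    """SIFT-style illumination normalisation: unit norm, clip, renormalise.
    A global illumination gain scales all entries equally and cancels out."""
    d = d / (np.linalg.norm(d) + 1e-12)
    d = np.minimum(d, clip)
    return d / (np.linalg.norm(d) + 1e-12)

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptor sets with Lowe's nearest/second-nearest ratio test.
    A match (i, j) is kept only when the best distance is clearly smaller
    than the runner-up, which filters ambiguous matches."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches
```

After normalisation, a descriptor and its gain-scaled counterpart are nearly identical, so the ratio test confidently keeps the correct correspondences.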
|
Title: |
Fourier Signature in Log-Polar Images
|
Author(s): |
Emanuele Menegatti, Enrico Grisan and Alberto Gasperin |
Abstract: |
In image-based robot navigation, the robot localises itself by comparing images taken at its current position with a set of reference images stored in its memory.
The problem is then reduced to finding a suitable metric to compare images, and to storing and comparing efficiently a set of images that grows quickly as the environment widens. The coupling of omnidirectional images with the Fourier signature has previously been proved to be a viable framework for the image-based localisation task, both with regard to data reduction and to image comparison. In this paper, we investigate the possibility of using a space-variant camera, with the photosensitive elements organised in a log-polar layout, thus resembling the organization of the primate retina. We show that an omnidirectional camera using this retinal sensor provides further data compression and excellent image comparison capability, even with very few components in the Fourier signature. |
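The Fourier signature itself is easy to illustrate: the per-row FFT magnitudes of an unwrapped panoramic image are invariant to horizontal shifts, i.e. to rotations of the robot about the camera axis. A minimal sketch (our own simplification, operating on a plain rectangular panorama rather than a log-polar sensor):

```python
import numpy as np

def fourier_signature(panorama, k=8):
    """Per-row FFT magnitudes of an unwrapped panoramic image, truncated
    to the first k coefficients. Magnitudes are invariant to a horizontal
    shift of the panorama, i.e. to a rotation about the camera axis."""
    spectrum = np.fft.fft(panorama, axis=1)
    return np.abs(spectrum[:, :k])

def signature_distance(sig_a, sig_b):
    """Euclidean distance between two signatures, used to rank reference images."""
    return np.linalg.norm(sig_a - sig_b)
```

Keeping only the first k coefficients per row is what gives the data compression: a 64-column panorama row collapses to k magnitudes while remaining comparable across robot headings.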
|
Title: |
Frame-frame matching for realtime consistent visual mapping
|
Author(s): |
Kurt Konolige and Motilal Agrawal |
Abstract: |
Many successful indoor mapping techniques
employ frame-to-frame matching of laser scans to produce
detailed local maps, as well as closing large loops. In this
paper, we propose a framework for applying the same
techniques to visual imagery, matching visual frames with
large numbers of point features. The relationship between
frames is kept as a nonlinear measurement, and can be used
to solve large loop closures quickly. Both monocular
(bearing-only) and binocular vision can be used to generate
matches. Other advantages of our system are that no special
landmark initialization is required, and large loops can be
solved very quickly. |
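The frame-to-frame relationship for matched point features can be illustrated with the classic least-squares rigid alignment (2D Kabsch algorithm). This is a generic sketch of frame matching, not the authors' nonlinear-measurement framework:

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares rigid motion (R, t) with dst ≈ R @ src + t (2D Kabsch).
    src, dst: (N, 2) arrays of matched point features from two frames."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Chaining such per-pair estimates gives local odometry; the residuals of the alignment can then serve as the (nonlinear) measurement when closing loops.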
|
|
Mathematical and Linguistic Techniques for Image Mining
|
Paper Nr.: |
192 |
Title: |
MULTITASK LEARNING - An Application to Incremental Face Recognition |
Author(s): |
David Masip, Àgata Lapedriza and Jordi Vitrià |
Abstract: |
Face classification applications usually suffer from two important problems: the number of training samples from each class is small, and the final system usually must be extended to incorporate new people to recognize. In this paper we introduce a face recognition method that extends a previous boosting-based classifier by adding new classes while avoiding the need to retrain the system each time a new person joins. The classifier is trained using the multitask learning principle, and multiple verification tasks are trained together sharing the same feature space. New classes are added taking advantage of the previously learned structure, so that adding classes is not computationally demanding. Our experiments with two different data sets show that the performance does not decrease drastically even when the number of classes of the base problem is multiplied by a factor of 8. |
|
Paper Nr.: |
418 |
Title: |
AN ONLINE SELF-BALANCING BINARY SEARCH TREE FOR HIERARCHICAL SHAPE MATCHING |
Author(s): |
N. Tsapanos, A. Tefas and I. Pitas |
Abstract: |
In this paper we propose a self-balancing binary search tree data structure for shape matching. It was originally developed as a fast method for silhouette matching in videos recorded by firemen with IR cameras during rescue operations. We introduce a similarity measure with which we can make decisions on how to traverse the tree and backtrack to find further possible matches. We then describe every basic operation of a binary search tree adapted to a tree of shapes. As in any balanced binary search tree, all operations can be performed in O(log n) time and are very fast and efficient. Finally, we present experimental data evaluating the performance of the proposed data structure. |
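The traversal-with-backtracking idea can be sketched with a binary search tree over scalar shape keys, where the search also descends into the other subtree whenever the query lies within a tolerance of the split key. Balancing rotations and the paper's similarity measure are omitted for brevity; the class names and the scalar-key simplification are our own:

```python
class ShapeNode:
    def __init__(self, key, shape_id):
        self.key, self.shape_id = key, shape_id
        self.left = self.right = None

class ShapeTree:
    """BST over scalar shape descriptors (e.g. a moment-based key).
    Search backtracks into the other subtree whenever the query is within
    `tol` of a split key, so near-matches on either side are found.
    Self-balancing rotations are omitted for brevity."""
    def __init__(self, tol=0.05):
        self.root, self.tol = None, tol

    def insert(self, key, shape_id):
        node = ShapeNode(key, shape_id)
        if self.root is None:
            self.root = node
            return
        cur = self.root
        while True:
            if key < cur.key:
                if cur.left is None:
                    cur.left = node; return
                cur = cur.left
            else:
                if cur.right is None:
                    cur.right = node; return
                cur = cur.right

    def search(self, key):
        """Return all shape ids whose key is within tol of the query."""
        hits, stack = [], [self.root]
        while stack:
            cur = stack.pop()
            if cur is None:
                continue
            if abs(cur.key - key) <= self.tol:
                hits.append(cur.shape_id)
            if key - self.tol < cur.key:   # left subtree may hold matches
                stack.append(cur.left)
            if key + self.tol > cur.key:   # right subtree may hold matches
                stack.append(cur.right)
        return hits
```

When the tree is kept balanced, each search visits O(log n) nodes plus only the subtrees that can actually contain near-matches.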
|
Paper Nr.: |
444 |
Title: |
CONTINUOUS LEARNING OF SIMPLE VISUAL CONCEPTS USING INCREMENTAL KERNEL DENSITY ESTIMATION |
Author(s): |
Danijel Skočaj, Matej Kristan and Aleš Leonardis |
Abstract: |
In this paper we propose a method for continuous learning of simple
visual concepts. The method continuously associates words
describing observed scenes with automatically extracted visual
features. Since in our setting every sample is labelled with
multiple concept labels, and there are no negative examples,
reconstructive representations of the incoming data are used. The
associated features are modelled with kernel density probability
distribution estimates, which are built incrementally. The proposed
approach is applied to the learning of object properties and spatial
relations. |
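An incremental kernel density estimate in its simplest form just accumulates one kernel per observed sample. The sketch below shows this baseline in 1D; the on-line compression of the estimate used in the paper is not reproduced here, and the class name and bandwidth are our assumptions:

```python
import numpy as np

class IncrementalKDE:
    """1-D kernel density estimate built sample by sample.
    Each labelled observation adds one Gaussian kernel; nothing is
    re-estimated, so learning is strictly incremental."""
    def __init__(self, bandwidth=0.1):
        self.samples = []
        self.h = bandwidth

    def observe(self, x):
        self.samples.append(x)

    def density(self, x):
        """Evaluate the current density estimate at x."""
        if not self.samples:
            return 0.0
        s = np.asarray(self.samples)
        z = (x - s) / self.h
        return np.exp(-0.5 * z * z).sum() / (len(s) * self.h * np.sqrt(2 * np.pi))
```

Associating a concept label with such an estimate lets the system score how well a new observation fits a previously learned property, with no negative examples required.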
|
Paper Nr.: |
447 |
Title: |
ONLINE LEARNING OF GAUSSIAN MIXTURE MODELS - A Two-Level Approach |
Author(s): |
Arnaud Declercq and Justus H. Piater |
Abstract: |
We present a method for incrementally learning mixture models that avoids the necessity of keeping all data points around. It contains a single user-settable parameter that controls, via a novel statistical criterion, the trade-off between the number of mixture components and the accuracy of representing the data. A key idea is that each component of the (non-overfitting) mixture is in turn represented by an underlying mixture that represents the data very precisely (without regard to overfitting); this allows the model to be refined without sacrificing accuracy. |
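The incremental moment-matching update that avoids keeping data points around can be sketched in 1D: each sample either updates its nearest Gaussian component or spawns a new one. The `gate` parameter below stands in for the paper's statistical criterion, and the two-level refinement structure is omitted; this is our own simplification:

```python
import math

class OnlineGMM1D:
    """Incremental 1-D Gaussian mixture: each point either updates its
    nearest component by moment matching or spawns a new component.
    The single parameter `gate` (in standard deviations) replaces the
    paper's statistical trade-off criterion; the fine-grained second
    level that preserves accuracy during refinement is omitted."""
    def __init__(self, gate=2.5, init_var=0.01):
        self.components = []   # list of [n, mean, var]
        self.gate = gate
        self.init_var = init_var

    def observe(self, x):
        best, best_z = None, float('inf')
        for c in self.components:
            z = abs(x - c[1]) / math.sqrt(c[2])
            if z < best_z:
                best, best_z = c, z
        if best is None or best_z > self.gate:
            # no component explains x: spawn a new one
            self.components.append([1, x, self.init_var])
            return
        n, mu, var = best
        n += 1
        delta = x - mu
        mu += delta / n
        var += (delta * (x - mu) - var) / n   # Welford-style moment update
        best[:] = [n, mu, var]
```

Only per-component counts, means and variances are stored, so memory stays constant regardless of how many points have been observed.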
|
Paper Nr.: |
454 |
Title: |
TIME DEPENDENT ON-LINE BOOSTING FOR ROBUST BACKGROUND MODELING |
Author(s): |
Helmut Grabner, Christian Leistner and Horst Bischof |
Abstract: |
In modern video surveillance systems, change and outlier detection is of the highest interest. Most of these systems are based on standard pixel-by-pixel background modeling approaches. In this paper, we propose a novel robust block-based background model suitable for outlier detection, using an extension of on-line boosting for feature selection. In order to be robust and still easy to operate, our system incorporates several novelties in both previously proposed on-line boosting algorithms and classifier-based background modeling systems. We introduce time-dependency and control into on-line boosting. Our system automatically adjusts its temporal behavior to the underlying scene by using a control system which regulates the model parameters. The benefits of our approach are illustrated in several experiments on challenging standard datasets. |
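The block-based, time-controlled flavour of such a background model can be sketched with plain running statistics per block: the learning rate plays the role of the temporal control, and a block is declared foreground when it deviates by more than k running standard deviations. The boosted feature selection of the paper is replaced here by simple block means; all names and thresholds are our assumptions:

```python
import numpy as np

class BlockBackgroundModel:
    """Block-based background model with a controllable update rate.
    Each block keeps a running mean and variance of its average intensity;
    a block is flagged as foreground when the current frame deviates by
    more than `k` running standard deviations. Only background blocks
    adapt, so foreground objects do not bleed into the model."""
    def __init__(self, shape, block=8, rate=0.05, k=3.0):
        self.block, self.rate, self.k = block, rate, k
        self.h, self.w = shape[0] // block, shape[1] // block
        self.mean = None
        self.var = None

    def _block_means(self, frame):
        b = self.block
        return frame[:self.h * b, :self.w * b] \
            .reshape(self.h, b, self.w, b).mean(axis=(1, 3))

    def update(self, frame):
        m = self._block_means(frame)
        if self.mean is None:
            # assumed prior noise variance of 25 (std 5 intensity units)
            self.mean, self.var = m.copy(), np.full(m.shape, 25.0)
            return np.zeros(m.shape, bool)
        d2 = (m - self.mean) ** 2
        foreground = d2 > self.k ** 2 * self.var
        bg = ~foreground
        r = self.rate
        self.mean[bg] += r * (m[bg] - self.mean[bg])
        self.var[bg] += r * (d2[bg] - self.var[bg])
        return foreground
```

Raising `rate` makes the model forget faster (useful for quickly changing scenes); lowering it makes slow-moving objects stand out longer, which is the temporal behavior a controller can regulate.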
|
|
|
|