TP-2010-01: 3D Facial Landmark Detection & Face Registration (PDF)
Facial Landmark Annotation Files (zip)
Selected Paper Abstracts
Chaikalis D., G. Passalis, N. Sgouros, D. Maroulis, and T. Theoharis, ‘Near Real-Time 3D Reconstruction from InIm Video Stream’, in Image Analysis and Recognition, 5th International Conference, ICIAR 2008, Póvoa de Varzim, Portugal, June 25-27, 2008, Proceedings. Also LNCS 5112. (draft PDF)
An efficient hardware architecture for the acceleration of an integrated 3D reconstruction method is presented, targeting demanding dynamic Integral Imaging applications. It exploits parallel processing and features minimized memory operations by implementing an extended-access memory scheme. Its reduced data throughput, thanks to optimized data utilization, makes it suitable for single FPGA device implementation. Results reveal that the hardware system outperforms the software method by an order of magnitude. Moreover, its processing rate surpasses the typical rate of a dynamic Integral Imaging acquisition system, thus making a significant step towards real-time 3D video reconstruction.
A novel method for
the reconstruction of 3D shape and texture from Integral Photography
(IP) images is presented. Sharing the same principles with
stereoscopic-based object reconstruction, it offers increased
robustness to noise and occlusions due to the unique characteristics of
IP images. A coarse to fine approach is used, employing a novel grid
refinement step in order to increase the quality of the reconstructed
objects. The proposed method’s unique properties include configurable
depth accuracy and direct and seamless triangulation. We evaluate our
method using synthetic data from a computer simulated IP setup as well
as real data from a simple yet effective digital IP setup. Experiments
show reconstructed objects of high quality indicating that IP can be a
competitive modality for 3D object reconstruction.
Mavridis, G. Papaioannou, G. Passalis, T. Theoharis, ‘3D Object Repair Algorithms’, Proc. International Conference on Computational Science (ICCS 2006), LNCS, Springer.
A number of algorithms have been proposed to solve the problem of patching surfaces to rectify and extrapolate missing information due to model problems or bad geometry visibility during data capture. On the other hand, a number of similar yet simpler and more robust techniques apply to 2D image data
and are used for texture restoration. In this paper we make an attempt
to bring these two-dimensional techniques to the 3D domain due to their
obvious advantage of simplicity and controllability. Creating a depth
image with the help of a voxelisation algorithm will allow us to apply
a variety of image repair algorithms in order to mend a 3D object. The
use of three variations of the texture synthesis algorithm is
investigated. Constrained texture synthesis and its variations using
the Haar wavelet and image decomposition methods are also proposed in
order to preserve patterns appearing on the object while trying to
maintain its geometry intact.
Passalis G., I. Kakadiaris, T. Theoharis, ‘Efficient Hardware Voxelization’, Computer Graphics International 2004, Crete, June 2004. (draft PDF)
A novel algorithm for the voxelization of surface models of
arbitrary topology is presented. The algorithm uses the depth and
stencil buffers, available in most commercial graphics hardware, to
achieve high performance. It is suitable for polygonal meshes and
parametric surfaces. Experiments highlight the advantages and
limitations of our approach.
Platis N., T. Theoharis, ‘Simplification of Vector Fields over Tetrahedral Meshes’, Computer Graphics International 2004, Crete, June 2004, pp. 174-181. (draft PDF)
Vector fields produced by experiments or
simulations are usually extremely dense, which makes their manipulation
and visualization cumbersome. Often, such fields can be simplified
without much loss of information. A simplification method for 3D vector
fields defined over tetrahedral meshes is presented. The underlying
tetrahedral mesh is progressively simplified by successive half-edge
collapses. The order of collapses is determined by a compound metric
which takes into account the field and domain error incurred as well as
the quality of the resulting mesh. Special
attention is given to the preservation of the mesh boundary and of critical points on the vector field. A tool has been developed for the measurement of the difference between two vector fields over tetrahedral meshes, and it is used to quantify the simplification error.
Platis N. & T. Theoharis, “Fast Ray-Tetrahedron Intersection Using Plücker Coordinates”, Journal of Graphics Tools, 8(4), 2004, pp. 37-48.
We present an algorithm for ray-tetrahedron intersection. The algorithm uses Plücker coordinates to represent the ray and the edges of the tetrahedron and employs a robust and efficient test to determine the intersection. The algorithm is highly optimized and provides a significant performance increase over related algorithms.
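The core sign test can be sketched as follows. This is a generic line-triangle version of the Plücker test (applied per face of a tetrahedron), not the authors' optimized implementation; the helper names and the barycentric recovery of the hit point are illustrative.

```python
import numpy as np

def plucker(p, q):
    """Plücker coordinates (direction, moment) of the line through p then q."""
    return q - p, np.cross(p, q)

def side(l1, l2):
    """Permuted inner product; its sign tells on which side line l1 passes line l2."""
    d1, m1 = l1
    d2, m2 = l2
    return np.dot(d1, m2) + np.dot(d2, m1)

def line_hits_triangle(origin, direction, v0, v1, v2, eps=1e-12):
    """Intersection point of the line with triangle (v0, v1, v2), or None.
    The line crosses the triangle iff the three edge sign tests agree."""
    ray = (direction, np.cross(origin, origin + direction))
    # The weight of each vertex is the side product with the opposite edge.
    w0 = side(ray, plucker(v1, v2))
    w1 = side(ray, plucker(v2, v0))
    w2 = side(ray, plucker(v0, v1))
    s = w0 + w1 + w2
    if abs(s) < eps:                      # line parallel to, or in, the plane
        return None
    if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
        return (w0 * v0 + w1 * v1 + w2 * v2) / s   # barycentric combination
    return None
```

For a tetrahedron, running this test on its four faces yields the (at most two) entry and exit points of the ray.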
Drakopoulos V., N. Mimikou & T. Theoharis, “An Overview of Parallel Visualization Methods for Mandelbrot and Julia Sets”, Computers & Graphics, 27, 2003, pp. 635-646.
We present a comparative study of simple parallelisation schemes for the most widely used methods for the graphical representation of Mandelbrot and Julia sets. The compared methods render the actual attractor or its complement.
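The simplest of these parallelisation schemes, a static row decomposition of the escape-time method, can be sketched as below. The thread pool stands in for whatever processor farm is available, and the viewport bounds and iteration limit are illustrative choices, not the paper's configuration.

```python
from concurrent.futures import ThreadPoolExecutor

def mandelbrot_row(y, width, height, max_iter):
    """Escape-time iteration counts for one image row of the
    viewport [-2, 1] x [-1.5, 1.5]."""
    ci = -1.5 + 3.0 * y / height
    row = []
    for x in range(width):
        c = complex(-2.0 + 3.0 * x / width, ci)
        z, n = 0j, 0
        while abs(z) <= 2.0 and n < max_iter:
            z = z * z + c
            n += 1
        row.append(n)
    return row

def mandelbrot(width, height, max_iter=50):
    # Rows are independent, so the image is embarrassingly parallel:
    # each row is handed to a worker as one task.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(mandelbrot_row, y, width, height, max_iter)
                   for y in range(height)]
        return [f.result() for f in futures]
```

Points inside the set exhaust `max_iter`; points that escape record how quickly they did so, which is what gets colour-mapped.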
Platis N. & T. Theoharis, “Progressive Hulls for Intersection Applications”, Computer Graphics Forum, 22(2), 2003, pp. 107-116.
Progressive meshes are an established tool for triangle mesh simplification. By suitably adapting the simplification process, progressive hulls can be generated which enclose the original mesh in gradually simpler, nested meshes. We couple progressive hulls with a selective refinement framework and use them in applications involving intersection queries on the mesh. We demonstrate that selectively refinable progressive hulls considerably speed up intersection queries by efficiently locating intersection points on the mesh. Concerning the progressive hull
construction, we propose a new formula for assigning edge collapse priorities that significantly accelerates the simplification process, and enhance the existing algorithm with several conditions aimed at producing higher quality hulls. Using progressive hulls has the added advantage that they can be used instead of the enclosed object when a lower resolution of display can be tolerated, thus speeding up the rendering process.
Karabassi E.A., G. Papaioannou, C. Fretzagias & T. Theoharis, “Exploiting Multiresolution Models to Accelerate Ray Tracing”, Computers & Graphics, 27, 2003, pp. 91-98. (draft PDF)
It is shown how multiresolution models can be exploited in order to improve the efficiency of the ray tracing process without significant image deterioration. To this effect a set of criteria are established which determine the level of detail (LOD) model that should be used for each ray-object intersection. These criteria include the distance from the observer and the amount of distortion to which a ray has been subjected before hitting the object. The resulting images are indistinguishable to the naked eye from those obtained using the maximum resolution models but require significantly less time to compute. Our method can be used in conjunction with previous ray tracing acceleration methods.
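The two criteria named above (observer distance and accumulated ray distortion) can be sketched as a simple level picker. The switching threshold and the linear bounce penalty are hypothetical parameters for illustration, not the paper's exact criteria.

```python
def pick_lod(num_lods, distance, lod_switch_dist, bounces, distortion_penalty=1):
    """Choose a level-of-detail index for a ray-object intersection.
    Index 0 is the full-resolution model; higher indices are coarser.
    A ray that has travelled far from the observer, or has already been
    reflected/refracted several times, tolerates a coarser model
    without visible error in the final image."""
    level = int(distance // lod_switch_dist) + distortion_penalty * bounces
    return min(level, num_lods - 1)
```

Primary rays near the camera thus hit the full model, while distant or heavily distorted secondary rays intersect cheaper approximations.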
Papaioannou G. & T. Theoharis, "Fast Fragment Assemblage Using Boundary Line and Surface Matching", in Applications of Computer Vision in Archaeology, within IEEE Computer Vision and Pattern Recognition 2003, Madison, WI, June 2003. (draft PDF)
In the recent past, fragment matching has been treated in two different approaches, one using curve matching methods and one that compares whole surfaces or volumes, depending on the nature of the broken artefacts. Presented here is a fast, unified method that combines curve matching techniques with a surface matching algorithm to estimate the positioning and respective matching error for the joining of three-dimensional fragmented objects. Combining both aspects of fragment matching essentially eliminates most of the ambiguities present in each of the matching problem categories and helps provide more accurate results with low computational cost.
Papaioannou G., E.A. Karabassi & T. Theoharis, “Reconstruction of Three-Dimensional Objects through Matching of their Parts” , IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 24(1), January 2002, pp. 114-124. (draft PDF)
The problem of reassembling an object from its parts or fragments has never been addressed with a unified computational approach, which depends on the pure geometric form of the parts and not application-specific features. We propose a method for the automatic reconstruction of a model based on the geometry of its parts, which may be computer-generated models or range-scanned models. The matching process can benefit from any other external constraint imposed by the specific application.
Papaioannou G., E.A. Karabassi & T. Theoharis, "Virtual Archaeologist: Assembling the Past", IEEE Computer Graphics and Applications, 21(2), 2001, pp. 53-59. (draft PDF)
A semi-automatic system for the reconstruction of archaeological finds from their fragments is presented. Virtual Archaeologist uses computer graphics to calculate a measure of matching between scanned data and employs optimization algorithms to estimate the correct relative pose between fragments and cluster those fragments that belong to the same entity.
Theoharis T., G. Papaioannou & E.A. Karabassi, "The Magic of the Z-Buffer: A Survey", WSCG 2001, Pilsen, CZ, February 2001, pp. 379-386. (draft PDF)
The wide availability of hardwired Z-buffers has sparked an explosion in the number of applications of the algorithm, whose origins lie in hidden surface elimination. This paper presents a much-needed survey of key applications of the Z-buffer from the fields of rendering, modelling and vision, in a common notation, in order to help users make better use of this resource.
Simiakakis G., T. Theoharis & A.M. Day, "Parallel Adaptive Ray Tracing with 5D Adaptive Subdivision", short paper in WSCG 2001 - Int. Conf. on Graphics, Visualization & Computer Vision, 2001. (draft PDF)
We present strategies for parallelising ray tracing based on 5D adaptive subdivision. Our goals are to obtain good speed-up and to efficiently balance the load between the processors while minimising the required memory per processor.
Polygonal models are the most common representation of structured 3D data in computer graphics, recognition and machine vision. The method presented here automatically identifies and labels all compact surface regions of a polygonal mesh, closed or not, and extracts valuable invariant features regarding their attributes. A method that is independent of the mesh topology is also presented for surface bumpiness estimation and the identification of coarse surface features.
Papaioannou G., E.A. Karabassi & T. Theoharis, "Automatic Reconstruction of Archaeological Finds - A Graphics Approach", Proc. 4th Intern. Conf. on Computer Graphics and Artificial Intelligence.
The reconstruction of archaeological finds from fragments is a tedious task requiring much work by the archaeologists and reconstruction personnel. Up to now, computers have significantly simplified this work by providing tools for data storage, classification and, in some cases, visualization. In this paper we go one step further by presenting a semi-automatic procedure for the full reconstruction of the original objects using computer graphics and artificial intelligence algorithms on geometrical information.
Karabassi E.A., Papaioannou G. & T. Theoharis, "A Fast Depth-Buffer-Based Voxelization Algorithm", ACM Journal of Graphics Tools, 4(4), 1999, pp. 5-10. (draft PDF)
This paper presents a fast and easy-to-implement voxelization algorithm, which is based on the depth buffer. Unlike most existing methods, our approach is suitable both for polygonal and analytical objects. The efficiency of the method is independent of the object complexity and can be accelerated by taking advantage of widely available graphics hardware.
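The depth-buffer idea can be illustrated in software for an analytical object: two depth maps (nearest and farthest surface per pixel column, i.e. the two depth-buffer passes) bound the solid, and every voxel between them is filled. The sphere and the resolution are illustrative; the actual method uses graphics-hardware depth buffers rather than this numpy analogue.

```python
import numpy as np

def voxelize_sphere(n, radius=0.8):
    """Depth-buffer-style voxelization of an analytic sphere on an
    n x n x n grid spanning [-1, 1]^3."""
    c = (np.arange(n) + 0.5) / n * 2.0 - 1.0        # voxel-centre coordinates
    x, y = np.meshgrid(c, c, indexing="ij")
    r2 = radius**2 - x**2 - y**2                    # squared half-chord per column
    hit = r2 > 0
    half = np.sqrt(np.maximum(r2, 0.0))
    zfront = np.where(hit, -half, np.inf)           # "near" depth map
    zback = np.where(hit, half, -np.inf)            # "far" depth map
    z = c[None, None, :]
    # A voxel is solid iff its centre lies between the two depth maps.
    return (z >= zfront[..., None]) & (z <= zback[..., None])
```

For polygonal input the two depth maps would simply be rendered instead of computed analytically, which is what makes the approach object-complexity independent.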
Karabassi E.A., Papaioannou G. & T. Theoharis, "Intersection Test for Collision Detection in Particle Systems", ACM Journal of Graphics Tools, 4(1), 1999, pp. 25-37. (draft PDF)
We present a method for detecting collisions between a system of spherical particles and an environment composed of triangles. The proposed algorithm takes into account each particle's volume and is based on an intersection test between a triangle and a cylinder with spherical caps (the trajectory of a particle). The test efficiently calculates a close estimate of the point of intersection, necessary for collision detection.
Papaioannou G., T. Theoharis & A. Boehm, "A Texture Controller", The Visual Computer, 14(10), 1998, pp. 488-496.
A method for creating accurate and controllable textures is proposed. In the past, textures have been used to cover every point of an object uniformly, and their behaviour could not be controlled locally, as is frequently needed. To provide local control, texture-attribute control points are inserted in the model and the behaviour of the texture at every point is defined by interpolation of the control-point attributes. The proposed texturing method behaves as a texture controller and can be applied to any kind of texture.
Agathos A., T. Theoharis & A. Boehm, "Efficient Integer Algorithms for the Generation of Conic Sections", Computers & Graphics, 22(5), 1998, pp. 621-628. (draft PDF)
8-connected algorithms for the fast generation of conic sections whose axes are aligned to the coordinate axes are described, based on a Bresenham-like methodology. Performance results show that in the case of the ellipse, our algorithm is at least as fast as other known integer algorithms but requires a lower integer range and always performs correct region transitions. Antialiasing is easily incorporated.
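The flavour of such Bresenham-like methods can be seen in the standard two-region midpoint ellipse scan sketched below; this is the textbook formulation, not the authors' algorithm, and the decision variables are scaled by 4 so that all arithmetic stays integral.

```python
def midpoint_ellipse(a, b):
    """First-quadrant pixels of the axis-aligned ellipse
    x^2/a^2 + y^2/b^2 = 1, using only integer arithmetic."""
    a2, b2 = a * a, b * b
    pts = []
    x, y = 0, b
    # Region 1 (|slope| < 1): step E or SE.  d = 4*f(x+1, y-1/2).
    d = 4 * b2 - 4 * a2 * b + a2
    while a2 * (2 * y - 1) > b2 * (2 * x + 2):
        pts.append((x, y))
        if d < 0:                                   # midpoint inside: step E
            d += 4 * b2 * (2 * x + 3)
        else:                                       # midpoint outside: step SE
            d += 4 * b2 * (2 * x + 3) - 8 * a2 * (y - 1)
            y -= 1
        x += 1
    # Region 2 (|slope| >= 1): step S or SE.  d = 4*f(x+1/2, y-1).
    d = b2 * (2 * x + 1) ** 2 + 4 * a2 * (y - 1) ** 2 - 4 * a2 * b2
    while y >= 0:
        pts.append((x, y))
        if d < 0:                                   # step SE
            d += 8 * b2 * (x + 1) + 4 * a2 * (3 - 2 * y)
            x += 1
        else:                                       # step S
            d += 4 * a2 * (3 - 2 * y)
        y -= 1
    return pts
```

The other three quadrants follow by symmetry; the explicit region switch is exactly the "region transition" that the integer formulation must get right.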
Theoharis T., A.R.L. Travis & N.E. Wiseman, "3D Display: Synthetic Image Generation and Visual Effect Simulation", Computer Graphics Forum, 9, 1990, pp. 337-348. (draft PDF)
Two viewing models are presented for a promising type of three dimensional display; they are based on parallel oblique and perspective oblique projections respectively. A detailed simulation compares the quality of the images that will be produced by each projection type and points out potential problems. The time necessary to alter the image of the three dimensional display will be comparable to that of conventional displays by the use of a simple parallel processing scheme.
Theoharis T., "Algorithms for Parallel Polygon Rendering", Lecture Notes in Computer Science (Springer-Verlag), #373, 1989.
An in-depth analysis of the use of general purpose parallel processors for the speedup of basic graphics algorithms. Account is taken of both SIMD and MIMD architectures, as well as their combination. Algorithms considered include polygon rendering and clipping. It is shown how the polygon rendering operations of filling and hidden surface elimination can be efficiently distributed among the SIMD and MIMD parts of the Disputer, a parallel machine developed at the Oxford University Programming Research Group that incorporates both modes of parallelism. Apart from being a useful graphics application, this illustrates some of the issues involved in programming the Disputer.
Theoharis T. & I. Page, "Two Parallel Methods for Polygon Clipping", Computer Graphics Forum, 8, 1989, pp. 107-114. (draft PDF)
Implementations of the well-known Sutherland-Hodgman polygon clipping algorithm are given for the SIMD and the MIMD (Transputer) architectures respectively. The two implementations provide a comparison that is interesting because the two architectures are typical of their respective architectural classes.
An algorithm for the evaluation of a linear polynomial function using the method of differences is given. An implementation on a general purpose SIMD array is presented and exploited for the implementation of certain important graphics operations. The required resources and the achieved speedup and efficiency are discussed.
Theoharis T., "Exploiting Parallelism in the Graphics Pipeline", Technical Monograph PRG-54, Oxford University Programming Research Group.
This monograph describes the construction of a parallel processing pipeline for the rapid rendering of polygonal models in computer graphics. It includes an analysis of the cost of each stage, which results in a proposal for the amount of internal parallelism that is required for each pipeline stage. Formal specifications in Z and CSP are given as well as an Occam implementation.
We present a novel 3D shape descriptor that uses a set of panoramic views of a 3D object which describe the position and orientation of the object’s surface in 3D space. We obtain a panoramic view of a 3D object by projecting it to the lateral surface of a cylinder parallel to one of its three principal axes and centered at the centroid of the object. The object is projected to three perpendicular cylinders, each one aligned with one of its principal axes in order to capture the global shape of the object. For each projection we compute the corresponding 2D Discrete Fourier Transform as well as 2D Discrete Wavelet Transform. We further increase the retrieval performance by employing a local (unsupervised) relevance feedback technique that shifts the descriptor of an object closer to its cluster centroid in feature space. The effectiveness of the proposed 3D object retrieval methodology is demonstrated via an extensive consistent evaluation in standard benchmarks that clearly shows better performance against state-of-the-art 3D object retrieval methods.
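The centroid-shift step of such a local relevance-feedback scheme can be sketched as below. Treating the k nearest neighbours as the presumed cluster, and the mixing weight `alpha`, are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

def local_relevance_feedback(descriptors, query_idx, k=5, alpha=0.3):
    """Shift one descriptor toward the centroid of its k nearest
    neighbours in feature space (an unsupervised cluster estimate),
    tightening clusters and thereby improving retrieval."""
    d = descriptors[query_idx]
    dists = np.linalg.norm(descriptors - d, axis=1)
    nn = np.argsort(dists)[:k]                  # includes the query itself
    centroid = descriptors[nn].mean(axis=0)
    return (1 - alpha) * d + alpha * centroid   # partial shift, not a snap
```

Because the shift is unsupervised, it needs no user interaction: every descriptor in the database can be refined offline before querying.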
Papadakis P., I. Pratikakis, T. Theoharis, G. Passalis and S. Perantonis, ‘3D Object Retrieval using an Efficient and Compact Hybrid Shape Descriptor’, Eurographics 2008 Workshop on 3D Object Retrieval, April 2008, pp. 9-16. (draft PDF)
We present a novel 3D object retrieval method that relies upon a hybrid descriptor which is composed of 2D features based on depth buffers and 3D features based on spherical harmonics. To compensate for rotation, two alignment methods, namely CPCA and NPCA, are used while compactness is supported via scalar feature quantization to a set of values that is further compressed using Huffman coding. The superior performance of the proposed retrieval methodology is demonstrated through an extensive comparison against state-of-the-art methodson standard datasets.
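The compactness pipeline (scalar quantization of the descriptor followed by Huffman coding) can be sketched generically. The number of quantization levels and the heap-based code construction below are illustrative, not the paper's exact codec.

```python
import heapq
from collections import Counter

def quantize(features, levels=8):
    """Uniform scalar quantization of descriptor values in [0, 1]."""
    return [min(int(f * levels), levels - 1) for f in features]

def huffman_code(symbols):
    """Build a prefix code from symbol frequencies: the more frequent a
    quantization level, the shorter its codeword."""
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    if len(heap) == 1:                          # degenerate single-symbol case
        return {next(iter(heap[0][2])): "0"}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)         # two least frequent subtrees
        f2, i, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))
    return heap[0][2]
```

Encoding a descriptor is then just quantization followed by codeword concatenation, which is what makes the stored metadata compact.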
Papadakis P., I. Pratikakis, T. Trafalis, T. Theoharis and S. Perantonis, "Relevance feedback in content-based 3D Object Retrieval: A comparative study", Computer-Aided Design and Applications Journal, 5(5), 2008, pp. 753-763. (draft PDF)
In this paper, we present a comparative study that concerns relevance feedback (RF) algorithms in the context of content-based 3D object retrieval. In this study, we employ RF algorithms which range from query modification and multiple queries to one-class support vector machines (SVM). Furthermore, we employ pseudo relevance feedback (PRF) and show that it can considerably improve the performance of content-based retrieval. Our comparative study is based upon
We present a 3D shape retrieval methodology based on the theory of spherical harmonics. Using properties of spherical harmonics, scaling and axial flipping invariance is achieved. Rotation normalization is performed by employing the continuous principal component analysis along with a novel approach which applies PCA on the face normals of the model. The 3D model is decomposed into a set of spherical functions which represents not only the intersections of the corresponding surface with rays emanating from the origin but also points in the direction of each ray which are closer to the origin than the furthest intersection point. The superior performance of the proposed methodology is demonstrated through a comparison against state-of-the-art approaches on standard databases.
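The rotation-normalization step can be sketched with ordinary discrete PCA on a point set. The continuous PCA (CPCA) and the normals-based variant described above integrate over the surface, which this simple vertex-based sketch omits.

```python
import numpy as np

def pca_align(points):
    """Centre a point cloud and rotate it so its principal axes coincide
    with the coordinate axes (largest-variance axis first), giving a
    basic rotation-normalized pose."""
    centred = points - points.mean(axis=0)
    cov = centred.T @ centred / len(points)
    eigval, eigvec = np.linalg.eigh(cov)    # eigenvalues in ascending order
    axes = eigvec[:, ::-1]                  # largest-variance axis first
    if np.linalg.det(axes) < 0:             # keep a proper rotation (no flip)
        axes[:, -1] *= -1
    return centred @ axes
```

Residual axis flips are exactly why descriptors like the one above also build in flipping invariance rather than relying on PCA alone.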
Passalis G., T. Theoharis, I.A. Kakadiaris, ‘PTK: A Novel Depth Buffer-Based Shape Descriptor for Three-Dimensional Object Retrieval’, The Visual Computer, January 2007, pp. 5-14.
The increase in availability and use of digital three-dimensional (3D) synthetic or scanned objects makes necessary the availability of basic database operations, such as retrieval. Retrieval methods are based on the extraction of a compact shape descriptor; the challenge is to design a shape descriptor which captures the original object in sufficient detail to make accurate 3D object retrieval possible. Building on previous work, this paper proposes a novel depth buffer-based shape descriptor (called PTK) which encompasses symmetry, eigenvalue-related features and an object-thickness-related measure to provide accuracy that exceeds state-of-the-art methods. Evaluation of the novel method's parameters as well as a direct comparison to other approaches is performed using publicly available and widely used databases.
Vajramushti N., I. A. Kakadiaris, T. Theoharis, G. Papaioannou, ‘Efficient 3D Object Retrieval Using Depth Images’, Proceedings of the 6th ACM SIGMM international Workshop on Multimedia information retrieval 2004, New York, NY, USA October 15 - 16, 2004, pp. 189-196.
In this paper, we present a new three-dimensional object retrieval method. This method employs depth buffers for representing and comparing the objects. Specifically, multiple depth buffers per object (computed from different points of view) are compared for surface and volume similarity. Our method is easily extensible for hierarchical comparisons at multiple resolutions and is highly parallelizable. We have employed this method for both inter-class and intra-class retrieval tasks on a gallery of over 3,000 three-dimensional objects of vehicles with very encouraging results. The accuracy of the method depends on the number of depth buffers and the depth buffer resolution.
Toderici G., S.M. O'Malley, G. Passalis, T. Theoharis, I.A. Kakadiaris, "Ethnicity- and Gender-based Subject Retrieval Using 3D Face-Recognition Techniques", International Journal of Computer Vision (IJCV), 89(2/3), September 2010, pp. 382-391. (draft PDF)
While the retrieval of datasets from human subjects based on demographic characteristics such as gender or race is an ability with wide-ranging application, it remains poorly-studied. In contrast, a large body of work exists in the field of biometrics which has a different goal: the recognition of human subjects. Due to this disparity of interest, existing methods for retrieval based on demographic attributes tend to lag behind the more well-studied algorithms designed purely for face matching. The question this raises is whether a face recognition system could be leveraged to solve these other problems and, if so, how effective it could be. In the current work, we explore the limits of such a system for gender and ethnicity identification given (1) a ground truth of demographically-labeled, textureless 3-D models of human faces and (2) a state-of-the-art face-recognition algorithm. Once trained, our system is capable of classifying the gender and ethnicity of any such model of interest. Experiments are conducted on 4007 facial meshes from the benchmark Face Recognition Grand Challenge v2 dataset.
Perakis P., G. Passalis, T. Theoharis, G. Toderici, I.A. Kakadiaris, "Partial Matching of Interposed 3D Facial Data for Face Recognition", Best Reviewed Paper, IEEE Third International Conference on Biometrics Theory, Applications and Systems (BTAS 2009), Washington DC, September 2009. (draft PDF)
3D face recognition has lately received much attention due to its
robustness in the presence of lighting and pose variations. However,
certain pose variations often result in missing facial data. This is
common in realistic scenarios, such as uncontrolled environments and
uncooperative subjects. Most previous 3D face recognition methods do
not handle extensive missing data as they rely on frontal scans.
Currently, there is no method to perform recognition across
scans of different poses. A unified method that addresses the partial matching problem is proposed. Both frontal and side (left or right) facial scans are handled in a way that allows interpose retrieval operations. The main contributions of this paper include a novel 3D landmark detector and a deformable model framework that supports symmetric fitting. The landmark detector is utilized to detect the pose of the facial scan. This information is used to mark areas of missing data and to roughly register the facial scan with an Annotated Face Model (AFM). The AFM is fitted using a deformable model framework that introduces the method of exploiting facial symmetry where data are missing. Subsequently, a geometry image is extracted from the fitted AFM that is independent of the original pose of the facial scan. Retrieval operations, such as face identification, are then performed on a wavelet domain representation of the geometry image. Thorough testing
was performed by combining the largest publicly available databases. To the best of our knowledge, this is the first method that handles side scans with extensive missing data (e.g., up to half of the face missing).
Perakis P., T. Theoharis, G. Passalis and I.A. Kakadiaris, ‘Automatic 3D Facial Region Retrieval from Multi-pose Facial Datasets’, Eurographics/SIGGRAPH 2009 Workshop on 3D Object Retrieval, March 2009, pp. 37-44. (draft PDF)
The availability of 3D facial datasets is rapidly growing, mainly as a result of medical and biometric applications. These applications often require the retrieval of specific facial areas (such as the nasal region). The most crucial step in facial region retrieval is the detection of key 3D facial landmarks (e.g., the nose tip). A key advantage of 3D facial data over 2D facial data is their pose invariance. Any landmark detection method must therefore also be pose invariant. In this paper, we present the first 3D facial landmark detection method that works in datasets with pose rotations of up to 80° around the y-axis. It is tested on the largest publicly available 3D facial datasets, for which we have created a ground truth by manually annotating the 3D landmarks. Landmarks automatically detected by our method are then used to robustly retrieve facial regions from 3D facial datasets.
Kakadiaris I.A., H. Abdelmunim, W. Yang, and T. Theoharis, ‘Profile Based Face Recognition’, FG2008, 8th IEEE Int. Conf. on Automatic Face and Gesture Recognition, Amsterdam, September 2008. (draft PDF)
In this paper, we introduce a new system for profile based face recognition. The specific scenario involves a driver entering a gated area and using his/her side-view image (s/he sits inside the vehicle) as identification. The system has two modes: enrollment and identification. At the enrollment mode, 3D face models of subjects are acquired and the profiles extracted under different poses are stored to form a gallery database. At the identification mode, 2D images are acquired, and the corresponding planar profiles are extracted and used as probes. Then, probes are matched to the gallery profiles to determine identity. The matching is implemented using implicit shape registration via the vector distance functions. In our experiments, the implicit registration recognition exhibited higher accuracy than the iterative closest point methodology due to the use of more general transformations. The performance of our system is illustrated using a variety of databases.
Theoharis T., G. Passalis, G. Toderici, I.A. Kakadiaris, "Unified 3D Face and Ear Recognition using Wavelets on Geometry Images", Pattern Recognition, Special Issue: Multimodal Biometrics, 41, 2008, pp. 796-804.
As the accuracy of biometrics improves, it is getting increasingly hard to push the limits using a single modality. In this paper, a unified approach that fuses three-dimensional facial and ear data is presented. An annotated deformable model is fitted to the data and a geometry image is extracted. Wavelet coefficients are computed from the geometry image and used as a biometric signature. The method is evaluated using the largest publicly available database and achieves 99.7% rank-one recognition rate. The state-of-the-art accuracy of the multimodal fusion is attributed to the low correlation between the individual differentiability of the two modalities.
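The signature-extraction step (wavelet coefficients of the geometry image) can be sketched with a one-level 2D Haar decomposition in its averaging form. Keeping only the coarse band, and the number of levels, are illustrative choices rather than the paper's exact coefficient selection.

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2D Haar transform (averaging form): returns the
    coarse approximation and the three detail subbands."""
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0      # approximation
    lh = (a - b + c - d) / 4.0      # detail across columns
    hl = (a + b - c - d) / 4.0      # detail across rows
    hh = (a - b - c + d) / 4.0      # diagonal detail
    return ll, lh, hl, hh

def wavelet_signature(geometry_image, levels=2):
    """Recurse on the approximation band and keep only the coarsest
    coefficients as a compact biometric signature."""
    ll = geometry_image
    for _ in range(levels):
        ll, _, _, _ = haar2d_level(ll)
    return ll.ravel()
```

Because the geometry image is regularly sampled and registered, two such signatures can be compared directly, coefficient by coefficient.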
Toderici G., G. Passalis, T. Theoharis, and I.A. Kakadiaris.
In this paper, we present a novel method for human face modeling and its application to face relighting and recognition. An annotated face model is fitted onto the raw 3D data using a subdivision-based deformable model framework. The fitted face model is subsequently converted to a geometry image representation. This results in regularly sampled, registered and annotated geometry data. The albedo of the skin is retrieved by using an analytical skin reflectance model that removes the lighting (shadows, diffuse and specular) from the texture. Additional provisions are made such that if the input contains over-saturated specular highlights, an inpainting method with texture synthesis is used as a post-processing step in order to estimate the texture. The method is fully automatic and uses as input only the 3D geometry and texture data of the face, as acquired by commercial 3D scanners. No measurement or calibration of the lighting environment is required. The method’s fully automatic nature and its minimum input requirements make it applicable to both computer vision applications (e.g., face recognition) and computer graphics applications (e.g., relighting, face synthesis and facial expression transfer). Moreover, it allows the utilization of existing 3D facial databases. We present very encouraging results on a challenging dataset.
Kakadiaris I.A., G. Passalis, G. Toderici, N. Murtuza, Y. Lu, N. Karampatziakis, and T. Theoharis, ‘3D face recognition in the presence of facial expressions: An annotated deformable model approach’, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(4), April 2007, pp. 640-649. (draft PDF)
We present the computational tools and a hardware prototype for 3D face recognition. Full automation is provided through the use of advanced multistage alignment algorithms, resilience to facial expressions by employing a deformable model framework, and invariance to 3D capture devices through suitable preprocessing steps.
In addition, scalability in both time and
space is achieved by converting 3D facial scans into compact metadata. We present our results on the largest known, and now publicly available, Face Recognition Grand Challenge 3D facial database consisting of several thousand scans. To the best of our knowledge, this is the highest performance reported on the FRGC v2 database for the 3D modality.
G. Passalis, I. A. Kakadiaris, T. Theoharis, Intra-class retrieval of non-rigid 3D objects: Application to Face Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 29(2), February 2007, pp. 218-229. (draft PDF)
As the size of the available collections of 3D objects grows, database transactions become essential for their management, with the key operation being retrieval (query). Large collections are also pre-categorized into classes, so that a single class contains objects of the same type (e.g., human faces, cars, four-legged animals). It is shown that general object retrieval methods are inadequate for intra-class retrieval tasks. We advocate that such intra-class problems require a specialized method that can exploit the basic class characteristics in order to achieve higher accuracy. A novel 3D object retrieval method is presented which uses a Parameterized Annotated Model of the shape of the objects in a class incorporating its main characteristics. The annotated subdivision-based model is fitted onto objects of the class using a deformable model framework, converted to a geometry image and transformed into the wavelet domain. Object retrieval takes place in the wavelet domain. The method does not require user interaction, achieves high accuracy, it is efficient for use with large databases, and it is suitable for non-rigid object classes. We apply our method to the face recognition domain, one of the most challenging intra-class retrieval tasks. We utilized the Face Recognition Grand Challenge database, yielding an average verification rate of 95.2% at 0.0001 false accept rate.
Kakadiaris I.A., G. Passalis, G. Toderici, N. Karampatziakis, N. Murtuza, Y. Lu and T. Theoharis, ‘Multispectral face recognition; you can smile now!’, SPIE Defense and Security Symposium, Orlando, Florida, 17-21 April 2006.
Face recognition performance has always been affected by the different
facial expressions a subject may display. In this paper, we present an
extension to the UR3D face recognition algorithm, which enables us to
decrease the discrepancy in its performance for datasets from subjects
with and without a neutral facial expression, from 15% to 3%.
Kakadiaris I.A., G. Passalis, G. Toderici, N. Murtuza and T. Theoharis, ‘3D Face Recognition’, British Machine Vision Conference (BMVC), Edinburgh, September 2006. (draft PDF)
In this paper,
we present a new
3D face recognition approach. Full automation is provided through the
use of advanced multi-stage alignment algorithms, resilience to facial
expressions by employing a deformable model framework, and invariance
to 3D capture devices through suitable preprocessing steps. In
addition, scalability in both time and space is achieved by converting
3D facial scans into compact wavelet metadata. We present results on
the largest known, and now publicly-available, Face Recognition Grand
Challenge 3D facial database consisting of several thousand scans. To
the best of our knowledge, our approach has achieved the highest
accuracy on this dataset.
Kakadiaris I.A., G. Passalis,
G. Toderici, N. Murtuza, and T. Theoharis. "Quo Vadis, 3D Face and Ear
Recognition?" Multi-Sensory Multi-Modal Face Biometrics for Personal
Identification, Eds. R.I. Hammoud, B. Abidi and M. Abidi, Springer-Verlag, 2006 (book chapter).
In this chapter, we present the computational tools necessary for capture-device agnostic 3D face and ear recognition, as well as a hardware prototype for each of these biometrics. The 3D face recognition algorithm that we present is fully automatic, while the current 3D ear recognition algorithm requires manual initialization. Automation is provided by employing a deformable model
framework, while capture-device invariance is achieved by using suitable preprocessing algorithms which can be tuned to any specific 3D capture device. We achieve scalability of the system by converting the 3D fitted models of the face and ear into compact wavelet metadata. We present face recognition results on the largest known, publicly-available 3D facial database, which is used for the Face Recognition Grand Challenge. The 3D ear recognition results are reported on our own database and on a subset of the University of Notre Dame database. To the best of our knowledge, our 3D face recognition system achieves the highest performance reported on the FRGC v2 database for the 3D modality.
Kakadiaris I.A., G. Passalis, T. Theoharis, …
It is becoming increasingly important to be able to credential and identify authorized personnel at key points of entry. Such identity management systems commonly employ biometric identifiers. In this paper, we present a novel multimodal facial recognition approach that employs data from both visible spectrum and thermal infrared sensors. Data from multiple cameras is used to construct a three-dimensional mesh representing the face and a facial thermal texture map. An annotated face model with explicit two-dimensional parameterization (UV) is then fitted to this data to construct: 1) a three-channel UV deformation image encoding geometry, and 2) a one-channel UV vasculature image encoding facial vasculature. Recognition is accomplished by comparing: 1) the parametric deformation images, 2) the parametric vasculature images, and 3) the visible spectrum texture maps. The novelty of our work lies in the use of deformation images and physiological information as means for comparison. We have performed extensive tests on the Face Recognition Grand Challenge v1.0 dataset and on our own multimodal database with very encouraging results.
G. Passalis, I. A. Kakadiaris, T. Theoharis, G. Toderici and N. Murtuza, Evaluation of 3D Face Recognition in the presence of facial expressions: an Annotated Deformable Model approach, IEEE Workshop on Face Recognition Grand Challenge Experiments, San Diego, USA, June 2005
From a user's perspective, face recognition is one of the most desirable biometrics, due to its
non-intrusive nature; however, variables such as face expression tend
to severely affect recognition rates. We have applied to this problem
our previous work on elastically adaptive deformable models to obtain
parametric representations of the geometry of selected localized face
areas using an annotated face model. We then use wavelet analysis to
extract a compact biometric signature, thus allowing us to perform
rapid comparisons on either a global or a per area basis. To evaluate
the performance of our algorithm, we have conducted experiments using
data from the Face Recognition Grand Challenge data corpus, the largest
and most established data corpus for face recognition currently
available. Our results indicate that our algorithm exhibits high levels
of accuracy and robustness, and is not gender biased. In addition, it
is minimally affected by facial expressions.
Danelakis A., D. Verganelakis and T. Theoharis, ‘A New User-Friendly Visual Environment for Breast MRI Data Analysis’, Computer Methods and Programs in Biomedicine, Elsevier, June 2013, 110(3), Pages 411-423. (draft PDF)
In this paper a novel, user-friendly visual tool for Breast MRI Data Analysis (BreDAn) is presented. Given planar MRI images before and after IV contrast medium injection, BreDAn generates kinematic graphs, color maps of signal increase and decrease, and finally detects high-risk breast areas. The advantage of BreDAn, which has been validated and tested successfully, is the automation of the radiodiagnostic process in an accurate and reliable manner. It can potentially reduce radiologists' workload.
Nikolakopoulos G., P. Stavrou, D. Tsitsipis, D. Kandris, A. Tzes, T. Theoharis, "A dual scheme for compression and restoration of sequentially transmitted images over Wireless Sensor Networks", Ad Hoc Networks, 11(1), January 2013, pp. 410-426. (draft PDF)
In this article a dual scheme for compression and restoration of
sequentially transmitted images over Wireless Sensor Networks (WSN) is
presented. Nowadays such image based applications have significantly
increased and are characterized by the need to transfer high volumes of
multimedia data while exhibiting real time synchronization. Depending
on the type of wireless channel and communication protocol adopted,
during data transfer, a considerable number of data packets may fail to
reach the destination target, with a corresponding direct effect on the
received image quality. This loss of valuable information until now has
been addressed by
retransmission quality-of-service schemes. However, this approach increases the energy consumption along with the probability of congestion due to the rise in traffic load. This article proposes a dual transmission scheme for multimedia networks that aims to decrease the overall traffic load introduced by the retransmission schemes, while performing, at the receiver side, restoration of the image degradation caused by lost data packets. The proposed novel dual scheme is based on: a) the Quad Tree Decomposition (QTD) algorithm, which is adopted for compression of the image data to be transmitted by clustering the image into sets of variable size and of similar type of color information, and b) the fast image inpainting algorithm, which restores the effect of the missing data packets by reconstructing the image's missing portions from the surrounding information.
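The compression half of the dual scheme can be sketched as a recursive quadtree split; the uniformity threshold, minimum block size and mean-value leaf encoding below are illustrative assumptions, not the paper's actual codec or packetization:

```python
import numpy as np

def quadtree(block, x=0, y=0, threshold=10.0, min_size=2):
    """Recursively split a square block until each leaf holds near-uniform
    values. Returns leaves as (x, y, size, mean) tuples -- the data sent."""
    size = block.shape[0]
    if size <= min_size or block.max() - block.min() <= threshold:
        return [(x, y, size, float(block.mean()))]
    h = size // 2
    leaves = []
    leaves += quadtree(block[:h, :h], x,     y,     threshold, min_size)
    leaves += quadtree(block[:h, h:], x + h, y,     threshold, min_size)
    leaves += quadtree(block[h:, :h], x,     y + h, threshold, min_size)
    leaves += quadtree(block[h:, h:], x + h, y + h, threshold, min_size)
    return leaves

def rebuild(leaves, size):
    """Receiver side: paint every received leaf back; leaves lost in transit
    would remain as NaN holes for an inpainting step to fill."""
    img = np.full((size, size), np.nan)
    for x, y, s, mean in leaves:
        img[y:y + s, x:x + s] = mean
    return img
```

Uniform regions collapse to a single leaf, which is where the compression comes from; dropped leaves become localized holes, which is what makes inpainting a natural restoration step.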
Stavrou P., G. Nikolakopoulos, N. Fanakis, A. Tzes, T. Theoharis, "An Application of the Inpainting Algorithm for Recovery Packet Losses from Transmitting Sequential Quad Tree Compressed Images Over Wireless Sensor Networks", 2nd IFAC International Conference on Intelligent Control Systems and Signal Processing (ICONS 2009), Istanbul, September 2009. (draft PDF) In this article an application of the inpainting algorithm for recovering packet losses from transmitting sequential quad tree compressed images over wireless sensor networks is presented. The aim of the proposed scheme is to reconstruct the missing information at the receiver side, based on the inpainting algorithm, instead of increasing the overhead of the network by requesting that the transmitter re-transmit the lost data packets. The proposed architecture initially compresses the images sequentially using the quad tree decomposition algorithm. Subsequently, at the receiver side, the image is reconstructed as a lossy image, based on the available successfully received data packets, and the proposed scheme then applies the image inpainting algorithm in order to restore any missing partitions of the image. The inpainting algorithm is performed based on information derived from the received image itself. Experimental results are presented.
Manousopoulos P., V. Drakopoulos, T. Theoharis, “Parameter Identification of 1D Recurrent Fractal Interpolation Functions with Applications to Imaging and Signal Processing”, Journal of Mathematical Imaging and Vision, 40(2), 2011, pp. 162-170. (draft PDF)
Recurrent fractal interpolation functions are very useful in modelling irregular (non-smooth) data. Two methods that use bounding volumes and one that uses the concept of box-counting dimension are introduced for the identification of the vertical scaling factors of such functions. The first two minimize the area of the symmetric difference between the bounding volumes of the data points and their transformed images, while the latter aims at achieving the same box-counting dimension between the original and the reconstructed data. Comparative results with existing methods in imaging applications are given, indicating that the proposed ones are competitive alternatives for both low and high compression ratios.
Manousopoulos P., V. Drakopoulos and T. Theoharis, "Curve fitting by fractal interpolation", Transactions on Computational Science, 1, (Springer-Verlag journal within LNCS), pp. 85-103, 2008 (draft PDF) Fractal interpolation provides an efficient way to describe data that have an irregular or self-similar structure. Fractal interpolation literature focuses mainly on functions, i.e. on data points linearly ordered with respect to their abscissa. In practice, however, it is often useful to model curves as well as functions using fractal interpolation techniques. After reviewing existing methods for curve fitting using fractal interpolation, we introduce a new method that provides a more economical representation of curves than the existing ones. Comparative results show that the proposed method provides smaller errors or better compression ratios.
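The affine fractal interpolation construction that this line of work builds on can be rendered with a chaos-game iteration over the interval maps. The hand-picked vertical scaling factors d_i in this sketch are an assumption — identifying them from the data is exactly what the papers above address:

```python
import random

def fif_points(data, d, n_iter=20000, seed=0):
    """Chaos-game rendering of an affine fractal interpolation function.
    data: interpolation points [(x0,y0),...,(xN,yN)]; d: one vertical
    scaling factor per interval, each with |d_i| < 1 for contractivity."""
    (x0, y0), (xN, yN) = data[0], data[-1]
    span = xN - x0
    maps = []
    # Each affine map w_i sends the whole graph onto the i-th interval,
    # pinned so that w_i(x0,y0) and w_i(xN,yN) hit consecutive data points.
    for (xa, ya), (xb, yb), di in zip(data, data[1:], d):
        a = (xb - xa) / span
        e = (xN * xa - x0 * xb) / span
        c = (yb - ya) / span - di * (yN - y0) / span
        f = (xN * ya - x0 * yb) / span - di * (xN * y0 - x0 * yN) / span
        maps.append((a, e, c, di, f))
    rng = random.Random(seed)
    x, y = x0, y0
    pts = []
    for _ in range(n_iter):
        a, e, c, di, f = rng.choice(maps)
        x, y = a * x + e, c * x + di * y + f
        pts.append((x, y))
    return pts
```

The generated points densely sample the attractor, a graph that passes through all the interpolation points; larger |d_i| yields a rougher curve, which is why the scaling factors control the fit to irregular data.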
Manousopoulos P., V. Drakopoulos, T. Theoharis and P. Stavrou, ‘Effective representation of 2D and 3D data using fractal interpolation’, New Advances in Shape Analysis and Geometric Modeling (NASAGEM Workshop of the Cyberworlds), Hannover, Germany, October, 2007.
Methods for representing curves in R2 and R3 using fractal
interpolation techniques are presented. We show that such
representations are both effective and convenient for irregular or
complicated data. Experiments on various datasets, including geographical and medical data, verify the practical usefulness of these methods.
Manousopoulos P., V. Drakopoulos, and T. Theoharis, ‘Fractal Active Shape Models’,
12th International Conference on Computer Analysis of Images and
Patterns (CAIP 2007), Vienna, August 2007.
Active Shape Models often require a considerable number of
samples and landmark points on each sample, in order to be efficient in
practice. We introduce the Fractal Active Shape Models, an extension of
Active Shape Models using fractal interpolation, in order to surmount
these limitations. They require a considerably smaller number of
landmark points to be determined and a smaller number of variables for
describing a shape, especially for irregular ones. Moreover, they are
shown to be efficient when few training samples are available.
Drakopoulos V., N. Mimikou & T. Theoharis, “An Overview of Parallel Visualization Methods for Mandelbrot and Julia Sets”, Computers & Graphics, 27, 2003, pp. 635-646.
We present a comparative study of parallelisation schemes for the most widely used methods for the visualisation of the Mandelbrot and Julia sets. The compared methods render either the actual attractor or its complement.
Drakopoulos V., Are there any Julia sets for the Laguerre iteration function?, Computers & Mathematics with Applications, 46 (2003), pp. 1201-1210. For polynomials some of whose zeros are complex, little is known about the overall convergence properties of Laguerre's method. We examine the existence of free critical points of the Laguerre function as applied to one-parameter families of quadratic and cubic polynomials and, with the help of microcomputer plots, investigate the Julia sets of this function, which is constructed to converge to the nth roots of unity.
Drakopoulos V., Schröder iteration functions associated with a one-parameter family of biquadratic polynomials, Chaos, Solitons & Fractals, 13 (2002), pp. 233-243. Schröder iteration functions, a generalization of the Newton-Raphson method to determine roots of equations, are generally rational functions which possess some critical points that are free to converge to attracting cycles. With the help of microcomputer plots, we examine the orbits of all these free critical points associated with a particular one-parameter family of quartic polynomials by walking through their parameter space. This examination takes place in the complex plane as well as on the Riemann sphere.
Dalla L. and Drakopoulos V., On the parameter identification problem in the plane and the Polar Fractal Interpolation Functions, Journal of Approximation Theory, 101 (1999), 290-303.
Fractal interpolation functions provide a new means for fitting experimental data, and their graphs can be used to approximate natural scenes. We first determine the conditions that a vertical scaling factor must obey to model an arbitrary function. We then introduce polar fractal interpolation functions, a fractal interpolation method of a non-affine character, which may therefore be suitable for a wider range of applications than the affine one. The interpolation takes place in polar coordinates, and then, through a non-affine transformation, a simple closed curve arises as an attractor that interpolates the data in the usual plane coordinates. Finally, we prove that this attractor has the same Hausdorff dimension as the polar one.
Drakopoulos V., How is the dynamics of König iteration functions affected by their additional fixed points?, Fractals 7 (1999), 327-334.
König iteration functions are a generalization of the Newton-Raphson method to determine roots of equations. These higher-degree rational functions possess additional fixed points, which are generally different from the desired roots. We first prove two new results: firstly, about these additional fixed points and, secondly, about the Julia sets of the König iteration functions associated with the one-parameter family of quadratic polynomials. Then, after finding all the critical points of the König iteration functions as applied to a one-parameter family of cubic polynomials, we examine the orbits of the ones available for convergence to an attracting cycle, should such a cycle exist.
Drakopoulos V., Argyropoulos N. and Böhm A., Generalized computation of Schröder iteration functions to motivate families of Julia and Mandelbrot-like sets, SIAM Journal on Numerical Analysis 36 (1999), 417-435.
Schröder iteration functions, a generalization of the Newton-Raphson method to determine roots of equations, are generally rational functions which possess some critical points that are free to converge to attracting cycles. These critical points, however, satisfy some higher-degree polynomial equations. We present a new algorithmic construction to compute, in general, all of their terms, as well as a way to maximize the computational efficiency of these functions associated with a one-parameter family of cubic polynomials. Finally, we examine the Julia sets of the Schröder functions constructed to converge to the nth roots of unity, these roots' basins of attraction, and the orbits of all free critical points of these functions for order higher than four, as applied to the one-parameter family of cubic polynomials mentioned above.
Drakopoulos V. and Dalla L., Space-filling curves generated by fractal interpolation functions, in Iliev O. P., Kaschiev M. S., Margenov S. D., Sendov Bl. H. and Vassilevski P. S. (eds), Recent advances in numerical methods and applications II, World Scientific, Singapore, 1999, 784-792.
Interpolation formulas are the starting points in the derivations of many methods in various areas of Numerical Analysis. The goal is always the same: to represent data by a classical geometrical entity. Fractal interpolation functions provide a new means for fitting experimental data, and their graphs can be used to approximate natural scenes. We show how one can construct space-filling curves using hidden variable linear fractal interpolation functions. These curves result from the projection of the attractor of an iterated function system.
Drakopoulos V. and Georgiou S., Visualization on the Riemann sphere of Schröder iteration functions and their efficient computation, in Mastorakis N. E. (ed), Modern applied mathematics techniques in Circuits, Systems and Control, World Scientific Engineering Society,New York and Athens, 1999, 131-137.
The theory of Julia and Fatou holds for the complex plane as well as for the Riemann sphere, with minor modifications. The latter space offers a more natural setting in which to examine rational functions. Schröder iteration functions, a generalization of the Newton-Raphson method to determine roots of equations, are generally rational functions which, until now, have been examined only in the complex plane. On the Riemann sphere, we examine the Julia sets of the Schröder functions constructed to converge to the nth roots of unity and the orbits of all free critical points of these functions as applied to a one-parameter family of cubic polynomials. Finally, we present a new algorithmic construction to maximize the computational efficiency of the Schröder functions.
Drakopoulos V., Tziovaras A., Böhm A. and Dalla L., Fractal interpolation techniques for the generation of space-filling curves, in Lipitakis E. A. (ed), Hellenic European Research on Computer Mathematics and its Applications, LEA, Athens, 1999, 843-850.
Interpolation formulas are the starting points in the derivations of many methods in various areas of Numerical Analysis. The goal is always the same: to represent data by a classical geometrical entity. Fractal interpolation functions provide a new means for fitting experimental data, and their graphs can be used to approximate natural scenes. We show how the theory of linear fractal interpolation functions, together with the Deterministic Algorithm, can be used to construct space-filling curves.
Drakopoulos V. and Böhm A., Basins of attraction and Julia sets of Schröder iteration functions, in Bountis T. and Pnevmatikos Sp. (eds), Order and Chaos in Nonlinear Dynamical Systems, Pnevmatikos, Athens, 1998, 157-163. Schröder iteration functions are a generalization of the Newton-Raphson method to determine roots of equations. We examine the Julia sets of the Schröder functions constructed to converge to the nth roots of unity as well as these roots' basins of attraction.
Drakopoulos V., On the additional fixed points of Schröder iteration functions associated with a one-parameter family of cubic polynomials, Computers & Graphics 22 (1998), 629-634. Schröder iteration functions are a generalization of the Newton-Raphson method to determine roots of equations. These higher-degree rational functions, however, possess additional fixed points which are generally distinct from the desired roots and which satisfy some polynomial equations of order higher than three. With the help of microcomputer plots, we observe the behaviour of their roots, of which the attracting ones lead to pathological cycles, that is, to points or cycles which do not correspond to the desired roots.
Argyropoulos N., Böhm A. and Drakopoulos V., Julia and Mandelbrot-like sets for higher order König iteration functions, in Novak M. M. and Dewey T. G. (eds), Fractal Frontiers, World Scientific, 1997.
Abdallah A.E. & T. Theoharis, “A Functional View of Parallel Computer Graphics”, ACS/IEEE International Conference on Computer Systems and Applications.
A formal functional specification of the Graphics Output Pipeline is given. There are three advantages to this: first, it provides a natural abstraction of the pipeline; second, many computer graphics optimization techniques can be directly obtained from the functional specification by applying general and well understood transformational programming algebraic laws on functional expressions; third, a number of highly parallel implementations suited to various parallel architectures can be derived from the initial specification by a systematic application of general transformation strategies for parallelizing functional programs.
G. Simiakakis & T. Theoharis, "Formal Derivation of a Reconfigurable Tool for Parallel DNA Matching", in special session on Formal Methods for Engineering Special Purpose Parallel Systems within the International Conference on Electronic Circuits and Systems, Kaslik, Lebanon, 17-20 December 2000.
DNA matching is a computationally demanding task. The Human Genome Project is producing vast quantities of data, which have to be analyzed. A formal description of the process of searching a DNA sequence is given and an efficient parallel algorithm is derived using formal methods. The algorithm is implemented on an FPGA using Handel-C, a language that enables the compilation of high-level algorithms directly into gate-level hardware, thus reducing the development time. The designed algorithm makes no assumptions about DNA transformations, and is therefore a very powerful tool that can be used in conjunction with an expert system to automatically detect patterns of interest in the DNA.
Theoharis T. & A.E. Abdallah, "Formal Derivation of two Parallel Rendering Algorithms", in Proc. Parallel and Distributed Processing Techniques and Applications (PDPTA'99), CSREA, Las Vegas, 1999, pp. 1444-1450. (draft PDF)
Two parallel rendering algorithms are derived from a high-level specification. The initial specification is formulated as a functional program. A calculational approach is used to derive, from the original specification, two algorithms expressed as networks of communicating processes in Hoare's CSP. Both algorithms exploit parallelism in order to achieve efficiency. The first algorithm is massively parallel, while the second uses a fixed number of processing elements. The derivation is done in two stages. Firstly, a calculus of functional decomposition is used in order to transform the specification into an instance of a generic parallel functional form. Secondly, the generic functional form is refined into a collection of communicating processes in CSP using a formal refinement framework.
Abdallah A.E. & T. Theoharis,
Parallel Algorithms on a Ring of Processors", in
An operator which takes two lists as arguments is called a multiscan operator if every element of the first list must be considered in conjunction with every element of the second list in order to produce the result. Several operators, such as the relational database operators join, intersection and difference, can be expressed as specific instances of the multiscan operator. In this paper we consider a generic functional definition of multiscan and show how it can be implemented as a network of communicating sequential processes (CSP) with a ring configuration. We analyze the performance of the parallel implementation and identify two properties which, if possessed by a multiscan operator, allow the derivation of an efficient, scalable parallel implementation on a ring of processors. A practical illustration from the field of relational databases is given.
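The generic multiscan operator described above is easy to write down functionally; the pairing-function formulation below is only an illustrative reading of the definition, with join and intersection as instances, not the paper's CSP ring implementation:

```python
def multiscan(combine, xs, ys):
    """Generic multiscan: every element of xs is considered against every
    element of ys; `combine` decides what each element contributes."""
    return [r for x in xs for r in combine(x, ys)]

def intersect(xs, ys):
    """Intersection as a multiscan instance: x contributes itself iff it
    matches some element of ys."""
    return multiscan(lambda x, ys: [x] if x in ys else [], xs, ys)

def join(xs, ys):
    """Natural join on the first field -- another instance of the same
    operator over lists of (key, value) pairs."""
    return multiscan(
        lambda x, ys: [(x[0], x[1], y[1]) for y in ys if y[0] == x[0]],
        xs, ys)
```

Because each `combine(x, ys)` is independent, the xs can be distributed around a ring of processors while the ys circulate — the structure the paper's CSP derivation exploits.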
Abdallah A.E. & T. Theoharis,
"Synthesis of Massively
Pipelined Algorithms from Recursive Functional Programs", in Proc. 8th IASTED Int. Conf. (Parallel and Distributed Computing and Systems).
The aim of this paper is to show how a class of recursively defined functional programs can be efficiently implemented as massively parallel networks of Communicating Sequential Processes (CSP). The method aims at achieving efficiency by exploiting the pipeline parallelism which is inherent in functional programs. The backbone of the method is a collection of transformation rules for unrolling recursion. Each of these rules transforms a generic recursive functional definition into a composition of a fixed number of simpler functions. Parallelism is realised in CSP by implementing the composition of functions as piping of appropriate processes.
Kalousis A. & T. Theoharis, "NOEMON: Design, Implementation and Performance of an Intelligent Assistant for Classifier Selection", Intelligent Data Analysis (Elsevier), 3(5), 1999, pp. 319-337.
Selecting the appropriate classification model and algorithm is crucial for effective knowledge discovery on a dataset. For large databases, common in data mining, such a selection is necessary, because the cost of invoking all alternative classifiers is prohibitive. This selection task is impeded by two factors. First, there are many performance criteria and the behaviour of a classifier varies considerably with them. Second, a classifier's performance is strongly affected by the characteristics of the dataset. Thus, classifier selection implies mastering a lot of background information on the dataset, the models and the algorithms in question. An intelligent assistant can reduce this effort by inducing helpful suggestions from background information. In this study, we present such an assistant, NOEMON. For each registered classifier, NOEMON measures its performance for a collection of datasets. Rules are induced from those measurements and accommodated in a knowledge base. The suggestion on the most appropriate classifier(s) for a dataset is then based on those rules. Results on the performance of an implemented prototype are also given.
Kalousis A., G. Zarkadakis & T. Theoharis, "Adding Intelligence to the Knowledge Discovery Process", in Proc. Expert Systems 97.
In this paper an architecture is proposed that supports the selection of the task, model and algorithm in the knowledge discovery process by means of artificial intelligence techniques. The proposed system acts as an assistant to the analyst by suggesting possible selections. It improves its performance by observing the analysis sequences applied by the analyst to new data sets. The system associates dataset characteristics with specific selections that lead to positive or negative results and uses this information to guide later analysis.
Theoharis T. & Y. Cotronis, "FlexParDB: An RDBMS Employing Intra-Query and Operator Parallelism", in Proc. Applications of High Performance Computing in Engineering V.
FlexParDB, a relational algebra query execution system, is presented, which combines intra-query and operator parallelism. Intra-query parallelism is expressed in the wavesets, which are a partition of the set of query-tree operators; valid wavesets are consistent with the ordering from the leaves to the root of the query-tree. The wavesets represent the query execution plan. A simple script language for the description of a query-tree and its wavesets has been developed. Wavesets are realised as executions of ParDB, a parallel RDBMS implemented on a massively parallel Transputer platform.
Theoharis T. & A. Abdallah, "Distributed Resource Performance Simulation", in Proc. 2nd LAAS Int. Conference on Computer Simulation, Lebanon, September 1997, pp. 129-134.
A simulation model is presented for a shared-nothing parallel system in which a resource appears to be available at every processor node but actually exists fewer times, causing sharing between processors and leading to performance degradation. The model is also applicable if multiple resource classes exist. It is a useful performance prototyping tool and its application to a parallel RDBMS is discussed.
Theoharis T. & A. Paraskevopoulos, "PARDB: Design, Algorithms and Performance of a Transputer Based Parallel Relational DBMS", Transputer Communications, 3(4), 1996, pp. 233-260. (draft PDF)
The design choices involved in building parallel RDBMSs are discussed and the design choices made for the author's system, PARDB, are justified. The parallel architecture, the process structure and the parallel algorithms used to implement each of the relational operators in PARDB are described. Performance measurements show that near-linear speedup is achieved in uniscan operators (such as select), whereas multiscan operators (such as join) perform slightly worse because of the communication involved in exchanging relation data between the processors.
Katsaloulis P., T. Theoharis, W. M. Zheng, B.L. Hao, A. Bountis, Y. Almirantis and A. Provata, 'Long range correlations of RNA polymerase II promoter sequences across organisms', Physica A, 366, 2006, pp. 308-322.
The statistical properties of the size distribution of DNA segments separating identical oligonucleotides are studied. For representative eukaryotes (Homo sapiens, Mus musculus, Saccharomyces cerevisiae, Oryza sativa, Arabidopsis thaliana) we have demonstrated the existence of long-range correlations for the distances separating oligonucleotides of sizes 4, 5 and 6, which carry a promoter signature. This observation is independent of the consensus sequence used by the organism, as in the case of O. sativa (which mainly uses the CG promoter box) and A. thaliana (which mainly uses the TATA promoter box). If we use two parameters to characterise the size distribution separating oligonucleotides, we observe that oligonucleotides containing promoter signatures cluster together, away from the others.
Katsaloulis P., E. Floros, A. Provata,. Y. Cotronis and T. Theoharis, ‘Gridification of the SHMap Biocomputational Algorithm’, IEEE ITAB ’06, Ioannina, October 2006. (draft PDF)
In the current study we present a parallel statistical
algorithm (SHMap), which distinguishes DNA regions which possibly carry
protein coding information from non-coding regions. The code was
parallelized using MPI and was deployed in a Grid testbed provided by
the CrossGrid IST European Project. We tested the SHMap algorithm on
mammalian chromosomes. The parallelization of the algorithm and the use of the Grid environment achieve a significant speed-up compared to the sequential version of the algorithm, although the use of the Grid environment still presents some problems. Our algorithm is open source and freely distributed from our website.
Katsaloulis P., T. Theoharis & A. Provata, 'Statistical Algorithms for Long DNA Sequences: Oligonucleotide Distributions and Homogeneity Maps', Scientific Programming, 13(3), 2005, pp. 177-188 (draft PDF)
The statistical properties of oligonucleotide appearances within long DNA sequences often reveal useful characteristics of the corresponding DNA areas. Two algorithms to statistically analyze oligonucleotide appearances within long DNA sequences in genome banks are presented. The first algorithm determines statistical indices for arbitrary length oligonucleotides within arbitrary length DNA sequences. The critical exponent of the distance distribution between consecutive occurrences of the same oligonucleotide is calculated and its value is shown to characterize the functionality of the oligonucleotide. The second algorithm searches for areas with variable homogeneity, based on the density of oligonucleotides. The two algorithms have been applied to representative eukaryotes (the animal Mus musculus and the plant Arabidopsis thaliana) and interesting results were obtained, confirmed by biological observations. All programs are open source and publicly available on our web site.
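The core statistic of the first algorithm — the distribution of distances between consecutive occurrences of the same oligonucleotide — can be stated in a few lines; this toy counter is a sketch of the idea, not the authors' released program:

```python
from collections import Counter

def recurrence_distances(seq, oligo):
    """Distances between consecutive occurrences of `oligo` in `seq`
    (overlapping occurrences are counted)."""
    k = len(oligo)
    positions = [i for i in range(len(seq) - k + 1) if seq[i:i + k] == oligo]
    return [b - a for a, b in zip(positions, positions[1:])]

def distance_histogram(seq, oligo):
    """Empirical distance distribution whose tail exponent is the
    quantity analysed in the paper."""
    return Counter(recurrence_distances(seq, oligo))
```

Fitting a power law to the tail of this histogram gives the critical exponent the paper uses to characterize the oligonucleotide's functionality.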
Theoharis T. & J.J. Modi, "Implementation of Matrix Multiplication on the T-RACK", Parallel Computing, 14, 1990, pp. 229-233 (draft PDF)
It turns out to be advantageous to partition matrices into blocks for multiplication on the T-RACK, which is a MIMD system consisting of 64 floating-point transputers with partially re-configurable linkage.
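Block partitioning for matrix multiplication can be sketched in single-process form; the block size and the sequential loops below are illustrative, and the T-RACK's transputer communication pattern is omitted:

```python
import numpy as np

def blocked_matmul(A, B, bs=16):
    """C = A @ B computed block-by-block; on a machine like the T-RACK
    each (i, j) block of C could be assigned to a different processor,
    with only the needed blocks of A and B communicated to it."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must match"
    C = np.zeros((n, p))
    for i in range(0, n, bs):
        for j in range(0, p, bs):
            for k in range(0, m, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C
```

The advantage on a message-passing machine is that each processor works on a block-sized working set, trading one large data exchange for many small, local multiplications.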
Two viewing models for a three-dimensional display are presented and a simple parallel processing scheme is outlined.