Keynote
Lecture 1
Realistic 3D Simulation of Garments
Prof. André Gagalowicz
INRIA Rocquencourt
France
Brief Bio:
Prof. André Gagalowicz is a research director at INRIA, France. In 1984 he created the first laboratory devoted to image analysis/synthesis collaboration techniques. He graduated from École Supérieure d'Électricité in 1971 (engineer in Electrical Engineering), obtained his PhD in Automatic Control from the University of Paris XI, Orsay, in 1973, and his state doctorate in Mathematics (doctorat d'État ès Sciences) from the University of Paris VI (1983). He is fluent in English, German, Russian and Polish, and obtained a bachelor's degree in Chinese from the University of Paris IX, INALCO, in 1983. His research interests are in 3D approaches for computer vision, computer graphics and their cooperation, as well as in digital image processing and pattern recognition. He received the prizes for the best scientific communication and the best technical slide at the Eurographics'85 conference. He was awarded the second prize of the Seymour Cray competition in 1991, and one of his papers was selected by the Computers and Graphics journal as one of its three best publications over the preceding ten years. He has contributed to the writing of eight books and has authored around two hundred publications. He was the founder and the last chairman of the MIRAGES international conference, whose last edition took place at INRIA, France, from the 3rd to the 5th of March 2005. This conference is exclusively dedicated to computer vision/computer graphics collaboration techniques.
Abstract:
The goal of the presented work is to allow a future client to buy a garment directly over the Internet. He/she will be able to choose the garment and its material, and to see himself/herself in 3D on a simple PC screen, wearing the garment he/she has not yet bought.
The presentation will be restricted to the case of warp/weft materials.
Our aim is to produce realistic 3D simulations, which is a necessary condition for their commercial use. Garments have to correspond exactly to the style that a future client has chosen, and the rendering of the textile material, which is strongly influenced by its mechanical properties, has to be realistic as well.
We will first concentrate on the mechanical properties of warp/weft materials and describe Kawabata's results on the characterization of such textiles, summarized by his famous Kawabata Evaluation System (K.E.S.), which will also be discussed. The most important outcome of his work is the proof that textile material has a non-linear hysteretic behaviour; it is fundamental to incorporate these properties into a realistic material model.
We will first describe the overall technique used to produce a 3D mannequin wearing a specific garment constructed from a set of 2D patterns of the same type as those employed to create real garments.
We will then describe the mass/spring model used to model realistically the mechanical behaviour of textile, and how it is mapped onto each 2D pattern.
We will then discuss a technique allowing the automatic pre-positioning of the 2D patterns around the body, and how these 2D patterns are sewn together.
We will finally present the procedure used to animate the global mass/spring system in order to produce the garment's evolution around the body. The results validating our choice of a non-linear mass/spring system will be shown.
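As a rough illustration of the kind of model discussed above, the following is a minimal, hypothetical non-linear mass/spring sketch. The cubic stiffening term, all constants, and the simple semi-implicit Euler integrator are illustrative assumptions only, not the speaker's actual formulation (which is fitted to Kawabata measurements and includes hysteresis):

```python
# Minimal, hypothetical sketch of a non-linear mass/spring step; the cubic
# stiffening term and all constants are illustrative assumptions, not the
# speaker's fitted model (which also includes hysteresis).
import math

def spring_force(rest_len, k_lin, k_cubic, p_a, p_b):
    """Force on particle a from its spring to b, stiffening with strain."""
    dx, dy = p_b[0] - p_a[0], p_b[1] - p_a[1]
    dist = math.hypot(dx, dy)
    strain = (dist - rest_len) / rest_len
    # Non-linear response: textiles resist large strains far more strongly
    # than Hooke's law predicts, hence the added cubic term.
    magnitude = k_lin * strain + k_cubic * strain ** 3
    return (magnitude * dx / dist, magnitude * dy / dist)

def step(positions, velocities, springs, mass, dt, gravity=-9.81, damp=0.99):
    """One semi-implicit Euler step of the global system (2D, node 0 pinned)."""
    forces = [[0.0, mass * gravity] for _ in positions]
    for a, b, rest in springs:
        fx, fy = spring_force(rest, 50.0, 5000.0, positions[a], positions[b])
        forces[a][0] += fx; forces[a][1] += fy
        forces[b][0] -= fx; forces[b][1] -= fy
    new_pos, new_vel = [positions[0]], [(0.0, 0.0)]  # node 0 stays fixed
    for i in range(1, len(positions)):
        (x, y), (vx, vy), (fx, fy) = positions[i], velocities[i], forces[i]
        vx = damp * (vx + dt * fx / mass)
        vy = damp * (vy + dt * fy / mass)
        new_pos.append((x + dt * vx, y + dt * vy))
        new_vel.append((vx, vy))
    return new_pos, new_vel

# A hanging chain of three particles joined by two springs, settling under gravity.
pos = [(0.0, 0.0), (0.0, -0.1), (0.0, -0.2)]
vel = [(0.0, 0.0)] * 3
springs = [(0, 1, 0.1), (1, 2, 0.1)]
for _ in range(500):
    pos, vel = step(pos, vel, springs, mass=0.01, dt=0.001)
# The free end has sagged slightly below its initial height of -0.2.
```

In a real garment simulator this step runs over thousands of nodes mapped onto the 2D patterns, with collision handling interleaved between steps.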
Some details will be given regarding collision detection, the response of the system in case of collision, and the implementation of our technique. In conclusion, we will discuss the remaining problems and our envisioned extensions.
Some videos showing various garment simulations on a numerical mannequin of a real person (obtained with a 3D scanner) will close the presentation.
Keynote
Lecture 2
The Influence of Rendering Styles on Participant Responses in Immersive Virtual Environments

Prof. Mel Slater
Prof. Mel Slater
Universitat Politècnica de Catalunya
Spain
Brief Bio:
Mel Slater became Professor of Virtual Environments at University College London in 1997. Before that he was at Queen Mary, University of London, where he was Head of the Department of Computer Science from 1993 to 1995. His research spans computer graphics and virtual environments. He has been involved in many funded projects over the past decade, and obtained funding for the UCL 'Cave' system (£900,000) and, more recently, further support for it (£350,000). Since 1989, seventeen of his PhD students have obtained their doctorates, and he is currently supervising ten students. He is co-Editor-in-Chief of Presence: Teleoperators and Virtual Environments, was co-Programme Chair of the Eurographics Conference 2004, and has served on the SIGGRAPH papers panel four times since 1999. He was an Engineering and Physical Sciences Senior Research Fellow from October 1999 for five years, working on the Virtual Light Field approach to computer graphics rendering. His book 'Computer Graphics and Virtual Environments: From Realism to Real-Time', with co-authors A. Steed and Y. Chrysanthou, was published in 2001. He led a European consortium (PRESENCIA) funded under the European FET Presence Research initiative from 2002 to 2005, and leads a follow-on European Integrated Project, PRESENCCIA, running for four years from January 2006. He was awarded a higher doctorate from London University (DSc, Computer Science) in September 2002 for his work on 'Presence in Virtual Environments', and received the IEEE 2005 Virtual Reality Career Award 'In Recognition of Pioneering Achievements in Theory and Applications of Virtual Reality'. During 2005 he was a visiting scientist at the Instituto de Neurociencias de Alicante, Universidad Miguel Hernandez-CSIC. Since January 2006 he has been an ICREA Professor at the Universitat Politècnica de Catalunya.
Abstract:
What influence does rendering style have on the responses of participants in immersive virtual environments? In this talk we will consider the extent of presence of participants in an immersive virtual environment when they experience a scene rendered with real-time ray tracing, compared with standard OpenGL-style rendering. An experiment will be described in detail and its results presented. The question of the impact of visual realism on presence remains open to date, and this experiment provides further evidence in this debate.
Keynote
Lecture 3
Recognition of Human Activity and Object Interactions
Prof. Jake K. Aggarwal
The University of Texas at Austin
U.S.A.
Brief Bio:
J.K. Aggarwal has served on the faculty of The University of Texas at Austin College of Engineering in the Department of Electrical and Computer Engineering since 1964. He is currently one of the Cullen Professors of Electrical and Computer Engineering.
Professor Aggarwal earned his B.Sc. from the University of Bombay, India, in 1957, his B.Eng. from the University of Liverpool, England, in 1960, and his M.S. and Ph.D. from the University of Illinois, Urbana, in 1961 and 1964, respectively.
His research interests include image processing, computer vision and pattern recognition. The current focus of research is on the automatic recognition of human activity and interactions in video sequences, and on the use of perceptual grouping for the automatic recognition and retrieval of images and videos from databases.
A fellow of IEEE (1976) and IAPR (1998), Professor Aggarwal received the Best Paper Award of the Pattern Recognition Society in 1975, the Senior Research Award of the American Society of Engineering Education in 1992 and the IEEE Computer Society Technical Achievement Award in 1996. He is the recipient of the 2004 K. S. Fu Prize of the IAPR and the 2005 Leon K. Kirchmayer Graduate Teaching Award of the IEEE. He is the author or editor of 7 books and 52 book chapters, author of over 200 journal papers, as well as numerous proceeding papers and technical reports.
He has served as the Chairman of the IEEE Computer Society Technical Committee on Pattern Analysis and Machine Intelligence (1987-1989), Director of the NATO Advanced Research Workshop on Multisensor Fusion for Computer Vision, Grenoble, France (1989), Chairman of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (1993), and the President of the International Association for Pattern Recognition (1992-1994). He is a life fellow of IEEE and Golden Core Member of IEEE Computer Society.
Abstract:
The development of computer vision systems able to detect humans and to recognize their activities is a broad effort, with applications in areas including virtual reality, smart monitoring and surveillance systems, motion analysis in sports, medicine and choreography, and vision-based user interfaces. The understanding of human activity is a diverse and complex subject that includes tracking and modeling human activity and representing video events at the semantic level. Its scope ranges from understanding the actions of an isolated person to understanding the actions and interactions of a crowd, or the interaction of objects such as pieces of luggage or cars with persons.
At The University of Texas at Austin, we are pursuing a number of projects on human motion. Professor Aggarwal will present his research on the modeling and recognition of human actions and interactions, and of human and object interactions. The work studies interactions both at the gross level and at the detailed level, which present different problems in terms of observation and analysis. At the gross level we model persons as blobs, while at the detailed level we conceptualize human actions in terms of an operational triplet 'agent-motion-target', similar to the 'verb argument structure' in linguistics. We consider atomic actions, composite actions and interactions, and continued and recursive activities. In addition, we consider interactions between a person and an object, such as climbing a fence. The issues raised by these problems illustrate the richness and the difficulty associated with understanding human motion. The application of this research to monitoring and surveillance will be discussed, together with actual examples.
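As an illustrative aside (a hypothetical sketch, not Prof. Aggarwal's implementation), the 'agent-motion-target' triplet representation mentioned above, and its composition into composite actions, might be encoded as:

```python
# Hypothetical sketch: encoding detailed-level events as 'agent-motion-target'
# triplets and composing them into a composite action, as described in the
# abstract. All class names and the step decomposition are our own assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Triplet:
    """One atomic event: who (agent) does what (motion) to what (target)."""
    agent: str
    motion: str
    target: str

@dataclass
class CompositeAction:
    """A named sequence of atomic triplets."""
    name: str
    steps: List[Triplet]

# A person-object interaction such as climbing a fence, broken into steps.
climb = CompositeAction("climb_fence", [
    Triplet("person", "approach", "fence"),
    Triplet("person", "grasp", "fence"),
    Triplet("person", "pull_up", "fence"),
    Triplet("person", "descend", "ground"),
])
print([f"{t.agent}-{t.motion}-{t.target}" for t in climb.steps])
```

A recognition system would then match observed motion segments against such triplet sequences, much as a parser matches words against a verb-argument structure.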
Keynote
Lecture 4
High Dynamic Range Imaging and Display
Prof. Wolfgang Heidrich
University of British Columbia
Canada
Brief Bio:
Wolfgang Heidrich is an Associate Professor in Computer Science at the
University of British Columbia. He received a PhD in Computer Science
from the University of Erlangen in 1999, and then worked as a Research
Associate at the Computer Graphics Group of the Max-Planck-Institute
for Computer Science in Saarbrücken, Germany, before joining UBC in
2000. Heidrich's research interests lie at the intersection of
computer graphics, computer vision, imaging, and optics. In
particular, he has worked on High Dynamic Range imaging and display,
image-based modeling, measuring, and rendering, geometry acquisition,
GPU-based rendering, and global illumination. Heidrich has written
over 80 refereed publications on these subjects and has served on
numerous program committees. He was program co-chair for Graphics
Hardware 2002, Graphics Interface 2004, and the Eurographics
Symposium on Rendering 2006.
Abstract:
The human visual system's ability to process wide ranges of intensities far exceeds the capabilities of current imaging systems. Both cameras and displays are currently limited to a dynamic range (contrast) of between 300:1 and 1,000:1, while the human visual system can process a simultaneous dynamic range of 50,000:1 or more, and can adapt to a much larger range.
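As a back-of-the-envelope illustration (our own, not from the talk), these contrast figures can be restated in photographic stops, i.e. doublings of intensity:

```python
# Back-of-the-envelope illustration: expressing the contrast ratios quoted
# above as photographic stops (each stop is a doubling of intensity).
import math

def stops(contrast_ratio):
    """Number of stops (doublings of intensity) spanned by a contrast ratio."""
    return math.log2(contrast_ratio)

print(f"1,000:1 display  -> {stops(1_000):.1f} stops")   # ~10.0 stops
print(f"50,000:1 vision  -> {stops(50_000):.1f} stops")  # ~15.6 stops
```

The gap of roughly five to six stops between conventional displays and simultaneous human vision is what the HDR pipeline described below aims to close.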
High-dynamic-range (HDR) imaging refers to the capture, processing, storage, and display of images with significantly improved contrast and brightness compared to the conventional imaging pipeline. This new HDR imaging pipeline is designed to match the power of the human visual system. HDR displays significantly improve the sense of realism and immersion when showing both real and synthetic HDR imagery. Likewise, HDR cameras are able to take images without saturation under difficult lighting situations. The additional information captured in both extremely bright and extremely dark regions is useful as an input for HDR displays, but also for machine vision applications.
In this talk, I will summarize the results of a multi-disciplinary research effort to create the first true HDR display. This work is a collaboration between multiple departments at The University of British Columbia and a spinoff company, Brightside Technologies. I will provide an overview of current research activities, with a focus on computational problems. The talk and Q&A period will be followed by a demo of Brightside's commercial HDR display.