
Tutorials

Tutorials provide a platform for more intensive scientific exchange among researchers interested in a particular topic and serve as a meeting point for the community. They complement the depth-oriented technical sessions by providing participants with broad overviews of emerging fields. A tutorial can be scheduled for 1.5 or 3 hours.

TUTORIALS LIST
Mobile and Cloud Web-Based Graphics and Visualization
Lecturer(s): Haim Levkowitz, University of Massachusetts Lowell, U.S.A.
Estimated Session Time: 6 hours

Know the Rules -- Tutorial on Procedural Modeling
Lecturer(s): Dr. Torsten Ullrich, Fraunhofer Austria Research GmbH & Technische Universität Graz, Austria
Estimated Session Time: 3 hours

Recent Trends in Computer Vision
Lecturer(s): Dr. Giovanni Maria Farinella, University of Catania, Italy
Estimated Session Time: 1.5 hours

Image Quality Assessment for Global Illumination Methods based on Machine Learning
Lecturer(s): Dr. André Bigand, Université du Littoral, Calais, France
Estimated Session Time: 3 hours

Haim Levkowitz
University of Massachusetts Lowell
U.S.A.


Biography of Haim Levkowitz
Haim Levkowitz has been a faculty member of the Computer Science Department at the University of Massachusetts Lowell, in Lowell, MA, USA, since 1989. He is a two-time recipient of a US Fulbright Scholar Award to Brazil (August – December 2012 and August 2004 – January 2005). He was a Visiting Professor at ICMC (Instituto de Ciências Matemáticas e de Computação, the Institute of Mathematics and Computer Sciences) at the University of São Paulo, São Carlos, SP, Brazil (August 2004 – August 2005; August 2012 – August 2013). He co-founded and was Co-Director of the Institute for Visualization and Perception Research (through 2012), and is now Director of the Human-Information Interaction Research Group. He is a world-renowned authority on visualization, perception, color, and their application in data mining and information retrieval. He is the author of "Color Theory and Modeling for Computer Graphics, Visualization, and Multimedia Applications" (Springer 1997) and co-editor of "Perceptual Issues in Visualization" (Springer 1995), as well as many papers on these subjects. He has more than 42 years of experience teaching and lecturing, and has taught many tutorials and short courses in addition to regular academic courses. Beyond his academic career, Professor Levkowitz has had an active entrepreneurial career as Founder or Co-Founder, Chief Technology Officer, Scientific and Strategic Advisor, Director, and venture investor at a number of high-tech startups.

Mobile and Cloud Web-Based Graphics and Visualization

Abstract
Cloud computing is rapidly becoming one of the most prevalent computing platforms. The combination of mobile devices and cloud-based computing has been changing how users consume and use computing resources, and, so far, we have only seen the "tip of the iceberg." Many new technologies have been, and continue to be, introduced for in-the-browser graphics and visualization. With these, the implementation of high-quality Web-based graphics has become a reality. For example, WebGL offers capabilities comparable to the well-known OpenGL in the Web browser. It is now feasible to have high-performance graphics and visualization "in your palm," utilizing a mobile device. And, when necessary, one can use that mobile device as the front-end interface and display while elastically offloading some or all of the graphics "heavy lifting" to a cloud-based platform. We argue that this will become the most common platform for computer graphics and visualization in the not-too-distant future.
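As a minimal illustration of the in-the-browser graphics discussed above, the following TypeScript sketch obtains a WebGL rendering context from an HTML5 canvas element and clears it to a solid color. The element id "view" is a hypothetical assumption, and a browser with WebGL support is assumed; real applications would go on to compile shaders and upload geometry.

    // Minimal WebGL "hello world": acquire a rendering context from an
    // HTML5 canvas element and clear the drawing buffer.
    // Assumes <canvas id="view"></canvas> exists in the page (hypothetical id).
    const canvas = document.getElementById("view") as HTMLCanvasElement | null;
    if (canvas === null) {
      throw new Error("canvas element #view not found");
    }

    const gl = canvas.getContext("webgl");   // WebGL 1.0 context
    if (gl === null) {
      throw new Error("WebGL is not supported by this browser");
    }

    // Match the viewport to the element size and clear to a dark blue.
    gl.viewport(0, 0, canvas.width, canvas.height);
    gl.clearColor(0.1, 0.1, 0.3, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT);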

Keywords
Mobile Computing; Cloud Computing; Computer Graphics; Visual Analytics; Visual Data Mining; Visualization; Web-based Graphics; Web-based Visualization; Visual Computing; Mobile apps; Imaging;

Aims and learning objectives
The goal of this course is to familiarize students with the underlying technologies that make mobile and cloud Web-based graphics and visualization possible, including (but not limited to) cloud-based computing, mobile computing, their combination, HTML5 and the canvas element, the WebGL graphics library, general Web-based graphics and visualization, and Web-based interactive development environments.

Format
This is a 6-hour tutorial, organized according to the following outline:
  • Introduction, motivation, overview, and focus
    In this part we provide an overall introduction, the motivation for discussing these topics, an overview of the topics to be covered and their organization, and an indication of which topics will be emphasized.
  • Introduction to cloud computing
    We provide several definitions of cloud computing. We then continue with a discussion of the various types of cloud computing, including private, public, community, and hybrid cloud deployments. We present the various service types within the cloud computing paradigm, including Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS), and compare and contrast them. We discuss the major tenets of cloud computing, such as commodity hardware and multi-tenancy, and important influencing technologies, such as virtualization. We explore advantages and disadvantages, potential challenges, policy and legal issues, and user sentiments, attitudes, and concerns. We also examine potential privacy and data security issues.
  • Introduction to mobile computing
    We survey the current mobile computing trends, the prevailing providers, operating systems, and devices. We discuss the general process of app development on each one of the prevailing platforms as well as the process of unified development for multi-platform deployment. We describe the process of locating an app in the prevailing app markets.
  • Introduction to mobile and cloud computing
    We describe the tight linkage between mobile apps and their cloud-based "home," which provides extra storage capabilities, additional processing, and synchronization among multiple connected devices.
  • HTML5 overview with focus on the canvas element
    We discuss the major features of the HTML5 standard and its major technologies, including the Document Object Model (DOM), Cascading Style Sheets (CSS3), and JavaScript, and how together they form the basis of the HTML5 specification. We cover the major innovations implemented in HTML5, including (but not limited to) the canvas element, local storage, and audio-visual support. We demonstrate the centrality of these features to providing graphics and visualization in a Web browser (a minimal canvas sketch follows this outline).
  • Web-based 2D graphics libraries and tools
    We discuss various Web-based 2D graphics libraries and tools, and provide examples of their use.
  • Web-based 3D graphics libraries and tools
    We discuss various Web-based 3D graphics libraries and tools, and provide examples of their use.
  • Web-based graphics and visualization: public data
    We survey sources of publicly available data and show examples of how these data are visualized using Web-based tools to provide interactive visualization, analysis, and exploration.
  • Web-based development environments
    We show how the previously-surveyed technologies are utilized to provide Web-based environments that liberate the programmer from many of the typical headaches associated with traditional development environments, including development environment setup, revision control management, dependency management, deployment, and sharing.
  • Putting it all together: interactive mobile- and cloud-based graphics and visualization
    Finally, we put it all together, discussing the design, development, and deployment of graphics and visualization applications that utilize mobile devices as the front-end interface, control, and display, but take advantage of both local and cloud-based processing and storage capabilities to provide the most flexible "best practice" graphics and visualization capabilities. We address current strengths and weaknesses, as well as future challenges and work to be done.
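As referenced in the HTML5 unit above, the following TypeScript sketch shows the canvas 2D API that the surveyed Web-based 2D graphics and visualization libraries build upon: it draws a tiny bar chart directly on a canvas element. The element id "chart" and the data values are illustrative assumptions, not part of the tutorial material.

    // Draw a tiny bar chart with the HTML5 canvas 2D API.
    // Assumes <canvas id="chart" width="300" height="150"></canvas> (hypothetical id).
    const chart = document.getElementById("chart") as HTMLCanvasElement;
    const ctx = chart.getContext("2d");
    if (ctx === null) {
      throw new Error("2D canvas context not available");
    }

    const values = [3, 7, 4, 9, 5];                 // illustrative data to visualize
    const barWidth = chart.width / values.length;
    const maxValue = Math.max(...values);

    ctx.fillStyle = "#4477aa";
    values.forEach((v, i) => {
      const barHeight = (v / maxValue) * chart.height;
      // Canvas y grows downward, so bars are drawn up from the bottom edge.
      ctx.fillRect(i * barWidth + 2, chart.height - barHeight, barWidth - 4, barHeight);
    });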


Contacts
e-mail: grapp.secretariat@insticc.org


Dr. Torsten Ullrich
Fraunhofer Austria Research GmbH & Technische Universität Graz
Austria


Biography of Dr. Torsten Ullrich
Torsten Ullrich studied mathematics with a focus on computer science at the University of Karlsruhe (TH) and received his doctorate in 2011 from Graz University of Technology with his thesis "Reconstructive Geometry" on the subject of reverse engineering. Since 2005 he has been involved in the establishment of the newly formed Fraunhofer Austria Research GmbH. He coordinates research projects in the fields of "Visual Decision Support", "Virtual Engineering", and "Digital Society" and is Deputy Head of the Visual Computing business area in Graz. His research in the fields of generative modeling languages and geometric optimization has received several international awards.

Know the Rules -- Tutorial on Procedural Modeling

Abstract
This tutorial introduces the concepts and techniques of generative modeling. It starts with some introductory examples in the first learning unit to motivate the main idea: to describe a shape using an algorithm. After the explanation of technical terms, the second unit focuses on the technical details of algorithm descriptions, programming languages, grammars, and compiler construction, which play an important role in generative modeling. The purely geometric aspects are covered by the third learning unit, which comprises the concepts of geometric building blocks and advanced modeling operations. Notes on semantic modeling aspects -- i.e. the meaning of a shape -- complete this unit and introduce the inverse problem: what is the perfect generative description for a real object? The answer to this question is discussed in the fourth learning unit, while its application is shown (among other applications of generative and inverse-generative modeling) in the fifth unit. A discussion of open research questions concludes the tutorial.
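To make the main idea concrete (a shape described by an algorithm rather than by explicit geometry), the following TypeScript sketch expands the textbook Koch-curve L-system and interprets the resulting string as a polyline. The rule set and parameters are the standard textbook example, used here only as an illustration.

    // A classical L-system: rewrite the axiom with production rules, then
    // interpret the resulting string as turtle-graphics commands.
    // Koch curve: F -> F+F-F-F+F  (F: draw forward, +/-: turn by 90 degrees)
    const rules: Record<string, string> = { F: "F+F-F-F+F" };

    function expand(axiom: string, iterations: number): string {
      let s = axiom;
      for (let i = 0; i < iterations; i++) {
        s = s.split("").map((c) => rules[c] ?? c).join("");
      }
      return s;
    }

    // Interpret the expanded string as a 2D polyline (turtle graphics).
    function interpret(program: string, step = 1): Array<[number, number]> {
      let x = 0, y = 0, angle = 0;
      const points: Array<[number, number]> = [[x, y]];
      for (const c of program) {
        if (c === "F") {
          x += step * Math.cos(angle);
          y += step * Math.sin(angle);
          points.push([x, y]);
        } else if (c === "+") angle += Math.PI / 2;
        else if (c === "-") angle -= Math.PI / 2;
      }
      return points;
    }

    const curve = interpret(expand("F", 3));   // three rewriting steps
    console.log(`Koch curve polyline with ${curve.length} vertices`);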

Keywords
Geometry Processing; Generative, Procedural Modeling; Inverse Modeling; Modeling Applications; Shape Description; Language Design

Aims and learning objectives
The tutorial enables the attendees to take an active part in future research on generative modeling.

Target Audience
The target audience comprises students and interested individuals from computer science and computer graphics, as well as persons interested in content creation and design.

Prerequisite Knowledge of Audience
The assumed background knowledge comprises the basics of computer science (including algorithm design and the principles of programming languages) as well as a general knowledge of computer graphics.

Detailed Outline
  • 1) Introduction to "Generative Modeling"
    • a) Motivation: Generative modeling has been developed in order to generate highly complex objects based on a set of formal construction rules. This introduction shows impressive examples of generative models and summarizes the state-of-the-art in generative and procedural modeling.
    • b) Definitions: The research area of generative modeling comprises parametric, procedural, and generative techniques based on L-systems, split grammars, scripting languages, etc. In this learning unit, all the terms and definitions are introduced in order to clarify the topic "Generative Modeling".
  • 2) Languages & Grammars
    • a) Scripting Languages for Generative Modeling: Different tools have been developed for many domains, such as man-made objects (e.g. architecture) or organic structures (e.g. plants). This learning unit contains an overview of current techniques and systems as well as their differences in terms of shape representation, modeling strategy, and modeling language.
    • b) Language Processing & Compiler Construction: The evaluation of procedural descriptions typically relies on techniques from the description of formal languages and compiler construction. We briefly review the necessary basics, such as compiler front ends and back ends, and present some state-of-the-art systems for procedural modeling.
  • 3) Modeling by Programming
    • a) Building Blocks & Elementary Data Structures: 3D objects that consist of organized structures and repetitive forms are well suited for procedural description, e.g. by the combination of building blocks or by using shape grammars. We discuss the problems connected with the definition of a shape: what is a suitable interface for a building block? Especially within a pipeline of different tools, this question is gaining in importance.
    • b) Advanced Techniques: Besides classical geometric operations -- such as constructive solid geometry -- procedural and functional descriptions offer novel, additional possibilities to describe a shape. In this unit we present advanced techniques that build on existing generative descriptions.
    • c) Semantic Modeling: In some application domains, e.g. in the context of digital libraries, semantic metadata plays an important role. In this unit, non-geometric aspects like classification schemes used in architecture, civil engineering, archaeology, etc., which can be incorporated into a generative description, are discussed.
  • 4) Inverse Modeling
    • a) Problem Description: In order to use the full potential of generative techniques, the inverse problem has to be solved; i.e., what is the best generative description of one or several given instances of an object class? This problem can be interpreted in several ways (a toy parameter-fitting sketch follows this outline).
    • b) Overview of Current Approaches: An overview of current approaches is presented. Selected techniques are discussed in detail. Special attention is given to related topics such as pattern / symmetry detection, machine learning, shape recognition, and object retrieval.
  • 5) Applications
    • a) Design: Real-world applications of generative modeling are presented in the following learning units. First, we demonstrate the effectiveness of procedural shape modeling for mass customization of consumer products. A generative description composed of a few well-defined procedures can generate a large variety of shapes. Furthermore, it covers most of the design space defined by an existing collection of designs -- in this case wedding rings.
    • b) Semantic Enrichment: The second application uses inverse modeling techniques to identify procedural shapes in a repository of 3D meshes. Having identified a mesh that is similar to a shape description, it is possible to transfer semantic information from the shape description to the polygonal mesh. This semantic enrichment of 3D artifacts, e.g. in the context of digital libraries, is necessary for markup, indexing, and retrieval.
    • c) Form Follows Function: Last but not least, we present an approach using generative modeling techniques and numerical optimization to create energy-efficient buildings. The free parameters of the generative description of a building are calculated in order to minimize an energy or cost function.
  • 6) Open Questions
    In the last part of this tutorial, we will outline some open research questions: for example, while many domain-specific tools exist, a general abstraction of a procedural description of a shape is still not clear. Further topics include the problem of a suitable presentation, the systematic analysis of shape families, and easy-to-use interfaces for non-computer science users -- to name a few.
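As referenced in unit 4a above, the following TypeScript sketch illustrates the inverse problem in miniature: a generative description with two free parameters (the radius and vertex count of a regular polygon) is fitted to "measured" points by minimizing a cost function over a small parameter grid. The generator, the cost function, and all numbers are illustrative assumptions, not the techniques surveyed in the tutorial.

    // Toy inverse modeling: recover the parameters of a generative
    // description (a regular n-gon of given radius) from "measured" points
    // by minimizing a cost function over a small parameter grid.
    type Point = [number, number];

    function regularPolygon(radius: number, n: number): Point[] {
      const pts: Point[] = [];
      for (let i = 0; i < n; i++) {
        const a = (2 * Math.PI * i) / n;
        pts.push([radius * Math.cos(a), radius * Math.sin(a)]);
      }
      return pts;
    }

    // Cost: squared distance of each measured point to its nearest generated point.
    function cost(measured: Point[], generated: Point[]): number {
      return measured.reduce((sum, [mx, my]) => {
        const d2 = Math.min(...generated.map(([gx, gy]) => (mx - gx) ** 2 + (my - gy) ** 2));
        return sum + d2;
      }, 0);
    }

    const measured = regularPolygon(2.5, 7);   // stand-in for scanned/measured data

    let best = { radius: 0, n: 0, cost: Infinity };
    for (let n = 3; n <= 12; n++) {
      for (let radius = 0.5; radius <= 5.0; radius += 0.25) {
        const c = cost(measured, regularPolygon(radius, n));
        if (c < best.cost) best = { radius, n, cost: c };
      }
    }
    console.log(`recovered parameters: radius=${best.radius}, n=${best.n}`);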

Contacts
e-mail: grapp.secretariat@insticc.org


Dr. Giovanni Maria Farinella
University of Catania
Italy


Biography of Dr. Giovanni Maria Farinella
Giovanni Maria Farinella received the M.S. degree in Computer Science (egregia cum laude) from the University of Catania, Italy, in 2004, and the Ph.D. degree in Computer Science in 2008. He joined the Image Processing Laboratory (IPLAB) at the Department of Mathematics and Computer Science, University of Catania, in 2008, as a Contract Researcher. He is an Adjunct Professor of Computer Science at the University of Catania (since 2008) and a Contract Professor of Computer Vision at the Academy of Arts of Catania (since 2004). His research interests lie in the fields of computer vision, pattern recognition and machine learning. He has edited four volumes and coauthored more than 60 papers in international journals, conference proceedings and book chapters. He is a co-inventor of four international patents. He serves as a reviewer and programme committee member for major international journals and conferences. He founded (in 2006) and currently directs the International Computer Vision Summer School (ICVSS).

Recent Trends in Computer Vision

Abstract
In the last decade, consumer imaging devices such as camcorders, digital cameras, smartphones and tablets have become widespread. The increase in their computational performance, combined with higher storage capacity, has made it possible to design and implement advanced imaging systems that can automatically process visual data in order to understand the content of the observed scenes. In the coming years, wearable imaging devices that acquire, stream, and log video of our daily life will become pervasive. This new and exciting imaging domain, in which the scene is observed from a first-person point of view, poses new challenges to the research community and offers the opportunity to build new applications. In this tutorial I will give an overview of the recent trends in Computer Vision in the wearable domain. Challenges, applications, and algorithms will be discussed with reference to the state-of-the-art literature.

Keywords
Computer Vision, Wearable Imaging Devices

Aims and learning objectives
The objective of this tutorial is to provide an overview of some of the latest advances in Computer Vision concerning Wearable Imaging Devices.

Target Audience
Ph.D. students, post-docs, young researchers (both academic and industrial), senior researchers (both academic and industrial), and academic/industrial professionals.

Prerequisite Knowledge of Audience
Basic knowledge of Image Processing, Computer Vision and Pattern Recognition.

Detailed Outline
  • Introduction and Motivation
  • Open Challenges
  • State-of-the-Art Algorithms

Contacts
e-mail: visapp.secretariat@insticc.org


Dr. André Bigand
Université du Littoral, Calais
France


Biography of Dr. André Bigand
André Bigand (IEEE Member) received the Ph.D. degree in 1993 from the University Paris 6 and the "HDR" degree in 2001 from the Université du Littoral of Calais (ULCO, France). He has been a senior associate professor at ULCO since 1993. His current research interests include uncertainty modeling and machine learning with applications to image processing and synthesis (particularly noise modeling and filtering). He is currently with the LISIC Laboratory (ULCO). He is the author or co-author of 120 scientific papers in international journals and books, as well as communications to conferences with reviewing committees. He has 33 years of experience teaching and lecturing. He is a visiting professor at UL (Lebanese University), where he teaches "machine learning and pattern recognition" in the research master STIP.

Image Quality Assessment for Global Illumination Methods based on Machine Learning

Abstract
Unbiased global illumination methods based on stochastic techniques provide photorealistic images. They are, however, prone to noise that can only be reduced by increasing the number of computed samples. The problem of finding the number of samples required to ensure that most observers cannot perceive any noise is still open, since the ideal image is unknown. In this tutorial we address this problem with a focus on the visual perception of noise. Rather than using known perceptual models, we investigate the use of machine learning approaches classically used in the Artificial Intelligence area as full-reference and reduced-reference metrics. We propose to use such approaches to build a machine learning model based on learning machines such as SVM and RVM, in order to predict which images exhibit perceptual noise. We also investigate the use of soft computing approaches based on fuzzy sets as a no-reference metric. Learning is performed on an example database built from experiments on noise perception with human users. These models can then be used in any progressive stochastic global illumination method to find the visual convergence threshold of different parts of any image. This tutorial is structured as a half-day presentation (3 hours). The goals of this course are to make students familiar with the underlying techniques that make this possible (machine learning, soft computing).
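As a minimal sketch of the learning pipeline described above, the code below trains a linear soft-margin SVM by stochastic sub-gradient descent on the hinge loss, standing in for the kernel SVM/RVM models discussed in the tutorial, and then predicts whether noise would be visible in a new image. The per-image features, labels, and data values are illustrative assumptions, not the actual noise features or perception database used in the tutorial.

    // Minimal linear soft-margin SVM trained with stochastic sub-gradient
    // descent on the regularized hinge loss (Pegasos-style updates).
    // Features are hypothetical per-image noise statistics; labels are
    // +1 (noise visible to observers) or -1 (noise not visible).
    type Sample = { features: number[]; label: 1 | -1 };

    function trainLinearSvm(data: Sample[], lambda = 0.01, epochs = 200): number[] {
      const dim = data[0].features.length;
      const w = new Array<number>(dim + 1).fill(0);     // last entry acts as the bias
      let t = 0;
      for (let epoch = 0; epoch < epochs; epoch++) {
        for (const { features, label } of data) {
          t++;
          const eta = 1 / (lambda * t);                 // decreasing step size
          const x = [...features, 1];                   // append constant bias feature
          const margin = label * x.reduce((s, xi, i) => s + xi * w[i], 0);
          for (let i = 0; i < w.length; i++) {
            // Sub-gradient of lambda/2*|w|^2 + max(0, 1 - y*<w, x>)
            w[i] -= eta * (lambda * w[i] - (margin < 1 ? label * x[i] : 0));
          }
        }
      }
      return w;
    }

    function predictNoiseVisible(w: number[], features: number[]): boolean {
      const x = [...features, 1];
      return x.reduce((s, xi, i) => s + xi * w[i], 0) > 0;
    }

    // Illustrative toy data: [mean block variance, high-frequency energy].
    const trainingSet: Sample[] = [
      { features: [0.9, 0.8], label: 1 },  { features: [0.8, 0.7], label: 1 },
      { features: [0.1, 0.2], label: -1 }, { features: [0.2, 0.1], label: -1 },
    ];
    const w = trainLinearSvm(trainingSet);
    console.log(predictNoiseVisible(w, [0.85, 0.75]));  // prints true for this toy data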

Keywords
Computer-generated Images; Quality Metrics; Machine Learning; Soft Computing.

Detailed Outline
  • 10 minutes: Introduction and Welcome
  • 30 minutes: The Ray Tracing Algorithm
  • 40 minutes: Full-reference metric
  • 10 minutes: Coffee break
  • 45 minutes: No-reference metric
  • 45 minutes: Reduced-reference metric


Contacts
e-mail: grapp.secretariat@insticc.org
