
Tutorials

The role of the tutorials is to provide a platform for a more intensive scientific exchange amongst researchers interested in a particular topic and as a meeting point for the community. Tutorials complement the depth-oriented technical sessions by providing participants with broad overviews of emerging fields. A tutorial can be scheduled for 1.5 or 3 hours.

TUTORIALS LIST

Tutorial on On the Turning Away: Enhancing Stroke Survivors Rehabilitation with Virtual Reality  (VISIGRAPP)
Instructors: Bernardo Marques, Beatriz Sousa Santos and Sérgio Oliveira

Tutorial on Egocentric Vision: Exploring User-Centric Perspectives  (VISIGRAPP)
Instructor: Francesco Ragusa



Tutorial on
On the Turning Away: Enhancing Stroke Survivors Rehabilitation with Virtual Reality


Instructors

Bernardo Marques
Universidade de Aveiro
Portugal
 
Brief Bio
Bernardo is an Assistant Professor at the Department of Electronics, Telecommunications and Informatics (DETI) and a Researcher at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA). His interests include Human-Centered Computing, with particular focus on eXtended Reality, 3D User Interfaces, Computer-Supported Cooperative Work, User Experience, Data and Information Visualization, among others.
Beatriz Sousa Santos
University of Aveiro
Portugal
 
Brief Bio
Beatriz is an Associate Professor with Habilitation at the Department of Electronics, Telecommunications and Informatics (DETI), and a Researcher at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro (UA). Her interests include Human-Centered Computing, with particular focus on eXtended Reality, 3D User Interfaces, Computer Graphics, Data and Information Visualization, among others.
Sérgio Oliveira
University of Aveiro
Portugal
 
Brief Bio
Sérgio is a PhD Student working at the Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Aveiro (UA). His interests include human-centered technologies, with a focus on computer-supported cooperative work, computer graphics, extended reality, as well as rehabilitation scenarios.
Abstract

Rehabilitation stands as a pivotal component in the journey of stroke survivors, serving as a beacon of hope and restoration in their lives. Following a stroke, individuals often face physical and cognitive challenges, ranging from impaired mobility to difficulties in speech and memory. The journey towards recovery is arduous, requiring immense dedication and support from both the survivors and their caregivers. Hence, rehabilitation offers the promise of regained independence and improved quality of life. However, traditional methods of rehabilitation are not without their constraints. Conventional therapies often present limitations in terms of engagement, personalization, and accessibility. Many stroke survivors find themselves navigating repetitive exercises that may lack the necessary stimulation to sustain motivation and progress. To help with some of the existing challenges, Virtual Reality (VR) offers a realm of possibilities to complement traditional rehabilitation practices, providing immersive, interactive experiences tailored to the needs of stroke survivors. By transporting individuals to virtual environments where they can engage in therapeutic activities, VR has the potential to reignite motivation and drive, fostering a sense of empowerment and agency in the rehabilitation process. Beyond its impact on survivors, VR holds promise for healthcare professionals as well, opening new avenues for assessment, treatment, and data collection, and offering valuable insights into the progress and preferences of survivors, enabling more targeted interventions and personalized care plans. This tutorial aims to identify the current state of VR research, as well as the gaps beyond technological aspects, by focusing on human factors related to rehabilitation scenarios and how these may inform future research directions.

Keywords

Stroke, Rehabilitation, Virtual Reality, Human-Centered Design, Field Studies.

Aims and Learning Objectives


This tutorial will present essential concepts associated with the use of Virtual Reality (VR) technologies for rehabilitation scenarios from a Human-Centered Design (HCD) perspective. To help achieve this vision, various research works conducted by the authors in recent years, in collaboration with a renowned medical rehabilitation center and together with stroke survivors and healthcare professionals, will be described. The tutorial will end with a call to action, illustrating various important topics that should be addressed to increase the level of maturity of the field. Finally, we intend to have a period for discussion, in which the attendees may express their questions and opinions, including interesting topics for future research.


Target Audience

This is an introductory tutorial. Persons of all levels are welcome to participate. It is open to anyone who has an interest in learning more about stroke rehabilitation supported by VR technologies. Naturally, we encourage everyone working on related topics to join and share their experiences and points of view. This, in turn, can be used to increase the awareness of the research community.

Prerequisite Knowledge of Audience

None required. As an introductory tutorial open to participants of all levels, no prior experience with VR technologies or stroke rehabilitation is assumed.

Detailed Outline

1. Introduction to Stroke and Rehabilitation. Expected duration: 10 min.
2. The Role of Digital Realities. Expected duration: 15 min.
3. Presentation of some examples from our own work. Expected duration: 30 min.
4. Road map of important research actions. Expected duration: 10 min.
5. Final remarks. Expected duration: 5 min.
6. Discussion with participants. Expected duration: 20 min.

Secretariat Contacts
e-mail: visigrapp.secretariat@insticc.org

Tutorial on
Egocentric Vision: Exploring User-Centric Perspectives


Instructor

Francesco Ragusa
University of Catania
Italy
 
Brief Bio
Francesco Ragusa is a Research Fellow at the University of Catania. He has been a member of the IPLAB (University of Catania) research group since 2015. He completed an Industrial Doctorate in Computer Science in 2021. During his PhD studies, he spent a period as a Research Student at the University of Hertfordshire, UK. He received his master's degree in computer science (cum laude) in 2017 from the University of Catania. Francesco has authored one patent and more than 10 papers in international journals and international conference proceedings. He serves as a reviewer for several international conferences in the fields of computer vision and multimedia, such as CVPR, ECCV, BMVC, WACV, ACM Multimedia, ICPR, and ICIAP, and for international journals, including TPAMI, Pattern Recognition Letters, and IET Computer Vision. Francesco Ragusa is a member of IEEE, CVF, and CVPL. He has been involved in different research projects and has focused on the issue of human-object interaction anticipation from egocentric videos as the key to analyzing and understanding human behavior in industrial workplaces. He has been co-founder and CEO of NEXT VISION s.r.l., an academic spin-off of the University of Catania, since 2021. His research interests concern Computer Vision, Pattern Recognition, and Machine Learning, with a focus on First Person Vision.
Abstract

Wearable devices with integrated cameras and computing capabilities are gaining significant attention from both the market and society. With the increasing availability of commercial devices and numerous companies announcing the release of new products, interest is on the rise. The main attraction of wearable devices lies in their mobility and their ability to facilitate user-machine interaction through Augmented Reality. These features make wearable devices an ideal platform for developing intelligent assistants that can support and enhance human abilities, with Artificial Intelligence and Computer Vision playing crucial roles.

Unlike traditional computer vision (known as "third-person vision"), which analyzes images from a static viewpoint, first-person (egocentric) vision assumes that images are captured from the user's perspective, providing privileged information about the user's activities and how they perceive and interact with the world. Visual data acquired from wearable cameras typically offers valuable insights into users, their intentions, and their interactions with the environment.

This tutorial will explore the challenges and opportunities presented by first-person (egocentric) vision, covering its historical background and seminal works, presenting key technological tools and building blocks, and discussing various applications.


Keywords

Wearable devices, first person vision, egocentric vision, augmented reality, visual localization, action recognition, action anticipation, human-object interaction, procedural assistance

Aims and Learning Objectives

The participants will understand the main advantages of first person (egocentric) vision over third person vision for analyzing the user's behavior, building personalized applications, and predicting future events. Specifically, the participants will learn about: 1) the main differences between third person and first person (egocentric) vision, including the way in which the data is collected and processed; 2) the devices which can be used to collect data and provide services to the users; 3) the algorithms which can be used to manage first person visual data, for instance to perform localization, indexing, object detection, action recognition, human-object interaction detection, and the prediction of future events.
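To give a flavor of the "prediction of future events" task mentioned above: action anticipation can, in its simplest form, be framed as predicting the most likely next action given the action currently observed in the egocentric stream. The sketch below is not part of the tutorial materials; it is a toy first-order frequency model over hypothetical action labels, intended only to make the problem framing concrete (real systems operate on video features with learned models).

```python
from collections import Counter, defaultdict

def train_transitions(sequences):
    """Count observed action-to-action transitions across training sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return counts

def anticipate(counts, current_action):
    """Predict the most frequently observed next action, or None if unseen."""
    successors = counts.get(current_action)
    if not successors:
        return None
    return successors.most_common(1)[0][0]

# Hypothetical action labels from an industrial assembly scenario.
demo_sequences = [
    ["take screwdriver", "tighten screw", "put screwdriver"],
    ["take screwdriver", "tighten screw", "take bolt"],
]
model = train_transitions(demo_sequences)
print(anticipate(model, "take screwdriver"))  # tighten screw
```

In practice, the anticipation models discussed in the tutorial condition on visual evidence (hands, objects, scene context) rather than symbolic labels, but the input-output structure is the same: past observations in, a distribution over future actions out.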

Target Audience

First year PhD students, graduate students, researchers, practitioners.

Prerequisite Knowledge of Audience

Fundamentals of Computer Vision and Machine Learning (including Deep Learning)

Detailed Outline

The tutorial is divided into two parts and will cover the following topics:
Part I: History and motivation
• Agenda of the tutorial;
• Definitions, motivations, history and research trends of First Person (egocentric) Vision;
• Seminal works in First Person (Egocentric) Vision;
• Differences between Third Person and First Person Vision;
• First Person Vision datasets;
• Wearable devices to acquire/process first person visual data;
• Main research trends in First Person (Egocentric) Vision;
Part II: Fundamental tasks for first person vision systems:
• Localization;
• Hand/Object detection;
• Attention;
• Action/Activity recognition;
• Action anticipation;
• Dual-Agent Language Assistance;
• Industrial Applications;
The tutorial will cover the main technological tools (devices and algorithms) which can be used to build first person vision applications, discuss challenges and open problems, and give conclusions and insights for research in the field.

