Making Everything Interactive: New Ways of Interaction in the Ubiquitous Computing Age
In the last few years, the rapid rise of new forms of computing devices has dramatically changed the way users create, consume and interact with data. Portable and ubiquitous computing devices, 3D printers and micro aerial vehicles are just a few examples of this growing trend; their fast market penetration has created an abundance of data that users now have literally at their fingertips. It is clear that the old desktop interaction paradigm (i.e., mouse+keyboard) is rapidly becoming obsolete, and new forms of interaction are required. Likewise, the computational power at our hands has never been greater, and this opens up exciting research possibilities that only a few years ago seemed out of reach. In my research, I am interested in understanding how people will interact with data in the future, and how machines can help users accomplish highly specialized, and often critical, tasks.
In the first part of the talk, I will give an overview of the research group I am currently part of, the Advanced Interactive Technologies (AIT) lab at ETH Zurich, led by Prof. Otmar Hilliges. I will introduce some of the projects conducted within the group, including work on gesture recognition for mobile devices, computational design and micro aerial vehicle control.
I will then introduce a project in which we explored the possibility of detecting in-air gestures around unmodified portable devices, to complement the well-known touch-based interaction paradigm [1, 2]. The project focuses on a novel machine-learning gesture recognition pipeline capable of detecting a variety of gestures, as well as the approximate distance of the hand from the camera, from a single RGB camera input.
Finally, I will conclude my talk with an overview of a project focused on an interface that presents large, unstructured collections of videos, arranged in their original context and ordered in time [3]. With our tool, we extend the focus+context paradigm to a video-collections+context interface by embedding videos captured with mobile devices into a panorama. We built a spatio-temporal index and tools for fast exploration of the space and time of the video collection, helping users navigate otherwise unstructured collections of videos.
Bio
Dr. Fabrizio Pece is currently a postdoctoral researcher (Marie Curie Fellow) at ETH Zurich, working with Prof. Otmar Hilliges in the Advanced Interactive Technologies lab at the Institute of Pervasive Computing. Prior to joining ETH, he was a PhD student at University College London (2010–2014), where he earned his doctoral degree in the Virtual Environments and Computer Graphics group under the supervision of Prof. Jan Kautz. Dr. Pece earned his BSc in Computer Science from Università degli Studi di Roma Tor Vergata, Italy (2008) and his MSc in Vision and Virtual Environments from University College London, UK (Distinction, 2009). Between June and October 2010, he completed an internship at Disney Research Zurich under the supervision of Prof. Jan Kautz and Prof. Wojciech Matusik, working in the area of digital fabrication.