You Can Touch This: Eleven Years and 258218 Images of Objects

Figure 1: All objects touched by Alberto Frigo in January 2004, 2009 and 2014. Every line shows the images of the touched objects for one day. Please use the magnifying functionality of your PDF reader to take a closer look at the photos.

Nina Runge Digital Media Lab University of Bremen, TZI Bremen, Germany [email protected]

Johannes Schöning Expertise Centre for Digital Media Hasselt University - tUL iMinds, Diepenbeek, Belgium [email protected]

Rainer Malaka Digital Media Lab University of Bremen, TZI Bremen, Germany [email protected]

Alberto Frigo Södertörn University, Media and Communication Stockholm, Sweden [email protected]

Abstract

Touch has become a central input modality for a wide variety of interactive devices; most of our mobile devices are operated using touch. In addition to interacting with digital artifacts, people touch and interact with many other objects in their daily lives. We provide a unique photo dataset containing all objects touched by one person over the last 11 years. All photos were contributed by Alberto Frigo, who was involved early on in the “Quantified Self” movement and takes a photo of every object he touches with his dominant hand. We analyzed the 258,218 images with respect to the types of objects, their distribution, and related activities.

Author Keywords
Touch Interaction; Tangible Interaction; Life Logging; Quantified Self

ACM Classification Keywords
H.5.2. [User Interfaces]: Haptic I/O

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. CHI’16 Extended Abstracts, May 07–12, 2016, San Jose, CA, USA. Copyright is held by the owner/author(s). Publication rights licensed to ACM. ACM 978-1-4503-4082-3/16/05 ...$15.00 DOI: http://dx.doi.org/10.1145/2851581.2892575

Introduction & Context

Touch interaction is heavily studied in the area of human-computer interaction (HCI). From research in tangible computing [8, 10] to research enriching touch as an input modality [3, 20], the topic has gained growing importance in the field. In addition to using touch to interact with the digital world, for example via a computer mouse or a smartphone,

people also touch, grasp, and manipulate a wide range of objects every day. In this paper we distinguish between digital objects that are used to interact with the digital world, and analog objects that have no link to the digital world (yet).

Figure 2: The installation “Images of the artifact used by the main hand” by Alberto Frigo as part of the exhibit “Hamster – Hipster – Handy” in the Museum Angewandte Kunst, Frankfurt, Germany, 2015. Top: Overview of the whole dataset. Bottom: Visitor inspecting the single images with a magnifying glass.

Humans have developed incredible skills in using objects to manipulate and interact with the environment [7]. Even though we have a good understanding of how people touch and interact with digital devices [1], we only have a rough picture of the kinds and number of objects people touch every day. Nevertheless, researchers in the domain of HCI have begun to exploit the capabilities of touch beyond simple fingertip contact, for example with adaptive surfaces like inForm [6], to improve interaction with the digital world. Sato et al. [17] show a way to interact with the digital world through everyday objects. Devices that can change shape [15], interfaces with dynamically changeable buttons [9], and deformable mobile devices [14] allow new ways of interaction. However, it is not yet well explored in HCI how people interact with analog objects (like pens, cups or keys) or with digital objects (like smartphones or laptops) on an everyday basis. How has the set of objects we touch changed over the last decade? What are the similarities and differences between analog and digital objects? What activities do people perform while interacting with objects?

Research Context

The current dataset was collected by Frigo, who was involved early on in the “life-logging” (also called “Quantified Self”) movement, which incorporates technology into data acquisition on aspects of a person’s daily life [13, 19]. With the recent rise of consumer hardware for life-logging (e.g.

fitness trackers such as Fitbit or the Apple Watch) the life-logging community has become very popular [4]. Most people use the data from these fitness trackers as an indicator of their health and eating behaviours, or to measure their daily activity levels [11, 18]. In general, most self-loggers want to improve their health, well-being and time-management, and many people start self-tracking because they want to change a certain behaviour. It is interesting that this rather simple usage of tracking devices is indeed already helping people change their lives [16]. Nevertheless, insufficient analysis and visualization tools and platforms are a common reason to stop collecting data [4]. Besides fitness trackers, which basically track the daily activities of their users with acceleration sensors, other devices use photos to document the life of a user (e.g. the Narrative Clip or the Autographer). Both are body-worn cameras that take photos at regular intervals (around every 30 seconds) over the whole day, from an egocentric perspective. The amount of data collected by these devices is huge, and the ability to analyze this data might be a crucial point for their success [12]. For example, the OrCam uses OCR techniques on the images to support disabled or visually impaired people. Furthermore, analyzing the amount and duration of activities might be interesting for users to help them be more efficient.

Dataset

In this paper we address these questions by contributing and analyzing a unique dataset of 258,218 photos of objects touched by a single person over 11 years. Alberto Frigo has photographed every object he has touched since the beginning of 2004. Frigo (http://2004-2040.com/) is a conceptual media artist born in Asiago, Italy. He was 24 years old when he started


Figure 3: Example images for the object categories of 8,000 manually tagged images. From top to bottom: spoon, spoon (stirring), play, mobile (calling), mobile (texting/finger input), tool (for refurbishing), diving goggles, toothbrush and credit card.

the project, and he plans to continue until 2040. Frigo captures an image with a small photo camera every time he grasps a consistent and independent object with his dominant (right) hand. Since September 2003, Frigo has been taking photos of every object he touches, with some constraints: he photographs an object only if it is graspable, consistent and independent, e.g. cups, pens or telephones, but not clothes or other people he touches. He takes the photo only once he touches the object. If the same object is touched several times consecutively, he takes only one photo; every time he touches a different object he takes a new photo. If he grasps a different object and then touches the original object again, he photographs it again. For example, when he alternates between a spoon and a cup, he takes a photo every time he returns to the cup after the spoon, but does not take several photos if he simply places the cup on the table for a while. Therefore, we cannot directly infer, for example, how many times he touches a knife during dinner, but we do get a detailed view of the objects he uses throughout his daily life.

Frigo also endeavors to capture the way he interacts with these objects. For example, when the object is a camera being used to take a photo, the image shows his face together with the camera. Many objects can be used in very different ways: a spoon can be used to eat or to stir something, so the dataset contains images of spoons as stirring tools as well as eating tools. This dataset therefore offers a unique view on the way people touch and interact with objects: both analog objects, like cutlery or a toothbrush, and digital objects, like smartphones and computers. We analyze the dataset and show how Frigo’s interaction with objects, the types of objects, and the number of objects changed over the last 11 years using an auto-ethnographic approach. Furthermore, we show the

type and amount of activities executed with different objects and how they are distributed over the day.

After an experimental phase with various wearable devices for collecting data, ranging from gloves with embedded cameras to hand-mounted cameras to a full-body suit, Frigo decided that taking pictures with a simple camera was the best and most reliable way for him to document the objects. Multiple Aiptek PenCams (2M, SD) of the same model, with a resolution of 128x96 pixels, were used for data collection. This camera model proved to be reliable, lightweight and easy for Frigo to carry. Frigo’s motivations for this project were to organize his life, to communicate with other people, e.g. the research community, and to document his own life through this collection of photos of objects. Frigo: “In this respect, this part of the project is meant to depict the activities of the core of his life, generating a DNA-like code where each photographed object is like a letter of an alphabet to be analyzed and interpreted.”

The way Frigo has presented his data in exhibitions all over the world is one approach to communicating through these images; see Figure 2 for an example exhibition. The images are of very low resolution and are each printed at the size of a postage stamp. One month is shown as one rectangle, and every row in the rectangle is one day (see Figure 1 for three example months). As the images are quite small, visitors must come very close to inspect them; a magnifying glass is provided to assist with this.

Limitation of the Dataset

We admit that this dataset is highly biased, as it is collected by a single contributor. Nevertheless, we believe that our systematic overview can help to shape studies on similar data collections that will be available in the future, due

to the rise of life-logging activities and wearable devices. Structuring this dataset and making it accessible for other researchers in the community can trigger further research. In addition, we believe that alt.CHI is the right venue for such an endeavor, as such a unique dataset can go unrecognized in the standard review process we have in our community, but can still have a substantial impact on the HCI community. Our analysis shows that the number of images significantly increased over the last decade, and that this increase is mainly due to the proliferation of digital devices. Furthermore, we annotated 8,000 images regarding the objects shown in them and grouped them into 15 activity categories. We give an overview of these activities and how they are distributed over the day.

Table 1: Overview of the mean (M) and standard deviation (SD) of images of touched objects per day for each year.

Year | M     | SD
2004 | 56.67 | 13.29
2005 | 53.91 | 11.82
2006 | 55.87 | 11.76
2007 | 59.22 | 12.09
2008 | 61.08 | 12.69
2009 | 59.09 | 13.52
2010 | 65.88 | 13.57
2011 | 75.57 | 14.30
2012 | 75.76 | 14.28
2013 | 73.84 | 17.40
2014 | 70.31 | 16.02

Analysis

The dataset contains many images over a long time period and is very interesting from an HCI perspective. We will show how the number of images is distributed and has changed over time. We will then present the results of a deeper analysis: sorting the images by whether they contain analog or digital objects, and analyzing 8,000 randomly selected images with respect to the types of objects and activities they show.

Quantity

We used Frigo’s images from January 1st 2004 until December 31st 2014. In these 11 years, he took 258,218 images of objects. We do not take into account the early experimental phase of the project in late 2003. On average, he photographed 64 object images per day (M=64.48, SD=16.00, MAX=108, MIN=19 objects) and 1,956 objects per month (M=1,956.20, SD=286.07, MAX=2,573, MIN=1,347 objects). The number of objects increased over the years: in 2004 he photographed 56 objects per day on average, and 70 per day in 2014 (see Table 1).
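The per-day and per-year aggregation behind Table 1 can be sketched in a few lines of Python. The function names and the flat list of per-image dates are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import defaultdict
from datetime import date
from statistics import mean, stdev

def per_day_counts(dates):
    """Count how many object photos fall on each calendar day."""
    counts = defaultdict(int)
    for d in dates:
        counts[d] += 1
    return counts

def yearly_stats(day_counts):
    """Mean and sample SD of the daily counts, grouped by year (cf. Table 1)."""
    by_year = defaultdict(list)
    for day, n in day_counts.items():
        by_year[day.year].append(n)
    return {year: (mean(v), stdev(v)) for year, v in by_year.items()}
```

Feeding in one `date` per image yields a dictionary keyed by year, each value being the (M, SD) pair reported in Table 1.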

A one-way ANOVA with repeated measures showed that the differences between the years are highly significant (F(10, 3640) = 131.64, p < .001).
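For reference, the F statistic of a plain one-way ANOVA (ignoring the repeated-measures correction used in the paper's analysis) can be computed in pure Python. This is a sketch of the general formula, not the authors' statistical code.

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic of a one-way ANOVA over a list of sample groups
    (here: one group of daily image counts per year)."""
    k = len(groups)                  # number of groups (years)
    n = sum(len(g) for g in groups)  # total observations (days)
    grand = mean(x for g in groups for x in g)
    # between-group sum of squares, df = k - 1
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # within-group sum of squares, df = n - k
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

With 11 groups of daily counts this yields an F statistic with 10 and n-11 degrees of freedom, matching the F(10, 3640) shape reported above.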