Visualising touch errors

Last year (I’ve been meaning to write this for a while), Daniel Buschek spent a few months in the IDI group as an intern. His work here resulted in an interesting study looking at how touch performance on mobile phones varies between users and between devices. Basically, we were interested in seeing whether an algorithm trained to improve a user’s touch performance on one phone could be transferred to another phone. To find the answer, you’ll have to read the paper…

During the data collection phase of the project, Daniel produced some little smartphone apps for visualising touch performance. For example, on a touch-screen smartphone, navigate to:

http://dcs.gla.ac.uk/taps/demos/tapping2/

Hold your phone in one hand and repeatedly touch the cross-hair targets with the thumb of that hand. Once you’ve touched 50 targets the screen will change. If you’re going to do it, do it now, before you read on!


Ok, so now you’ll see something that looks a bit like this:
[photo: my touch traces, drifting from the green starting points to the red end points]

This is a visualisation of touch accuracy (or the lack thereof) – the lines moving to the right (from green to red) show that I typically touch some distance to the right of where I’m aiming (a common pattern when holding the phone in the right hand and touching with the right thumb).

Here’s how it works: at the start, the app chooses five targets (the green points). These are presented to you in a random order. When you touch, the app records where you touched and replaces the original target location with the position where you actually touched – that is, there are still five targets, but one of them has moved. Once you’ve seen (and moved) all five targets, the app loops through them again, showing you the new targets and again replacing each with the new touch. Because most people have a fairly consistent error between where they are aiming and where they actually touch, plotting all of the targets results in a gradual drift in one direction, finishing (after 50 touches, 10 for each trace) with the red points. Note that not everyone touches this consistently (something discussed in Daniel’s paper).
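For the curious, here’s a minimal sketch of that target-replacement loop in TypeScript. None of this is Daniel’s actual code – the names (`TapDemo`, `recordTouch`, and so on) and the structure are just assumptions to make the logic concrete:

```typescript
// Sketch of the demo's target-replacement loop (illustrative names, not Daniel's code).
interface Point { x: number; y: number; }

class TapDemo {
  private targets: Point[];          // the five current target positions
  private traces: Point[][];         // touch history per target, for the final plot
  private order: number[] = [];      // random presentation order for the current round
  private touches = 0;
  private readonly maxTouches = 50;  // 10 touches per trace

  constructor(initialTargets: Point[]) {
    this.targets = initialTargets.map(p => ({ ...p }));
    this.traces = initialTargets.map(p => [{ ...p }]); // the green starting points
  }

  // Pick which target to show next; reshuffle once all five have been shown.
  nextTargetIndex(): number {
    if (this.order.length === 0) {
      this.order = this.shuffle(this.targets.map((_, i) => i));
    }
    return this.order.pop()!;
  }

  // Record a touch: the touched position *becomes* the new target, so a
  // consistent aiming error makes each trace drift steadily in one direction.
  recordTouch(targetIndex: number, touch: Point): boolean {
    this.targets[targetIndex] = { ...touch };
    this.traces[targetIndex].push({ ...touch });
    this.touches += 1;
    return this.touches >= this.maxTouches; // true => time to draw the traces
  }

  private shuffle(a: number[]): number[] {
    for (let i = a.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [a[i], a[j]] = [a[j], a[i]];
    }
    return a;
  }
}
```

Drawing each trace from its first (green) point to its last (red) point gives exactly the kind of drift you can see in the photo above.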

To demonstrate how the errors vary, here’s another one I did, this time with the left hand, where the offset is in a completely different direction:

[photo: left-hand touch traces drifting in a different direction]

One particularly odd feature (for me, anyway) is that I know I’m touching with errors, but I don’t seem to be able to correct them! The fact that the errors are so different for left-hand and right-hand use points to a problem in designing methods to correct for them – we need to know which hand the user is using.
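To make that concrete, here’s a deliberately naive sketch (again in TypeScript, reusing the `Point` type from the sketch above) of what a hand-dependent correction might look like: subtract a per-hand mean error vector from each touch. The offset numbers are made up for illustration, and the models in the paper are more sophisticated than a single mean offset:

```typescript
// Hypothetical per-hand offset correction – an illustration, not the paper's method.
type Hand = 'left' | 'right';

// Example per-user mean error vectors in pixels (made-up values);
// in practice these would be estimated from calibration touches.
const meanOffset: Record<Hand, Point> = {
  right: { x: 12, y: -3 },
  left:  { x: -9, y: -5 },
};

function correctTouch(touch: Point, hand: Hand): Point {
  const o = meanOffset[hand];
  return { x: touch.x - o.x, y: touch.y - o.y };
}
```

The point is simply that the correction depends on knowing the hand: apply the right-hand offsets to left-hand touches and you make things worse, not better.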

If you’d like to try one target rather than five, use this link:
http://daniel-buschek.de/demos/tapping2/

If you’re interested in why there is an offset, then this paper by Christian Holz and Patrick Baudisch is a good starting point.