I am a Research Associate in the School of Computer Science at the University of Manchester. While I am based in the Web Ergonomics Lab group of IMG, I am currently on secondment to BBC R&D. My research revolves around information: in particular, how users interact with it and how efficient access to it can be enabled. While the focus has been on visually impaired users, this area involves understanding how different forms of presentation help or hinder all users.
My research interest is information accessibility, in particular making complex information accessible in as efficient a manner as possible.
Providing TV content over multiple devices
My main focus at the moment is working with the BBC on research that leads on from SASWAT. There is considerable interest in presenting content over multiple devices, such as a television and a smartphone, and coordinating the content will be critical if it is to be used effectively. We are interested in understanding how people split their attention when viewing television; initially we looked at dynamic content on the same screen (such as the football results tables on Final Score), but have recently extended this to explore how viewers shift attention between a TV programme and companion content on a tablet. I am currently pursuing this research on an EPSRC-funded secondment to BBC Research and Development.
Google Research Award. "Accessibility catch-up: techniques for disseminating accessibility research"
This project was a continuation of the SASWAT project, funded by a Research Award from Google. I investigated how the findings of that project, and other similar research, may be made available to their target audience in as quick and reliable a way as possible. We discovered that, while there are limitations caused by decoupling the content analysis from the screen reader, browser plug-ins could be used to monitor and analyse content, and to modify the way in which it was presented. They also allow simple remote updates, so offer an effective means of 'upgrading' a screen reader to use the latest research findings in non-visual user interfaces.
A recent departure for me was the JISC-funded SWORD project. This 6-month project (from February to July 2011), led by Robert Stevens with James Malone and Helen Parkinson from the EBI, developed a software ontology. This allows researchers to describe the software used for their experiments and analysis, thus improving reproducibility and helping data preservation. Please see the project blog or sourceforge pages.
An emerging interest is how eye-tracking can be used to understand patterns in the ways in which people view scenes, in the hope that these may be used to inform audio presentation. A pilot study has been performed at Manchester City Art Gallery, where people's eye movements were tracked as they viewed paintings on a monitor. This attracted much media interest, and was filmed by the BBC for the local television news. Our data analysis is ongoing, and has stimulated a proposal for future funding.
The Google-funded research is a continuation of the SASWAT project, in which we investigated how to present dynamic information to blind users. Many sites now use Web 2.0 technologies, such as AJAX, to allow parts of the page to update independently; determining how to present these updates to people who cannot overview the page (and its changes) with a quick glance was our key research question. Our studies of sighted users (mainly using eye-tracking) enabled us to identify effective ways of presenting dynamic content to screen reader users. Please see the project pages for more details.
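The core problem here can be illustrated with a small sketch: when a region of a page updates, a non-visual interface must detect what changed and describe it, since the user cannot glance at the page. The function below is purely illustrative (the projects above worked with live DOM updates in the browser, not this code), and all names are my own for the example:

```python
import difflib

def describe_update(old_lines, new_lines):
    """Summarise what changed in a dynamic page region, in a form
    that could be relayed to a screen-reader user.
    Illustrative sketch only, not the projects' actual system."""
    announcements = []
    for line in difflib.ndiff(old_lines, new_lines):
        if line.startswith("+ "):
            announcements.append(f"Added: {line[2:]}")
        elif line.startswith("- "):
            announcements.append(f"Removed: {line[2:]}")
    return announcements

# Example: a live football results table gains a new score line.
old = ["Arsenal 1 - 0 Chelsea"]
new = ["Arsenal 1 - 0 Chelsea", "Leeds 2 - 2 Everton"]
print(describe_update(old, new))  # ['Added: Leeds 2 - 2 Everton']
```

The interesting research questions start where this sketch stops: deciding which of these changes matter to the user, and when and how to interrupt with them.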
PhD Research: Graph Accessibility
My doctorate research investigated techniques to enable graphs to be understood non-visually. Graphs, in the mathematical sense of nodes connected by arcs, are ubiquitous, forming the key component of many diagrams. Examples include UML and ER diagrams, flowcharts and molecular structure diagrams. Diagrams are often the only way in which information is presented, so making this information accessible to all requires tools enabling non-visual access to the information.
This research involved understanding how sighted readers use diagrams and what features of diagrams make them better than sentential representations of the same information. Investigation of the literature, and some experiments, suggested that annotation could prove a powerful technique. It was shown that enabling users to annotate a diagram as they explore it makes the exploration task easier. In addition, annotation can be performed automatically, for example by identifying features of the diagram that would otherwise be implicit and making them explicit. Annotation can similarly be used to build a kind of 'breadcrumb trail', so that readers know which parts they have visited previously, thereby assisting navigation. A prototype was designed and built to allow non-visual exploration of logic circuits and family trees, and demonstrated that annotated diagrams were easier to use.
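The breadcrumb idea above can be sketched in a few lines. This is a minimal illustration, not the thesis prototype: the class and method names are mine, and a real system would drive audio or speech output rather than return strings. As a node is visited it is marked automatically, so that when its neighbours are later described, already-explored ones are flagged:

```python
from collections import defaultdict

class AnnotatedGraph:
    """Minimal sketch of breadcrumb-style annotation during
    non-visual graph exploration (illustrative names only)."""

    def __init__(self, edges):
        self.adj = defaultdict(list)
        for a, b in edges:          # undirected arcs
            self.adj[a].append(b)
            self.adj[b].append(a)
        self.visited = set()        # automatic breadcrumb annotations
        self.notes = {}             # user-supplied annotations

    def visit(self, node):
        """Mark a node as explored and describe its neighbours,
        flagging those already on the breadcrumb trail."""
        self.visited.add(node)
        return [f"{n} (visited)" if n in self.visited else n
                for n in self.adj[node]]

    def annotate(self, node, text):
        """Attach a user note to a node, e.g. 'output of half-adder'."""
        self.notes[node] = text

# A fragment of a logic circuit: gate AND1 feeds gate OR1, which feeds OUT.
g = AnnotatedGraph([("AND1", "OR1"), ("OR1", "OUT")])
g.visit("AND1")
print(g.visit("OR1"))  # ['AND1 (visited)', 'OUT']
```

Even this toy version shows the benefit: on reaching OR1, the reader is told immediately that AND1 has already been explored, avoiding the disorientation of re-traversing known territory.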