CASE STUDY
Project HANDIMAPS
A 3-month project of Evaluative + Generative Research. Seattle start-up HandiMaps was working on the first version of their mobile application and had never done a usability test. As part of the Human Centered Design & Engineering master's degree program at the University of Washington, my teammates and I partnered with their founder and lead designer to evaluate their mobile application, designed to meet the needs of wheelchair users who attend events in large venues.
Team: Jessica Carr, Madeline Kleiner and Leo Salemann. My role: client communication, recruiting, study design, interviewing and report writing.
This project was under NDA, so some specifics have been excluded.
OBJECTIVE
Our study goal was to provide actionable improvements to the V1 release of the HandiMaps mobile app. We were asked to run a usability test and validate key tasks with real users. Our team also chose to expand the scope of the research plan to include generative research that could inform future development and demonstrate the value of research conducted in parallel with software development.
WORK
To orient ourselves to the perspective of our users, our team researched accessibility topics and interviewed two subject matter experts who regularly use mobility devices.
We recruited 8 participants through a local organization and with flyers posted on the University campus. All 8 participants identified as either female or male and used either a manual or motorized chair, which differed in size and maneuverability. Some participants also had dexterity and speech differences, and one traveled with a caregiver. The breadth of personal experience in the small participant pool was an asset to the project.
METHODS
We did a heuristic evaluation of the medium-fidelity 'happy path' prototype that HandiMaps provided to us and decided to design our test around a paper-prototype version, which would set expectations that the product was still in development. We used pre- and post-test questionnaires with two styles of questions. For the usability test, participants completed key tasks while using a think-aloud protocol.
We were cognizant of the quantity of data we would need to compile, so we created our data-capture tables in advance, allowing us to quickly enter and analyze our results.
Using the paper prototype slowed the interaction down enough that when users tapped off the 'happy path' into screens that did not yet exist, they were not met with a dead end, and we could ask: What would you expect to happen? Despite the low fidelity of the paper prototype, participants could still be delighted by the visual changes on each new screen.
Participants easily navigated the app to find the restroom. However, we learned important expectations they held about those restrooms. For several participants, especially those in motorized wheelchairs, understanding details such as whether a bathroom was single-person or multi-stall, or where grab bars were installed, was critical.
Our post-test questionnaires probed on the product name, company logo, and tagline. To better understand how these were perceived, we used a 7-point Likert scale with semantic-differential endpoints. We provided the client with a quantitative summary of results and a visualization that highlighted the breadth of reactions.
END RESULTS
Our study delivered actionable improvements for the V1 release and provided recommendations for future product iterations. By sharing our study plan with HandiMaps before testing, we ensured that we were indeed diving into the areas they wanted to understand.
Our decision to do generative research paid off, extending the value we provided to the client. For example, our final report flagged that the company logo was perceived as an 'inactive' representation of wheelchair users, and today's HandiMaps logo is very different.
Our generative study also shed light on the personal journey of a concert attendee who uses a wheelchair, and we learned it does not begin inside the venue. There are many actions taken before a wheelchair user arrives: purchasing a specific seat, researching entrance features like turnstiles, and even scoping out parking lots to plan where best to park their vehicle. This perspective helped us articulate what the app would also need to account for in order to gain adoption. Lastly, our diverse group of participants added to the breadth of our findings. Two participants had hand dexterity differences, and learning from them, we added a recommendation for further testing of touch-target sizes and icon spacing.
REFLECTION
Our usability test yielded many insights. However, we tested while seated at a table in the lab, and sometimes in coffee shops, which is hardly the same as navigating a crowded, noisy event venue. How might the results have differed? Would leapfrogging into more of the complete experience have yielded even better insights?
We also learned from asking a naive research question: How do you use your phone while operating your chair? The answer is, they don't, because their hands are busy driving the chair. However, in response to this awkward question, participants showed us where they kept their phones while moving, which surfaced opportunities for haptic feedback while navigating crowded, dark, loud places like event venues.
This study was a truly valuable opportunity to learn from a community that enthusiastically embraces products designed with them in mind. We were lucky to have found gracious participants who shared their time and educated us about their many abilities.