Case Study · Accessibility · HMI Research

Autonomous Rideshare

A multimodal HMI that bridges the accessibility gap — giving blind and low-vision riders the confidence, safety, and autonomy they deserve.

Role UX Research
Responsibilities Research · Analysis · Usability Testing · WCAG 2.1/2.2
Duration 2.5 months
Study design Between-subjects, N = 24
Autonomous rideshare concept vehicle
Impact
25M
Americans with transport insufficiency
27
Literature articles reviewed
24
Study participants · 3 conditions
6
Rideshare tasks analyzed
The Problem

Most autonomous HMIs are vision-first.

Imagine you were injured and didn't have a partner or friend to drive you where you needed to go. How would your life be impacted? For 25 million Americans, this is everyday reality — transportation insufficiency stemming from cognitive, sensory, and/or motor impairments.

Without reliable, usable transportation, these individuals are left further behind and isolated from society. Autonomous vehicles promise independent mobility, but most human-machine interfaces are built for sighted users first.

Blind and low-vision riders face barriers at every stage: identity verification, trip confirmation, en-route updates, unexpected events, and safe exit. We set out to design a multimodal interface that restores confidence, safety, and autonomy.

25M
Americans experience transportation insufficiency stemming from cognitive, sensory, and/or motor impairments, leaving them further behind and isolated from society.
The Idea
A human-centered, multimodal HMI (audio + visual) that bridges the accessibility gap in autonomous ridesharing.
01 — Discovery

27 articles. One clear gap.

Because fully autonomous vehicles are so new and rapidly changing, we set a 2014 publication cutoff to capture only current research. After keyword searching and abstract filtering, 27 articles were identified and read in full.

27
Articles reviewed in literature review
Literature review initiated with broad keyword searches, then filtered by relevance and recency. Publication date boundary set at 2014 to capture the most current research on fully autonomous vehicles.
accessible HMI · accessible rideshare · visually impaired HMI · visual impairment design · fully autonomous vehicles · multimodal design
Real-World Context
Wizard-of-Oz simulation setup for usability testing
Wizard-of-Oz simulation environment used for usability testing — replicating an in-vehicle rear-seat experience with the multimodal prototype.
02 — Define

Six tasks. Every failure point mapped.

We broke down the full ridesharing process, impairment needs included, from boarding to exit. Each task became a scripted study scenario used to measure trust, navigation accuracy, and satisfaction.

Task 01
Identity Verification
Speak a 4-digit ride code to confirm rider-vehicle pairing, replacing the visual QR codes that blind riders cannot use (a pairing sketch follows the task list).
Identity verification UI
Task 02
Trip Confirmation
Destination read back via audio. Rider verbally confirms or corrects the address before the vehicle departs.
Trip confirmation UI
Task 03
Driving Updates
Proactive status narration: intersections, signal changes, ETA updates — keeping riders continuously oriented.
Driving updates UI
Task 04
Unexpected Events
A special mode activates to explain hazards and evasive actions — maintaining trust when the vehicle deviates from expected behavior.
Unexpected events UI
Task 05
Destination Arrival
A 3-minute pre-arrival alert prepares the rider with curbside context: orientation, surroundings, and preparation for a safe exit.
Destination arrival UI
Task 06
Exit Interaction
Guided, step-by-step disembarkment that lets riders exit safely without stepping into traffic or becoming disoriented.
Exit interaction UI
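Task 01 hinges on a simple spoken handshake in place of a visual QR scan. A minimal sketch of how the vehicle side could validate that utterance, assuming a speech-to-text transcript is already available; the function names and word-to-digit mapping are illustrative, not from the study prototype:

```python
import re

# Spoken digits often arrive as words; accept both word and numeral forms.
_DIGIT_WORDS = {
    "zero": "0", "oh": "0", "one": "1", "two": "2", "three": "3",
    "four": "4", "five": "5", "six": "6", "seven": "7",
    "eight": "8", "nine": "9",
}

def normalize_spoken_code(transcript: str) -> str:
    """Extract a digit string from a transcript like 'four two oh 9'."""
    digits = []
    for token in re.findall(r"[a-z]+|\d", transcript.lower()):
        if token in _DIGIT_WORDS:
            digits.append(_DIGIT_WORDS[token])
        elif token.isdigit():
            digits.append(token)
    return "".join(digits)

def verify_ride_code(transcript: str, expected: str) -> bool:
    """Rider speaks the 4-digit ride code; vehicle confirms the pairing."""
    # Illustrative only: the study prototype was Wizard-of-Oz, not automated.
    return normalize_spoken_code(transcript) == expected

assert verify_ride_code("four two zero nine", "4209")
assert not verify_ride_code("four two zero eight", "4209")
```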
Design Principles
Principle 01
Memory Supports
Predictive cues, consistent language, and knowledge-in-the-world reduce cognitive load for riders with memory or cognitive impairments.
Principle 02
Perception Optimization
Low access cost, redundancy across modalities, and progressive disclosure ensure every rider can receive critical information regardless of vision level.
Principle 03
Attention Management
Salience, urgency mapping, and interruption handling ensure high-priority information reaches the rider at the right moment — without overwhelming.
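To make Principle 03 concrete: urgency mapping is often implemented as a priority queue in which a high-urgency prompt (a hazard) preempts a lower-urgency one (an ETA update) rather than queuing behind it. A minimal sketch; the urgency tiers and prompt texts are invented for illustration:

```python
import heapq
from dataclasses import dataclass, field

# Lower number = higher urgency; the tiers here are illustrative.
HAZARD, NAVIGATION, COMFORT = 0, 1, 2

@dataclass(order=True)
class Prompt:
    urgency: int
    text: str = field(compare=False)

class PromptScheduler:
    """Queues prompts; a more urgent prompt preempts the one playing."""
    def __init__(self):
        self._queue: list[Prompt] = []
        self.playing: Prompt | None = None

    def submit(self, prompt: Prompt) -> None:
        if self.playing and prompt.urgency < self.playing.urgency:
            # Interruption handling: requeue the interrupted prompt.
            heapq.heappush(self._queue, self.playing)
            self.playing = prompt
        else:
            heapq.heappush(self._queue, prompt)

    def next_prompt(self) -> Prompt | None:
        self.playing = heapq.heappop(self._queue) if self._queue else None
        return self.playing

sched = PromptScheduler()
sched.submit(Prompt(COMFORT, "Arriving in about 3 minutes."))
sched.next_prompt()                        # ETA update starts playing
sched.submit(Prompt(HAZARD, "Braking: a cyclist is crossing ahead."))
print(sched.playing.text)                  # hazard preempts the ETA update
```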
03 — Prototype

Three components. One multimodal system.

The prototype combined a rear-seat mounted display, a natural-voice external speaker, and high-contrast tactile hardware buttons, designed to work together seamlessly across all six trip tasks; a dispatch sketch follows the component cards.

🖥️
Rear-Seat Display
Mounted display for route status and environmental descriptions. High-contrast visual output synced with audio narration for redundancy.
🔊
Voice — "Maple"
External speaker with a natural, calm voice character named Maple. Synchronized audio prompts across all six tasks — proactive, not reactive.
🔘
Hardware Buttons
Two high-contrast tactile buttons for physical interaction — no touchscreen required. Color-coded for disambiguation.
■ #F4B300  ·  ■ #B87AFF
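Redundancy across modalities means, in practice, that each message is authored once and fanned out to every channel: spoken by Maple, shown on the display, and mapped to the hardware buttons when a response is expected. A minimal sketch of that fan-out, with the render targets stubbed as print calls (in the Wizard-of-Oz study, a human operator stood in for these actuators):

```python
from dataclasses import dataclass

@dataclass
class Message:
    text: str                      # single source of truth for the content
    buttons: tuple[str, ...] = ()  # button labels, if a reply is expected

def dispatch(msg: Message) -> None:
    """Fan one message out to every modality so no rider misses it."""
    speak(msg.text)                # audio channel: Maple's voice
    show(msg.text)                 # visual channel: rear-seat display
    for label in msg.buttons:
        arm_button(label)          # tactile channel: hardware buttons

# Stubs standing in for the Wizard-of-Oz operator / real actuators:
def speak(text): print(f"[voice]  {text}")
def show(text): print(f"[screen] {text}")
def arm_button(label): print(f"[button] {label} armed")

dispatch(Message("Your destination is 120 Main St. Is that correct?",
                 buttons=("Yes", "No")))
```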
Wizard-of-Oz prototype in testing environment
Wizard-of-Oz simulation: the prototype in a controlled test environment replicating a rear-seat AV experience.
Display screen screenshot
Rear-seat display — route status view.
Trip confirmation display
Trip confirmation screen with audio sync.
Hardware Button Colors
#F4B300
#B87AFF
High-contrast, tactile — no touchscreen required for interaction.
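Because the work was scoped against WCAG 2.1/2.2, the high-contrast claim can be sanity-checked with the spec's relative-luminance formula. A minimal sketch scoring each button color against a dark housing; the #1A1A1A background is an assumption, not a value from the study:

```python
def _linear(channel: int) -> float:
    """sRGB channel (0-255) to linear light, per WCAG 2.x."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _linear(r) + 0.7152 * _linear(g) + 0.0722 * _linear(b)

def contrast_ratio(fg: str, bg: str) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

for color in ("#F4B300", "#B87AFF"):
    # Assumed near-black housing; WCAG 2.1 AA asks >= 3.0 for UI components.
    print(color, round(contrast_ratio(color, "#1A1A1A"), 1))
```

Against that assumed background, both colors clear the 3:1 minimum WCAG 2.1 sets for non-text UI components (SC 1.4.11).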
04 — Experiment Design

Between-subjects. Three conditions.

A between-subjects design with N = 24 participants across three conditions, built to isolate the effect of multimodal audio on trust, satisfaction, and navigation accuracy for visually impaired riders (an assignment sketch follows the condition cards).

NVI
Condition 1 — Non-Visually Impaired
Sighted participants using the full multimodal interface — audio + visual. Baseline condition for comparison.
VI
Condition 2 — Visually Impaired + Audio
Visually impaired participants (Cambridge Disability Simulator, 20/200 acuity blur) using the full multimodal interface — audio + visual.
VIX
Condition 3 — Visually Impaired, No Audio
Visually impaired participants using visual-only interface — no audio. Control condition to isolate the contribution of audio to the experience.
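With N = 24 and three conditions, between-subjects means each participant experiences exactly one condition. A minimal sketch of a seeded, balanced assignment; the equal 8-per-cell split is an assumption, since the write-up states only the total N:

```python
import random

CONDITIONS = ("NVI", "VI", "VIX")   # the three study conditions
N = 24                              # participants, one condition each

def assign(seed: int = 7) -> dict[int, str]:
    """Between-subjects assignment: shuffle balanced condition slots."""
    slots = list(CONDITIONS) * (N // len(CONDITIONS))  # assumes equal cells
    random.Random(seed).shuffle(slots)
    return {pid: cond for pid, cond in enumerate(slots, start=1)}

groups = assign()
print(groups[1], groups[2], groups[3])  # e.g. 'VIX VI NVI'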
Measures & Methods
📏
Quantitative Measures
Trust, satisfaction, and navigation accuracy measured on Likert scales; statistical analysis via Kruskal–Wallis with post-hoc testing (sketched after these cards).
🔬
Disability Simulation
Cambridge Disability Simulator standardized visual impairment at 20/200 acuity blur across all impaired conditions — ensuring consistency.
📝
Qualitative Input
Interviews and literature review confirmed preference for audio-visual redundancy. Key themes: clarity, directionality, and trust in the system voice.
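A minimal sketch of the omnibus test named above, run on placeholder Likert scores rather than the study data; pairwise Mann–Whitney U with Bonferroni correction stands in for the unspecified post-hoc step:

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

# Placeholder 7-point trust ratings, 8 per condition (not the study data).
scores = {
    "NVI": [6, 7, 6, 5, 7, 6, 6, 7],
    "VI":  [6, 6, 5, 7, 6, 7, 5, 6],
    "VIX": [3, 4, 2, 4, 3, 5, 3, 4],
}

# Omnibus: do the three conditions differ?
h, p = kruskal(*scores.values())
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")

# Post-hoc: pairwise comparisons with Bonferroni correction.
pairs = list(combinations(scores, 2))
for a, b in pairs:
    _, p_pair = mannwhitneyu(scores[a], scores[b])
    print(f"{a} vs {b}: corrected p = {min(p_pair * len(pairs), 1.0):.4f}")
```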
Results

Audio closed the gap.

Across all three key measures — trust, correct vehicle identification, and navigation — the multimodal condition restored performance for visually impaired riders to near-parity with sighted users. Visual-only was significantly worse.

Finding 01
Trust & Satisfaction
VI with audio performed at approximately the same level as NVI (sighted) users. Visual-only (VIX) was significantly worse across both trust and satisfaction measures.
p < .05 — visual-only significantly worse
Finding 02
Correct Rideshare Identification
Audio narration restored confidence in vehicle identification for blind and low-vision riders, a task frequently failed in the visual-only condition.
Audio restored correct identification
Finding 03
Navigation Understanding
Proactive narration of intersections, signals, and ETA changes significantly improved route understanding and rider comfort throughout the trip.
Proactive narration improved comfort
Results chart 1 — trust and satisfaction scores across conditions
Trust and satisfaction scores across NVI, VI, and VIX conditions — VI with audio closely matched NVI performance.
Results chart 2 — navigation accuracy scores
Navigation accuracy results — VIX (visual-only) showed statistically significant degradation vs. both multimodal conditions.
Prototype & Mockup
Unimpaired vs impaired view of display
Unimpaired vs. impaired view of the rear-seat display — showing how visual degradation affects screen readability without audio support.
Wizard-of-Oz usability test
Wizard-of-Oz simulation used for usability testing — replicating the rear-seat AV environment with synchronized audio and visual output.
Full system mockup in autonomous vehicle
Full multimodal system mockup, as it would appear once fully autonomous vehicles are deployed at scale.
Next Steps

Where this research goes next.

The current prototype validated the multimodal approach in a controlled Wizard-of-Oz setting. Four directions would move this toward real-world deployment.

Next case study
City of Dearborn