


SafePath Wearable Safety App
Redefining safety through passive, real-time protection powered by wearables.
Role: UI/UX Designer
Project Type: Class Project
Tools/Skills: User Research, Usability Testing, Interaction Design, Figma, Design Systems
The Project
SafePath was designed as part of a UX design course after research revealed a consistent, painful irony: the moments people need safety tools most are the exact moments those tools become impossible to use. Existing apps like Noonlight and bSafe require users to tap, open an app, or speak, yet research shows people freeze, panic, and lose motor control under real threat.
SafePath explores what safety looks like when the technology acts for you, not with you.
"The watch will know if I'm getting attacked without me having to do anything, and send help in the shortest amount of time."
JASMIN S. — INTERVIEW PARTICIPANT
"Even if I'm prepared, just the thought of it happening makes me feel more anxious — no matter what I have on me."
NAN P. — INTERVIEW PARTICIPANT
Current Safety Apps & Wearables Gap
1. Safety apps require manual action during panic, making them unreliable when danger hits.
2. Wearables detect stress and danger, but they don’t trigger any safety response.
3. Safety and wearable tech are not connected, leaving a critical protection gap.
Market Analysis
Despite a projected $279 billion women’s safety market and a 55% surge in wearable adoption, there is still no solution that converts biometric danger signals into real-time safety action.
User Interview Findings

What the Interviews Revealed
Across 4–6 interviews with target users, one pattern emerged clearly: presence, not tools, is what makes people feel safe.
Participants described safety as feeling "not alone." The most reliable coping strategy wasn't an app. It was being on a phone call, because at least if something happened, someone would know. These insights reframed the design problem entirely. The user flow isn't built around a button press. It's built on the assumption that the user may not be able to act at all.
"At least if I call someone, I feel safer — if something happens, they'll know what to do and where I'm at."
NAN P. — INTERVIEW PARTICIPANT
User Flow Diagram

Sketches

Exploring Alert Escalation Flows
Early sketches explored three alert escalation models, each representing a different assumption about user capability under stress.
FLOW 01
Fully Manual
User initiates every step. Maximum control, but assumes the user can act under stress.
FLOW 02
Semi-Passive
Biometrics trigger a check-in, but the user must confirm before an alert is sent.
FLOW 03
Fully Automated
System detects, decides, and acts. User only needs to cancel if it's a false alarm.
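The three escalation models differ only in how much user action they assume is possible under stress. As a minimal sketch of that decision logic (the `Flow` and `Signals` names and the specific biometric trigger are illustrative assumptions, not part of the actual SafePath implementation), the comparison might look like:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Flow(Enum):
    FULLY_MANUAL = auto()     # Flow 01: user initiates every step
    SEMI_PASSIVE = auto()     # Flow 02: biometrics prompt, user confirms
    FULLY_AUTOMATED = auto()  # Flow 03: system acts, user may cancel


@dataclass
class Signals:
    """Hypothetical inputs available when deciding whether to alert."""
    biometric_trigger: bool  # e.g. a heart-rate spike flagged by the wearable
    user_pressed: bool       # user manually requested help
    user_confirmed: bool     # user answered a check-in prompt
    user_cancelled: bool     # user dismissed a pending auto-alert


def should_alert(flow: Flow, s: Signals) -> bool:
    """Return True if an alert should be dispatched under the given flow."""
    if flow is Flow.FULLY_MANUAL:
        # Maximum control: nothing happens unless the user acts.
        return s.user_pressed
    if flow is Flow.SEMI_PASSIVE:
        # Biometrics open a check-in, but confirmation is still required.
        return s.biometric_trigger and s.user_confirmed
    # FULLY_AUTOMATED: the system acts unless the user cancels.
    return s.biometric_trigger and not s.user_cancelled


# A frozen user (danger detected, but no input at all) is only
# protected by the fully automated flow:
frozen = Signals(biometric_trigger=True, user_pressed=False,
                 user_confirmed=False, user_cancelled=False)
print(should_alert(Flow.FULLY_MANUAL, frozen))     # False
print(should_alert(Flow.SEMI_PASSIVE, frozen))     # False
print(should_alert(Flow.FULLY_AUTOMATED, frozen))  # True
```

The frozen-user case makes the trade-off explicit: the first two flows silently fail for exactly the user the research describes.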
Selected Direction
Design System
The design system was built around a single constraint: clarity under panic.
Every decision prioritized legibility and speed of comprehension over aesthetics.
Reserved Red
Red used exclusively for active alert states.
High Contrast
Maximum contrast ratios for readability in low-light and distressed states.
Haptic Layer
Vibration patterns provide non-visual feedback that remains discreet in high-risk situations.

Hi-Fi Prototype



User Testing Findings
RESEARCH OVERVIEW
Method: Moderated usability testing + contextual interviews
Participants: 3 Apple Watch users (ages 21–32)
Context: Walking alone during perceived unsafe moments (night, parking garages, transit)
Goal: Evaluate emotional response, clarity, and decision-making under stress
Insight 1
OBSERVED BEHAVIOR
- Reassurance text changed how users emotionally interpreted the experience.
- Users explicitly referenced copy as a reason they felt safe or unpressured.
"Empathetic language can function as a trust mechanism, especially in high-stress scenarios."
Insight 2
OBSERVED BEHAVIOR
- 2 out of 3 users preferred the app to take action without confirmation.
- Users feared freezing or being unable to respond under real stress.
"Designing for safety means designing for moments when users cannot think clearly."
Insight 3
OBSERVED BEHAVIOR
- Users wanted to know if others were responding before navigating to help.
- Hesitation was tied to personal safety, not lack of empathy.
"Transparency around collective action increases engagement while preserving user safety."
Iterations after User Testing
Each insight above drove a specific design decision.
Here's the direct line from observation to outcome.
ITERATION 01
Empathetic copy became a functional component
Language -> Trust

Reassurance copy added to all active alert states: "You're not alone. Help is on the way."

Supportive language isn't decorative: it actively shapes emotional safety and reduces abandonment.
ITERATION 02
Confirmation flipped from required to optional
Automation -> Empowerment

System defaults to automatic action; the user only needs to cancel if it's a false alarm.

This shifts the cognitive burden from user to system, especially in moments of freeze or panic.
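The default-automatic pattern can be sketched as an alert with an opt-out window: the system schedules dispatch immediately, and the user's only job is to cancel. The `AutoAlert` class, the 10-second window, and the dispatch callback below are all illustrative assumptions, not the actual SafePath implementation:

```python
import threading

CANCEL_WINDOW_SECONDS = 10  # hypothetical grace period, not from the case study


class AutoAlert:
    """Default-automatic alert with an opt-out cancel window."""

    def __init__(self, dispatch, window=CANCEL_WINDOW_SECONDS):
        self._dispatch = dispatch
        self._timer = threading.Timer(window, self._fire)
        self._lock = threading.Lock()
        self.sent = False
        self.cancelled = False

    def start(self):
        # Danger detected: the countdown to dispatch starts immediately.
        self._timer.start()

    def _fire(self):
        # Window elapsed with no cancellation: send help without any user input.
        with self._lock:
            if not self.cancelled:
                self.sent = True
                self._dispatch()

    def cancel(self):
        # User taps "I'm OK" before the window closes (false alarm).
        with self._lock:
            self.cancelled = True
        self._timer.cancel()


# Usage: a detected danger signal starts the countdown; a frozen user
# who does nothing is still protected, because inaction sends the alert.
alerts = []
alert = AutoAlert(dispatch=lambda: alerts.append("help dispatched"),
                  window=0.1)  # short window for demonstration only
alert.start()
```

Note the inversion relative to a confirmation dialog: here, user input is only required on the false-alarm path, never on the danger path.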
ITERATION 03
Live responder count added to alert screen
Social Proof -> Safety

"2 people are heading there." A live count shown on the alert screen to reduce hesitation.

Safety felt synonymous with not being alone, so the UI was redesigned to communicate exactly that.
Final Product
Reflection & Learnings
- Designing for safety requires designing for moments of panic, not calm decision-making.
- Supportive interface language proved to be as critical as functional accuracy in building trust.
- User testing revealed that automation, when communicated clearly, can feel empowering rather than controlling.
- This project reshaped how I think about defaults, confirmations, and emotional load in high-stakes interactions.
- In future iterations, I would further explore social confirmation features while carefully managing pressure and consent.