Mental wellness app | 0→1 product discovery & launch
Overview
📍 Background
Aurora began as a solution to my own problem. During a period of high anxiety, professional support wasn’t accessible, and I leaned heavily on a mental wellness app, until Headspace acquired and sunset it. Losing that tool when I needed it most made me wonder: how many others were left without support? As a product designer, I naturally wanted to find out whether this was just a personal need or a broader market opportunity.
⚡ The challenge
I knew what I needed, but solving my own problem isn’t the same as solving for a market. As a solo founder starting from scratch, I needed to gauge demand beyond my own experience and ensure Aurora could serve real emotional needs, all while building trust in a market skeptical of AI-driven mental health support.
👩🏻‍💻 My role
- Founder
- Researcher
- Designer
- Developer
As the solo founder solving my own problem, I owned the full product lifecycle: testing whether my personal need reflected a broader market opportunity, conducting research with target users, designing the experience, building it with no-code tools, and launching a live app to gauge real-world demand.
⭐️ The solution
A supportive, intuitive app that helps people notice emotional patterns, work through anxious or negative thoughts with CBT tools, and talk freely without fear of judgment.
🤔 How might we
How might we create a judgment-free, accessible space for young adults to work through difficult emotions?
Research & insights
I ran surveys, interviews, and competitive analysis to learn what young adults need for emotional support and what keeps them engaged.
⭐️ Key insights
- Informal support is already happening: Some participants were turning to ChatGPT for emotional support
- Personality drives engagement: Participants described most mental health apps as generic and forgettable, but Finch stood out as an exception, suggesting brand personality, behavioral psychology, and engagement mechanics matter as much as clinical utility
- Barriers to seeking help: Shame, fear of judgment, not wanting to burden others, and financial barriers stop many from seeking support
- Differing preferences and expectations: Some want guidance, others simply want to vent, and most want a combination of the two
- Trust through evidence: Participants trusted AI support more when it was backed by science
- Top feature requests: Affirmations, mood tracking, and self-learning resources
- Passive tracking isn’t enough: People wanted help understanding their patterns, not just logging moods
Before diving into user interviews, I defined key research questions I wanted to answer about emotional needs, feature expectations, and retention drivers to guide the conversation.
I conducted moderated video interviews with young adults to understand their current coping strategies, barriers to seeking help, and reactions to Aurora’s core concept.
I used a Value Proposition Canvas to map user pains and gains against Aurora’s features, ensuring each design decision addressed a real emotional need rather than an assumed problem.
Survey responses from 40+ potential users revealed preferences around anonymity, AI trust, and desired features, helping validate assumptions and prioritize the MVP roadmap.
I mapped competitors across artificial/human and playful/serious axes to identify positioning opportunities and understand where Aurora could differentiate in the mental wellness market.
Ideation & strategy
Using research insights, I prioritized features that addressed the top user needs while keeping scope focused for testing.
🎯 The goal
Use research insights to define an MVP that would test real-world demand and product-market fit.
👍 MVP features
- AI chat: The core differentiator; users craved judgment-free conversation
- Mood logging + insights: Passive tracking alone wasn’t enough; people wanted to see patterns
- CBT tools: Built credibility and trust by grounding AI responses in evidence-based therapy
- Affirmations: Low-effort, high-impact feature users explicitly requested
- Crisis resources: Non-negotiable for safety
I mapped out key journeys using wireflows.
I created a Figma prototype and used it in moderated usability sessions to test assumptions further and identify areas for improvement.
After running seven sessions, I organized user feedback by feature area to identify patterns in what worked, what confused users, and where the experience needed refinement.
Design & iteration
I designed and tested multiple prototypes to refine usability and emotional resonance.
🎨 Key design decisions
- Gentle onboarding that gradually builds context without overwhelming new users
- Clear messaging that Aurora supports professional therapy, not replaces it
- Prominent mood log and chat buttons
- Calm color palette and warm, friendly illustrations
- Therapy-inspired mood flow for reflection
- Chat experience designed to feel like texting a friend
CBT-based tools use storytelling to guide users through thought reframes tailored to specific cognitive distortions.
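To make that mechanic concrete, here is a minimal sketch of how distortion-specific reframe flows could be structured as data, written in TypeScript. The names (`Distortion`, `reframeFlows`, `nextPrompt`) and the prompt copy are illustrative assumptions, not Aurora’s actual content; the live app was built with no-code tools.

```ts
// Illustrative only: a mapping from cognitive distortions to a short sequence
// of guided reframe prompts. The distortion labels are standard CBT terms,
// but the prompts and structure are assumptions for this sketch.

type Distortion = "catastrophizing" | "mind reading" | "all-or-nothing thinking";

interface ReframeStep {
  prompt: string; // the question posed to the user at this point in the story
}

const reframeFlows: Record<Distortion, ReframeStep[]> = {
  catastrophizing: [
    { prompt: "What is the worst outcome you are imagining?" },
    { prompt: "Realistically, how likely is that outcome?" },
    { prompt: "What is a more probable way this could go?" },
  ],
  "mind reading": [
    { prompt: "What do you think the other person is thinking?" },
    { prompt: "What evidence do you actually have for that?" },
    { prompt: "What is another explanation for their behavior?" },
  ],
  "all-or-nothing thinking": [
    { prompt: "Is this situation truly all good or all bad?" },
    { prompt: "Which parts went okay, even partially?" },
  ],
};

// Returns the next prompt in a flow, or null when the reframe is complete.
function nextPrompt(distortion: Distortion, step: number): string | null {
  const flow = reframeFlows[distortion];
  return step < flow.length ? flow[step].prompt : null;
}

console.log(nextPrompt("catastrophizing", 0)); // "What is the worst outcome you are imagining?"
```

Keeping each flow as a small ordered list is what lets the storytelling pace the reframe one question at a time instead of presenting a worksheet all at once.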
Aurora’s mood log uses a stepped approach inspired by therapy sessions: users start with a general emotion (e.g., happy, sad, anxious, scared), then identify specific feelings (e.g., “overwhelmed” vs “awkward” under “scared”), and finally reflect on triggers or context. This progressive disclosure prevents overwhelm while encouraging deeper self-awareness.
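As a rough illustration of that progressive disclosure, the sketch below models the stepped flow as data in TypeScript. The `MoodEntry` shape, the function names, and the emotion taxonomy are assumptions made for this example, not Aurora’s actual schema, since the app itself was built with no-code tools.

```ts
// Hypothetical model of the stepped mood log: broad emotion → specific feeling
// → optional reflection. The taxonomy below is illustrative, not Aurora's.

type BroadEmotion = "happy" | "sad" | "anxious" | "scared";

interface MoodEntry {
  broad: BroadEmotion;   // step 1: general emotion
  specific?: string;     // step 2: a more precise feeling under that emotion
  context?: string;      // step 3: optional free-text note on triggers or context
  loggedAt: Date;
}

// Specific feelings are only revealed after a broad emotion is chosen,
// so the user never faces the full taxonomy at once.
const specificFeelings: Record<BroadEmotion, string[]> = {
  happy: ["content", "proud", "excited"],
  sad: ["lonely", "disappointed", "drained"],
  anxious: ["worried", "restless", "on edge"],
  scared: ["overwhelmed", "awkward", "insecure"],
};

function startEntry(broad: BroadEmotion): MoodEntry {
  return { broad, loggedAt: new Date() };
}

function refineEntry(entry: MoodEntry, specific: string, context?: string): MoodEntry {
  // Guard against feelings that don't belong under the chosen broad emotion.
  if (!specificFeelings[entry.broad].includes(specific)) {
    throw new Error(`"${specific}" is not listed under "${entry.broad}"`);
  }
  return { ...entry, specific, context };
}

// Example: scared → overwhelmed → a short note on the trigger.
const entry = refineEntry(startEntry("scared"), "overwhelmed", "Big deadline at work");
console.log(entry);
```

Splitting the entry across three small steps, each with only a handful of options, keeps logging quick while still capturing enough detail to surface patterns later.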
Aurora’s chat is designed to feel like talking to a supportive friend who listens and asks questions for deeper reflection.
I use Dovetail as my research repository, collecting ongoing user data that informs product iterations.
Outcomes
🎨 Design validation
- Users describe Aurora as “comforting,” “supportive,” and “helpful.”
- Users found the interface intuitive and welcoming, with multiple people highlighting the “warm” and “friendly” visual design
🚀 Post-launch
- Launched MVP with 100+ signups, providing real user data to inform iteration
- 4.5⭐ App Store rating from early users
- The AI chat was the most-used feature, followed by mood logging and affirmations
👀 Early retention insights
- Users who engaged with chat returned more frequently than those who only tracked moods, validating that conversation (not passive logging) was the core value driver
- Habit formation was underdeveloped in the MVP. Future iterations would focus on ethical engagement strategies that encourage healthy use patterns without creating dependency
🔁 What I’d do differently
A lot! I’d test assumptions earlier instead of perfecting the brand and UI first. I’d bring in licensed therapists from the start, not just for safeguards, but to help define appropriate boundaries for AI support in mental health. The early data showed users valued conversation over passive tracking, but this also raised questions about dependency and appropriate use that I didn’t fully address in v1.
Starting from personal need gave me deep user empathy but also created blind spots. I assumed others would want what I wanted, which led me to over-build features (chat + mood tracking + CBT tools + affirmations) instead of testing the core AI chat concept first.
Since no-code tools make building fast, I sometimes skipped research and testing before adding features. If I could start over, I’d test risky assumptions before building.