👩🏻‍💻 My role

  • Founder
  • Researcher
  • Designer
  • Developer

As the solo founder solving my own problem, I owned the full product lifecycle: testing whether my personal need reflected a broader market opportunity, conducting research with target users, designing the experience, building it with no-code tools, and launching a live app to test real-world demand.

🔍 Research insights

  • Informal support is already happening: Some participants were already turning to ChatGPT for emotional support
  • Personality drives engagement: Participants described most mental health apps as generic and forgettable, but Finch stood out as an exception, suggesting that brand personality, behavioral psychology, and engagement mechanics matter as much as clinical utility
  • Barriers to seeking help: Shame, fear of judgment, not wanting to burden others, and financial constraints stop many from seeking support
  • Differing preferences and expectations: Some want guidance, others simply want to vent, and most want a combination of the two
  • Trust through evidence: Trust in AI increased when its responses were backed by science
  • Top requests: Affirmations, mood tracking, and self-learning resources
  • Passive tracking isn’t enough: People wanted help understanding their patterns, not just logging moods
📱 MVP feature set

  • AI chat: The core differentiator; users craved judgment-free conversation
  • Mood logging + insights: Passive tracking alone wasn’t enough; people wanted to see patterns
  • CBT tools: Built credibility and trust by grounding AI responses in evidence-based therapy
  • Affirmations: Low-effort, high-impact feature users explicitly requested
  • Crisis resources: Non-negotiable for safety

✏️ Design decisions

  • Gentle onboarding that gradually builds context without overwhelming new users
  • Clear messaging that Aurora complements professional therapy rather than replacing it
  • Prominent mood log and chat buttons
  • Calm color palette and warm, friendly illustrations
  • Therapy-inspired mood flow for reflection
  • Chat experience designed to feel like texting a friend

🎨 Design validation

  • Users described Aurora as “comforting,” “supportive,” and “helpful”
  • Users found the interface intuitive and welcoming, with multiple people highlighting the “warm” and “friendly” visual design

🚀 Post-launch

  • Launched MVP with 100+ signups, providing real user data to inform iteration
  • 4.5⭐ App Store rating from early users
  • AI chat was the most-used feature, followed by mood logging and affirmations

👀 Early retention insights

  • Users who engaged with chat returned more frequently than those who only tracked moods, validating that conversation (not passive logging) was the core value driver
  • Habit formation gap: The MVP’s habit-building mechanics were underdeveloped; future iterations would focus on ethical engagement strategies that encourage healthy use patterns without creating dependency