Leo – AI Moderator for Scalable Research
Industry
Market Research - AI
Client
Forelight.ai
Service
Product Design
Date
Q4 2024
Overview
Leo is an AI-powered research moderator that enables researchers to conduct multiple participant interviews simultaneously, unlocking deep qualitative insights at scale. Traditional user research methods—especially in-depth interviews—are often time-consuming, expensive, and limited in reach. As a result, many researchers either reduce the number of interviews or rely on surveys, which provide quantitative data but lack the richness of direct conversations.
Leo bridges this gap by combining the efficiency of surveys with the depth of qualitative interviews. By automating moderation, it allows researchers to collect nuanced insights faster and more affordably, making high-quality user research accessible to more teams and projects.
My Role & Responsibilities
As the lead UX Designer, my responsibilities included:
Conducting initial user research and competitive analysis
Designing user flows and creating wireframes
Prototyping and user testing
Collaborating closely with developers for implementation
Iterating based on feedback from participants and researchers
Users & Their Needs
Participants (Interviewees)
For participants, the experience had to be:
✔ Effortless – No onboarding or learning curve required.
✔ Familiar – A natural conversational interface that feels like speaking to a real moderator.
✔ Accessible – Works across devices with minimal friction.
Participants interact with Leo in a chat-like environment, answering questions, reacting to stimuli, and engaging in a discussion without needing to schedule calls or install new tools.
Researchers (Interviewers & Analysts)
For researchers, the backend system needed to:
✔ Be Flexible – Allow customization of interview guides, follow-up questions, and stimuli (e.g., images, videos, prototypes).
✔ Ensure Depth – Capture verbatim responses, identify key themes, and enable dynamic question branching.
✔ Scale Efficiently – Handle dozens of participants in parallel, providing rich insights in less time.
Researchers design their study by structuring questions and defining interaction flows. Leo then moderates conversations, collecting responses that researchers can analyze via an interactive dashboard.
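To make this setup concrete, here is a minimal sketch of how an interview guide could be modeled in code. The TypeScript type names and fields below are illustrative assumptions for this case study, not Forelight's actual data schema.

// Illustrative data model for an AI-moderated interview guide (names are assumptions).
type StimulusType = "image" | "video" | "prototype";

interface Stimulus {
  type: StimulusType;
  url: string;          // where the asset is hosted
  caption?: string;     // optional context shown to the participant
}

interface Question {
  id: string;
  prompt: string;            // the question Leo asks
  probes?: string[];         // follow-up angles Leo may explore
  stimulus?: Stimulus;       // media shown alongside the question
  branchIf?: {               // simple dynamic branching
    keyword: string;         // if the answer touches on this topic...
    nextQuestionId: string;  // ...jump to this question next
  };
}

interface InterviewGuide {
  title: string;
  sections: { heading: string; questions: Question[] }[];
}

A researcher's guide is then just a list of sections, each holding ordered questions with optional probes and stimuli, the same building blocks exposed later in the redesigned setup screen.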
The Challenge
The Cost & Scalability Issue in UX Research
User interviews are among the most valuable methods for understanding user needs, behaviors, and motivations. However, they are expensive and time-consuming:
Recruitment takes time and effort.
Scheduling interviews can be a bottleneck.
Moderation requires trained researchers.
Analyzing interviews is labor-intensive.
To cope with these constraints, teams often default to surveys, which can scale easily but lack qualitative depth. This tradeoff means that crucial insights are often missed, limiting the impact of research on product decisions.
How Leo Solves This
Leo provides the best of both worlds—the scale of surveys with the richness of qualitative research. By automating the interview process, it:
✔ Reduces cost per interview – No need for live moderation, saving researcher hours.
✔ Eliminates scheduling barriers – Participants complete interviews at their convenience.
✔ Extracts deeper insights – AI-driven moderation ensures follow-ups and context-aware probing.
Research & Discovery
Testing the First Prototype: Learning from a Quick Launch
Before committing to a fully fleshed-out design, we started with a functional but rudimentary prototype. The development team initially built Leo’s back end around a simple copy-paste text field borrowed from an existing feature in Forelight’s analysis platform, paired with an existing template for Leo’s participant-facing front end. While not designed for AI moderation, this interface allowed us to quickly test the core concept and gather early user feedback.
🔹 What this early version allowed us to test:
How researchers interacted with an AI moderator in a real-world setting.
Whether participants could engage with an AI-driven interview naturally.
What pain points emerged from both sides of the experience.
🔹 How we structured our beta testing:
We onboarded partnered research companies already using Forelight.ai to test Leo in live projects.
We set up mock research projects to recruit participants and observe their reactions.
We collected qualitative feedback and usability metrics, analyzing pain points and feature requests.
Key Findings: What Users Struggled With
For Researchers: The Back-End Was Confusing and Limiting

❌ Lack of control over interview flow – The copy-paste interface was too simplistic, offering no way to structure follow-ups, probing questions, or adapt interviews dynamically.
❌ Unclear workflow – The "Interview Guide" was located in the Project Setup section, separate from Leo’s moderation settings, making it difficult to understand how everything connected.
❌ Missing core features – Many researchers assumed they could do things that weren’t yet possible, such as adding stimuli or customizing Leo’s moderation style.
💬 "How do I add probing questions?"
💬 "How do I ensure Leo follows the right flow?"
💬 "How do I add videos or images as stimuli?"
Some of these questions had a frustrating answer: "You can't yet."
This was a major problem—if researchers couldn’t clearly set up their AI moderator, they would either abandon the tool or make errors that compromised research quality.
For Participants: A New Concept, But an Unfamiliar Interface
❌ The interface was functional but unnatural – The prototype used a simple chat-based format, similar to existing text-based AI moderators. However, participants found this experience less engaging than expected.
❌ Users hesitated or disengaged – Without the presence of a real human, some participants struggled with how to phrase responses or doubted whether their answers mattered.
📌 Insight: AI moderators had to feel natural and reduce cognitive effort, not add to it.
Competitive Analysis: No Direct Competitors, but Valuable Lessons
When assessing competitors, we found that while no direct equivalent to Leo existed, there were several text-based AI moderation tools. These provided useful insights into what not to do:
🔸 Common issues in existing AI research tools:
Rigid question flow – Participants couldn’t engage in a truly dynamic conversation.
Text-based moderation felt robotic – Lacked natural adaptability, making responses feel forced.
Typing responses created friction – Users had to think too much about wording, disrupting spontaneity.
📌 Insight: If Leo were to stand out, it needed to move beyond text and feel more like a real conversation.
Key UX Challenge: Making an Unfamiliar Concept Feel Instantly Intuitive
Leo introduced a new way of conducting research, but to drive adoption, it needed to feel as familiar as possible.
💡 The Solution: Borrow from existing mental models.
How do human researchers conduct interviews? Through tools like Zoom, Google Meet, and Microsoft Teams.
What are participants already comfortable with? Video conferencing interfaces, especially post-pandemic.
Instead of reinventing the wheel, we designed Leo’s participant experience to mirror these platforms—with subtle optimizations for AI-driven moderation. This ensured:
✔ Zero onboarding needed – Participants instantly understood the interface.
✔ A natural conversation flow – Leo could prompt and probe just like a human moderator.
✔ Higher engagement – Participants responded more freely in a format they trusted.
Next Steps: Translating Insights into UX & UI Design
With these findings, we moved into the Ideation & Design phase, focusing on:
Redesigning the researcher’s back-end experience to provide clearer workflows and better control.
Creating a video-based, conversational interface for participants that felt as natural as live moderation.
Structuring Leo’s AI-driven interaction model to enable dynamic questioning while maintaining research rigor.
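The last point deserves a closer look. The sketch below shows one plausible way a single moderation turn could decide between probing deeper and moving on; it reuses the Question type from the earlier sketch and is a simplified assumption made for this write-up, not Leo's production logic.

// Hypothetical moderation turn: probe deeper or advance to the next question.
interface ProbePolicy {
  maxProbesPerQuestion: number;  // keeps probing bounded so the interview stays on schedule
  minAnswerWords: number;        // answers shorter than this are considered thin
}

async function moderateTurn(
  question: Question,
  answer: string,
  probesUsed: number,
  policy: ProbePolicy,
  ask: (prompt: string) => Promise<void>  // e.g. an LLM- or TTS-backed call that poses the follow-up
): Promise<"probe" | "advance"> {
  const answerIsThin = answer.trim().split(/\s+/).length < policy.minAnswerWords;
  if (answerIsThin && probesUsed < policy.maxProbesPerQuestion) {
    // Context-aware follow-up: reference the original question rather than a generic "tell me more".
    await ask(`You mentioned that briefly. Could you expand on it in the context of "${question.prompt}"?`);
    return "probe";
  }
  return "advance";
}

The probe budget is where research rigor comes in: follow-ups stay anchored to the guide instead of drifting wherever the conversation happens to go.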
Design Process
Redesigning the Researcher Interface: A Scalable Interview Guide
Our first priority was improving the interview guide setup, ensuring researchers could easily structure and manage their AI-moderated interviews without confusion or frustration.
Iteration 1: Relocating the Interview Guide

🔹 Initially, the interview guide was placed in a separate project setup area, disconnected from the AI Moderator settings. This separation made it difficult for researchers to understand how their guide would interact with Leo during the interview process.
🔹 We moved the interview guide setup into the same section as Leo’s settings, creating a more intuitive and cohesive workflow. This ensured researchers could configure everything in one place, reducing cognitive load.
🔹 We introduced a simplified interface for testing, which helped us identify usability bottlenecks before investing in a complex UI.
Iteration 2: Solving Major UX/UI Issues
After conducting usability tests with researchers, we discovered several pain points that needed immediate attention:
✅ Addressing Overwhelming Length – As interview guides grew longer, researchers struggled with readability and navigation:
We introduced collapsible sections, allowing researchers to structure their guides into manageable parts instead of one long, overwhelming document.
We refined the visual hierarchy, ensuring that questions, follow-ups, and probes were clearly distinguishable.
✅ Fixing Navigation & Organization – Users needed better control over how their guides were structured:
We introduced drag-and-drop reordering, allowing researchers to move questions easily within and across sections.
The “Add Section” button was relocated to a more prominent position, ensuring it was easy to find and use.
✅ Enhancing AI Instructions – Researchers wanted more control over how Leo moderated conversations:
We added a dedicated instruction panel where researchers could define AI guidelines, such as how assertive Leo should be in probing answers.
Users could now specify restricted topics or keywords that Leo should avoid, ensuring more accurate and contextually appropriate moderation.
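In code terms, the instruction panel boils down to a small settings object that the moderation layer consults on every turn. The field names below are assumptions made for illustration, not the product's actual configuration format.

// Illustrative shape of the AI instruction settings a researcher can define.
interface LeoInstructions {
  probingAssertiveness: "gentle" | "moderate" | "persistent";  // how hard Leo pushes on vague answers
  restrictedTopics: string[];   // subjects Leo must not raise or pursue
  avoidKeywords: string[];      // words Leo should never use in its own prompts
  customGuidelines?: string;    // free-text guidance, e.g. tone or terminology
}

const exampleInstructions: LeoInstructions = {
  probingAssertiveness: "moderate",
  restrictedTopics: ["competitor pricing"],
  avoidKeywords: ["beta", "unfinished"],
  customGuidelines: "Keep a friendly, informal tone and mirror the participant's own wording.",
};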
Refining the Participant Experience: Making AI Interviews Feel Natural
While the researcher interface focused on setup and control, the participant experience had to be seamless, familiar, and engaging. The challenge was making AI-driven conversations feel as natural as traditional interviews.
Initial Design: Removing Unnecessary Elements
🔹 The initial UI resembled a chat-based interview format, which many participants found confusing—they didn’t realize they could simply speak their responses.
🔹 Some unnecessary components cluttered the interface, leading to cognitive overload. Users weren’t sure which elements were interactive and which were purely informative.
🔹 The visual design felt outdated and lacked the polish expected from modern research platforms.
Iteration 1: Creating a Seamless Experience
We made several key refinements based on participant feedback:
✅ Clarifying Input Options – We redesigned the interaction model so participants immediately knew how to respond:
They could speak their answers naturally, just as they would in a Zoom or Google Meet call (the speech-first input is sketched in code after this list).
Subtle visual cues, such as a pulsing microphone indicator, reinforced that speaking was the intended method.
✅ Improving Stimuli Presentation – Many research studies required participants to interact with stimuli, such as prototypes, videos, or images. To optimize this experience:
We created a flexible media viewer that allowed participants to interact with stimuli without disrupting the interview flow.
We ensured full mobile responsiveness, allowing participants to engage with Leo on any device without UI distortions.
We adjusted the layout to keep the conversation itself front and center.
✅ Creating a Familiar Experience – Since AI-moderated interviews were a new concept, we took inspiration from existing tools:
The participant interface was designed to resemble video conferencing apps like Zoom and Google Meet, making it feel instantly familiar.
Minimal onboarding was required, reducing friction and ensuring that participants could start their interviews without any technical difficulties.
The UI was designed to be clean and distraction-free, keeping the focus on the conversation rather than unnecessary interface elements.
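For the speech-first input mentioned above, a browser sketch using the Web Speech API illustrates the idea. Support varies by browser (Chrome exposes it as webkitSpeechRecognition), and the indicator class name is an assumption, not Leo's actual markup.

// Minimal speech-first input: listen continuously and pulse the mic indicator while recording.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
const recognition = new SpeechRecognitionImpl();
recognition.continuous = true;       // keep listening for the whole answer
recognition.interimResults = true;   // stream partial transcripts as the participant speaks

const micIndicator = document.querySelector<HTMLElement>(".mic-indicator");  // hypothetical element
recognition.onstart = () => micIndicator?.classList.add("pulsing");   // visual cue: "we're listening"
recognition.onend = () => micIndicator?.classList.remove("pulsing");

recognition.onresult = (event: any) => {
  const transcript = Array.from(event.results)
    .map((result: any) => result[0].transcript)
    .join(" ");
  // Hand the transcript to the moderation layer so Leo can decide whether to probe or advance.
  console.log("Participant said:", transcript);
};

recognition.start();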
Final Design & Solution
The Redesigned Researcher Interface: A Streamlined Setup Experience
After multiple iterations and usability testing, we delivered a final researcher interface that resolved the previous issues while introducing intuitive, scalable, and flexible workflows.
Key Enhancements
✅ A Structured Interview Guide – The guide is now broken into collapsible sections, reducing cognitive load and improving readability.
✅ Drag-and-Drop Reordering – Researchers can now easily move questions and sections, allowing for better organization.
✅ Integrated AI Control Panel – A dedicated space for researchers to define Leo’s behavior, including question flow logic, restricted topics, and probing strategies.
✅ Enhanced Visual Hierarchy – Questions, probes, and stimuli are now clearly distinguished, ensuring researchers can build guides without confusion.
✅ Live Preview Feature – Researchers can see how their interview flow will play out in real time, reducing uncertainty before launching interviews.
The Optimized Participant Experience: Conversational & Intuitive
To make Leo’s moderation feel as natural as possible, the participant experience was redesigned to closely mirror familiar video conferencing tools while introducing AI-powered enhancements.
Key Enhancements
✅ A Clean, Distraction-Free Interface – Participants are greeted with a simple and modern UI, eliminating unnecessary components.
✅ Clear Input Options – Users immediately understand they can speak their responses, with an animated microphone indicator reinforcing this behavior.
✅ Seamless Stimuli Presentation – Whether interacting with images, videos, or prototypes, participants can engage with stimuli effortlessly while continuing their conversation with Leo.
✅ Optimized for Mobile – The UI automatically adjusts to different screen sizes, ensuring a seamless experience across desktop, tablet, and mobile devices.
✅ Smart AI Interaction – Leo dynamically adapts based on participant responses, ensuring a more human-like conversation flow.
Impact & Outcomes
After implementing these refinements, we conducted final usability tests to measure improvements. Results showed:
📈 45% Faster Interview Setup – Researchers could now build structured interview guides in nearly half the time required with the initial prototype.
📈 Reduction in Participant Confusion – Fewer questions about how to interact with Leo, leading to smoother interview sessions.
📈 Higher Researcher Adoption Rates – With improved workflows and AI control, researchers felt more confident using Leo in real projects.
These improvements positioned Leo as a game-changing AI moderator, allowing researchers to conduct rich, scalable, and efficient qualitative research without the traditional constraints of live moderation.