Keplar raised $3.4M to make research 10x faster – here's what gets left behind
AI startups are compressing research timelines from weeks to days at 30% of the cost. But as the $140 billion market research industry automates customer interviews, it risks losing the craft of deep human insight—and the pipeline of researchers who know how to find it.
Keplar's voice AI can conduct 1,000 customer interviews in the time it takes a human moderator to schedule 20. The startup promises insights in days at 30% of the cost of traditional research. The question isn't whether AI will transform market research—it's what gets lost in translation.
The AI research wave
Keplar, founded by former Google engineers Dhruv Guliani and William Wen, raised $3.4 million from Kleiner Perkins in September 2025. The company emerged from the South Park Commons founder fellowship programme in 2023, when the duo spoke with market researchers and brand managers and realised that traditional tools—written surveys and human-conducted interviews—could now be replaced by conversational AI. The company's pitch centres on efficiency: 10 times more customer conversations at 30% of the cost. The AI never experiences interviewer fatigue, never leads conversations with unconscious bias, and handles logistics that typically consume weeks of project time.
Keplar isn't alone. A wave of well-funded startups is targeting different segments of the research value chain, each promising to automate what consultancies charge premium rates to do manually.
- Dialogue AI, which raised $6 million in October 2025, focuses on conversational AI for automated customer interviews. The startup, founded by former Nextdoor executives Justin Hoang, Hubert Chen, and Benjamin Lo, positions itself as transforming the customer research industry through AI-powered conversations. Unlike traditional research firms that conduct dozens of interviews, Dialogue AI promises to scale conversations into the hundreds or thousands whilst maintaining depth through natural language processing.
- Conveo, a Y Combinator alum from Belgium, raised $5.3 million to specialise in AI-powered video interviews. The platform serves major brands including Unilever, Orange, Sanofi, and Google, delivering insights in hours rather than weeks. Conveo describes itself as "the first AI-powered research coworker," handling everything from behavioural research to innovation, UX, branding, and customer experience. The company was founded in April 2024 and has quickly established enterprise credibility by landing clients that traditional consultancies spent decades courting.
- Clozd takes a different approach, carving out a specific niche rather than replacing general market research. The Utah-based company, which leads the win-loss analysis category, launched an AI interviewer trained on more than 50,000 customer conversations. Enterprise clients, including Microsoft, Toast, Gong, and Blue Cross Blue Shield, use Clozd not for general research but for a single, critical function: understanding why customers choose or reject products during the purchase decision.
 
These platforms share a focus on qualitative research—understanding the "why" behind customer behaviour through conversations. The quantitative side of market research—the "how many" questions answered through surveys and statistical analysis—faces different disruption dynamics.
Quantitative research was already partly automated—online survey platforms like SurveyMonkey, Qualtrics, and Typeform digitised data collection years ago. The labour-intensive aspects—designing surveys, recruiting respondents, cleaning data—still require human oversight, but the core task of gathering structured responses from large samples has been efficient for a decade. AI is enhancing these platforms through better survey design, adaptive questioning, and faster analysis, but the transformation is evolutionary rather than revolutionary.
The qualitative research market, by contrast, remained stubbornly manual. Conducting in-depth interviews required skilled moderators, careful scheduling, hours of transcription, and painstaking analysis of unstructured responses. This is where AI creates the most dramatic efficiency gains—not because the technology is more advanced, but because the baseline was so labour-intensive.
This explains why qualitative AI platforms are attracting venture capital, whilst quantitative tools generate less excitement. Keplar and its competitors aren't just improving existing workflows—they're eliminating entire categories of expensive human labour. The opportunity is larger, the cost savings more substantial, and the transformation more fundamental.
The efficiency revolution
The efficiency gains are real. Traditional qualitative research requires human moderators who can conduct perhaps four to six in-depth interviews per day before mental fatigue degrades performance. Recruiting participants, scheduling calls across time zones, transcribing recordings, and coding responses consume enormous time. A modest project with 30 interviews might take eight to ten weeks from kickoff to final report.
AI platforms compress this timeline dramatically. Automated recruitment reaches thousands of potential participants simultaneously. The AI interviewer operates around the clock, conducting dozens or hundreds of conversations in parallel. Natural language processing analyses responses in real time, identifying patterns without manual coding. The same project that took ten weeks now finishes in five days.
The cost structure shifts just as dramatically. Traditional research firms charge £50,000-150,000 for projects involving 30-40 interviews. This pricing reflects the labour required: moderator time, transcription services, analyst hours, and project management. When AI handles these tasks, costs drop by 60-70%. Keplar's claim of delivering research at 30% of traditional costs isn't marketing hyperbole—it reflects the extent to which human labour has been eliminated.
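To make the arithmetic concrete, here is a rough back-of-envelope comparison using the midpoints of the figures above. The values are illustrative assumptions drawn from this article, not any vendor's actual pricing or timeline.

```python
# Back-of-envelope comparison of a 30-interview qualitative project,
# using midpoints of the figures quoted in this article. All values are
# illustrative assumptions, not any vendor's actual pricing.

traditional_weeks = 9            # midpoint of the 8-10 week timeline
traditional_cost_gbp = 100_000   # midpoint of the £50,000-150,000 range

ai_days = 5                      # "ten weeks now finishes in five days"
ai_cost_gbp = 0.3 * traditional_cost_gbp  # the "30% of the cost" claim

time_compression = (traditional_weeks * 5) / ai_days   # in working days
cost_reduction = 1 - ai_cost_gbp / traditional_cost_gbp

print(f"Traditional: ~{traditional_weeks} weeks, £{traditional_cost_gbp:,}")
print(f"AI-assisted: ~{ai_days} days, £{ai_cost_gbp:,.0f}")
print(f"Roughly {time_compression:.0f}x faster, {cost_reduction:.0%} cheaper")
```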
The depth question
But interviews aren't just about extracting information. They're about building rapport, reading nonverbal cues, following unexpected threads, and understanding the context that lies beneath the surface of words. Human moderators adjust their approach based on participants' responses. They probe deeper when they sense hesitation, rephrase questions when answers seem confused, and pick up on emotional undertones that signal essential insights.
Can AI do this?
The technology has advanced considerably. Modern conversational AI detects sentiment, adjusts pacing, and follows branching logic based on previous answers. Voice analysis captures tone and inflection. The systems learn from thousands of interviews to improve their questioning techniques. Keplar's voice AI, built on the speech and language model expertise Guliani developed at Google, handles natural conversation flow with surprising sophistication.
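For readers curious what "branching logic" looks like in practice, the sketch below is a deliberately crude, hypothetical illustration of the control flow, with keyword rules standing in for the speech and language models these platforms actually use. It is not Keplar's implementation or anyone else's.

```python
# Hypothetical sketch of an AI moderator's branching logic: choose the
# next probe based on crude signals in the previous answer. Real systems
# use speech and language models; this keyword version only illustrates
# the idea of adapting the interview to what the participant just said.

HEDGE_WORDS = ("maybe", "probably", "not sure", "i guess", "sort of")

def next_probe(answer: str) -> str:
    """Pick a follow-up question based on simple cues in the answer."""
    text = answer.lower()

    if len(text.split()) < 5:
        # Very short answers get an open-ended expansion probe.
        return "Could you tell me a bit more about that?"
    if any(hedge in text for hedge in HEDGE_WORDS):
        # Hedging suggests hesitation worth exploring further.
        return "You sound a little unsure. What would make you more confident?"
    if "price" in text or "cost" in text or "expensive" in text:
        # Branch into the pricing section of the discussion guide.
        return "How does the price compare with what you expected to pay?"
    # Otherwise move on to the next scripted question.
    return "Thanks. How did that affect your decision overall?"

print(next_probe("Probably, if the onboarding were less confusing."))
# -> the hesitation probe, triggered by the word "probably"
```

Even this toy version shows why standardisation is part of the pitch: every participant who hedges gets the same follow-up, which is exactly the consistency (and the rigidity) the debate below turns on.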
Yet something fundamental may be missing. A skilled human interviewer brings intuition developed over years of conversations. They recognise when a participant is giving socially acceptable answers rather than honest ones. They notice when someone contradicts themselves or when a throwaway comment reveals deeper attitudes. They understand the cultural context that shapes how people express themselves.
Traditional researchers argue these nuances matter enormously. A participant might say they'd "probably" purchase a product, but their tone suggests scepticism. They might agree with a concept in principle, but describe usage scenarios that reveal misunderstanding. They might offer rational justifications for behaviour that's actually driven by emotional factors they're reluctant to acknowledge.
AI advocates counter that these human insights often reflect moderator bias as much as participant truth. Different interviewers interpret the same conversation differently. An enthusiastic moderator might read genuine interest where a sceptical one sees polite agreement. Standardisation—ensuring every participant experiences the same interview—prevents this variability.
They also point out that scale changes what's possible. Traditional research samples 20-30 people because conducting more interviews becomes prohibitively expensive. But are 30 people enough to understand a market segment? With AI, you can interview 500 or 1,000 participants for a similar cost. The statistical confidence improves dramatically, even if individual interview depth decreases slightly.
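The statistical side of that argument is easy to illustrate. The sketch below assumes a simple random sample and a plain proportion estimate (say, the share of participants citing price), which is a simplification of how qualitative findings are actually analysed, but it shows how the 95% margin of error shrinks as the sample grows from 30 to 1,000.

```python
# How sampling error shrinks with sample size: 95% margin of error for
# an estimated proportion (e.g. the share of customers citing price).
# Assumes a simple random sample, a simplification used purely to
# illustrate the breadth-versus-depth trade-off discussed here.
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p in a sample of n."""
    return z * sqrt(p * (1 - p) / n)

p = 0.5  # worst case: the proportion with maximum uncertainty
for n in (30, 300, 500, 1000):
    print(f"n={n}: +/- {margin_of_error(p, n):.1%}")

# Roughly: n=30 -> +/- 17.9%, n=300 -> +/- 5.7%,
#          n=500 -> +/- 4.4%, n=1000 -> +/- 3.1%
```

Tighter error bars on "how many", of course, say nothing about whether the "why" behind each answer was captured correctly, which is the trade-off the next section turns to.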
When speed wins, when depth matters
The trade-off becomes clear: depth versus breadth. Traditional research goes deep with small samples. AI research goes broad with large samples. Both approaches have value. The question is which matters more for specific decisions.
For some research questions, breadth clearly wins. Understanding how many customers encounter a specific problem, which features drive purchase decisions, or how satisfaction varies across demographic segments requires large samples. AI's ability to interview hundreds or thousands of people efficiently provides better data for these questions than 30 in-depth conversations.
For other questions, depth remains essential. Exploring why customers feel specific ways, understanding the emotional journey behind purchase decisions, or identifying unmet needs that customers struggle to articulate requires patient, skilled interviewing. A human moderator who spends an hour building rapport and carefully probing responses may uncover insights that automated analysis misses.
Consider a concrete example. A company wants to understand why customer retention declined last quarter. Traditional research might interview 25 customers who cancelled subscriptions, spending 45 minutes with each to explore their full experience. The researcher identifies themes: pricing concerns, feature gaps, poor onboarding, and competitive alternatives. But with only 25 conversations, they lack statistical confidence about which factors matter most.
AI research interviews 300 customers who cancelled, spending 15 minutes with each. The system identifies the same themes with statistical confidence about their prevalence. But the shorter conversations miss context—the customer who mentions pricing but is actually frustrated by poor support, the customer who cites a competitor but is really concerned about the company's financial stability.
Which approach provides better insight? The honest answer is: it depends on what you plan to do with the findings.
The market is essentially betting on both approaches. Enterprise clients with large research budgets use AI for scale and human moderators for depth. They run AI interviews to identify patterns, then conduct traditional research to understand the most important patterns in greater depth. This hybrid model maximises the strengths of both approaches.
Smaller companies face harder choices. They lack budgets for comprehensive research. AI platforms democratise access to customer insights that previously required expensive consultancies. A startup can now afford to interview 100 customers instead of guessing based on the founder's intuition. That's genuine progress, even if the interviews lack the depth of traditional research.
Traditional consulting firms recognise this threat. Some are developing their own AI tools to maintain competitiveness. Others emphasise their unique value—strategic insight, industry expertise, stakeholder management—that goes beyond data collection. The message is clear: AI can gather information, but humans provide wisdom.
This positioning may work for complex, high-stakes projects where clients pay for experienced judgement. But it struggles in the middle market, where companies need good-enough insights quickly more than they need perfect insights eventually.
The vanishing middle
What gets lost in this transformation? Perhaps something subtle but important: the craft of interviewing. Skilled researchers spend years developing their technique—how to build trust quickly, how to ask questions that reveal rather than lead, how to listen for what's not being said. As AI handles more interviews, fewer people develop these skills—the expertise atrophies.
This matters because not all research will be automatable. Sensitive topics, complex B2B decisions, cultural contexts, situations requiring human empathy—these will likely need human moderators for years to come. But if the pipeline of researchers dries up because entry-level work shifts to AI, where will experienced interviewers come from?
The consulting industry faced similar dynamics: as routine analysis moved from junior consultants to algorithms, firms struggled to develop senior partners capable of handling unstructured strategic problems. You can't jump from AI-assisted analysis to seasoned strategic advice without the middle years that build judgement.
Market research may follow this path. AI platforms handle routine projects, leaving human researchers to focus on complex work. But complexity requires experience that comes from doing routine work. The middle vanishes.
This isn't an argument against AI research tools. The efficiency gains are too substantial, the cost reductions too significant. Companies will adopt these platforms because they solve real problems. The question is whether the industry plans for what it loses alongside what it gains.
What this means for the industry
The strategic implications vary dramatically depending on where you sit.
For companies buying research, the opportunity is clear: access to customer insights that were previously unaffordable or too slow to be useful. Startups and mid-market firms can now run sophisticated research programmes that once required Fortune 500 budgets. The risk is mistaking volume for understanding—conducting 500 automated interviews doesn't guarantee better decisions if you're asking the wrong questions or missing crucial context.
For traditional consulting firms, the path forward requires an honest assessment of what clients actually pay for. If your value is conducting interviews and producing reports, AI platforms will undercut you. If your value lies in strategic interpretation, stakeholder management, and translating insights into action, you may be insulated—at least until AI gets better at those tasks, too. The firms that survive will likely be those that adopt AI tools whilst maintaining deep domain expertise.
For researchers and consultants building their careers, the message is uncomfortable but clear: routine interviewing skills won't sustain a career much longer. The valuable expertise will be framing the right questions, interpreting ambiguous findings, and handling the complex human dynamics that AI struggles with. But developing that expertise traditionally required years of conducting routine interviews. The industry needs to solve this pipeline problem before it creates a gap between junior AI-assisted analysts and the senior researchers they're supposed to become.
The transformation is already underway. Keplar and its competitors are capturing market share from traditional firms that have operated largely unchanged for decades. The research that takes weeks will increasingly take days. What remains to be seen is whether anyone is planning for the expertise gap this creates, or whether the industry will discover too late that automating the boring parts also automates away the training ground for the interesting parts.
