Comment by kevinwu2981

3 days ago

Why do you think it didn't work out in legal? We currently don't focus on that domain.

In general, we currently have very high success rates with relatively constrained use cases, such as lead qualification and well-scoped customer service tasks (e.g., appointment booking, travel cancellation).

Voice AI is hard in general because it is WYSIWYG (there is no human in the loop between what the bot says and what the person on the other side hears). Not sure about legal, but for more complex use cases (e.g., product refunds in retail), there are many permutations in how two different customers might frame the same issue, so it can be harder to instruct the AI agent precisely enough to guarantee high automation rates (given the plenitude of edge cases).

It is therefore our belief that voice AI works best when the bot is leading the conversation and it is always very clear what the next steps are...

I think the problem relates to the core value proposition of automating an intake department with voice AI. The best voice AI customer is in an industry where there is a clear increase in value from being able to handle a larger volume of calls. That was not the case in the legal world, where one missed client might mean a loss of millions (and many firms live off of < 10 successful cases a year).

Therefore I think the customer service and lead pre-qualification verticals make a lot more sense. Since you guys have the numbers, I'm curious to learn more about how you define constraints for the bot and how often calls in these verticals deviate from those constraints.

I'm also curious about your opinion on, or whether you've seen, any successful use cases where the bot has to be a bit more "creative", either stringing together information given to it or making reasonable extrapolations beyond the information it has.

  • We see the main value prop of voice AI as enabling higher call volumes in a cost-efficient manner. There is clearly a slight trade-off on quality, because humans will do a better job on "high-stakes" calls and where more creativity is required.

    It thus makes sense that it might not work for legal, since every call there might be high stakes.

    Having the bot be "creative" is actually an interesting proposition. We currently do not focus on it, since the majority of our customers want the bot to be predictable and not hallucinate.