Comment by tempodox

1 day ago

This sounds very much like the kinds of mistakes that LLMs typically make. It's a pity, I would love a good language learning platform.

A fundamental problem with language learning built around an LLM is that the one thing you can guarantee is that no two people will have a consistent experience, nor will there ever be 100% freedom from error. That makes it very hard to predict, and therefore to market, what or how people will learn.

I think this company will end up pivoting into a B2B context before long. Hopefully they will still stick to the mission, but who knows (and I wouldn't fault them if they don't – survival comes first).

  • > nor will there ever be 100% freedom from error

    That is not a problem. Language is messy; you don't need 100% accuracy to learn. The problem is that LLM errors are fundamentally different from human errors, and you won't even know how to recognize them.

    Your interlocutors can work around human errors, because those tend to follow the same patterns within a given language. But they will freak out at LLM errors.

  • The trend I've seen in these AI tech companies is that they launch their MVP using base models (or, in this case, by fine-tuning GPT-4). This gives them enough traction for a seed round, but 2+ years later they don't have the talent to actually improve the product beyond that.

    If OpenAI put resources into language learning, it could build a great product. But relying on someone else's tech hasn't proven to be a good strategy for third-party devs.