"Chat UI" can "feel" a bit thin from an eng/product when you initially think about, and that's something we've had to grapple with over time. As we've dug deeper, my worry about that has gone down over time.
For most people, the chat is the entrypoint to LLMs, and people are growing to expect more and more. So now it might be basic chat, web search, internal RAG, deep research, etc. Very soon, it will be more complex flows kicked off via this interface (e.g. cleaning up a Linear project). The same "chat UI" that is used for basic chat must (imo) support these flows to stay competitive.
On the engineering side, things like Deep Research are quite complex/open-ended, and there can be huge differences in quality between implementations (e.g. ChatGPTs vs Claude). Code interpreter as well (to do it securely) is quite a tricky task.
My understanding of YC is that they place more emphasis on the founders than the initial idea, and teams often pivot.
That being said, I think there is an opportunity for them to discover and serve an important enterprise use case as AI in enterprise hits exponential growth.
There are many markets (Europe), and highly regulated industries with air-gapped deployments where the typical players (ChatGPT, MS Copilot) in the field are having a hard time.
On another axis, if you are able to offer BYOK deployments and the customers have huge staff with low usage, it's pretty easy to compete with the big players due to their high per-seat pricing.
Agree that's a lot of other projects out there, but why do you say the Vercel option is more advanced/mature?
The common trend we've seen is that most of these other projects are okay for a true "just send messages to an AI and get responses" use case, but for most things beyond that they fall short / there a lot of paper cuts.
For an individual, this might show up when they try more complex tasks that require multiple tool calls in sequence or when they have a research task to accomplish. For an org, this might show up when trying to manage access to assistants / tools / connected sources.
Our goal is to make sure Onyx is the most advanced and mature option out there. I think we've accomplished that, so if there's anything missing I'd love to hear about it.
I wasn't trying to be a hater, i think it is great they got funded for this. It just felt like there are so many free options and alternatives out there that are addressing basically the same things (and look almost exactly the same) it genuinely surprised me.
"Chat UI" can "feel" a bit thin from an eng/product when you initially think about, and that's something we've had to grapple with over time. As we've dug deeper, my worry about that has gone down over time.
For most people, the chat is the entrypoint to LLMs, and people are growing to expect more and more. So now it might be basic chat, web search, internal RAG, deep research, etc. Very soon, it will be more complex flows kicked off via this interface (e.g. cleaning up a Linear project). The same "chat UI" that is used for basic chat must (imo) support these flows to stay competitive.
On the engineering side, things like Deep Research are quite complex/open-ended, and there can be huge differences in quality between implementations (e.g. ChatGPTs vs Claude). Code interpreter as well (to do it securely) is quite a tricky task.
My understanding of YC is that they place more emphasis on the founders than on the initial idea, and teams often pivot.
That said, I think there is an opportunity for them to discover and serve an important enterprise use case as AI adoption in the enterprise hits exponential growth.
W24, those were different times.
Yeah that's like so long ago. But yeah, good luck competing with ChatGPT.
There are many markets (e.g. Europe) and highly regulated industries with air-gapped deployments where the typical players in the field (ChatGPT, MS Copilot) are having a hard time.
On another axis, if you can offer BYOK deployments and the customer has a huge staff with low usage, it's pretty easy to compete with the big players due to their high per-seat pricing.
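To make that last point concrete, here's a rough back-of-envelope sketch in Python; all the numbers are made-up assumptions for illustration, not anyone's actual pricing:

    # Hypothetical figures for illustration only
    seats = 5000                   # large org, mostly light users
    per_seat_monthly = 30.0        # assumed big-vendor per-seat price
    incumbent_cost = seats * per_seat_monthly              # 150,000 / month

    avg_queries_per_seat = 20      # low usage per month
    cost_per_query = 0.02          # assumed BYOK token cost per query
    byok_usage_cost = seats * avg_queries_per_seat * cost_per_query   # 2,000 / month
    byok_platform_fee = 10000.0    # assumed flat platform fee
    byok_total = byok_usage_cost + byok_platform_fee        # 12,000 / month

At low usage, the per-seat model charges for capacity that mostly goes unused, which is the gap a BYOK/usage-based offering can exploit.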
why?
There are a million other projects just like this one, many of which are much more advanced and mature, including from Vercel. There's no moat.
Agreed, there are a lot of other projects out there, but why do you say the Vercel option is more advanced/mature?
The common trend we've seen is that most of these other projects are okay for a true "just send messages to an AI and get responses" use case, but for most things beyond that they fall short / there are a lot of paper cuts.
For an individual, this might show up when they try more complex tasks that require multiple tool calls in sequence or when they have a research task to accomplish. For an org, this might show up when trying to manage access to assistants / tools / connected sources.
Our goal is to make sure Onyx is the most advanced and mature option out there. I think we've accomplished that, so if there's anything missing I'd love to hear about it.
I wasn't trying to be a hater; I think it's great they got funded for this. It just felt like there are so many free options and alternatives out there that address basically the same things (and look almost exactly the same) that it genuinely surprised me.