Comment by sixdimensional
8 days ago
I somewhat agree with you, especially that one could identify a common abstraction that later an LLM could piggyback on top of.
Genuine question though - have you implemented an AI assistant/chat interface recently using LLMs on top of a UI?
I agree it can be a rabbit hole, but I just got through doing it on an app, and there were definitely some things it made way simpler, along with some complex scenarios that I'm not sure could have been handled any more simply.
I built a chat interface in 2017 (ChatScript dialog trees with hole-filling and semantic search) that was ostensibly meant to save our data scientists from redundant work, i.e., before spending all day writing a SQL script, describe what the script should do and see if one already exists. The chatbot would then ask for the parameters the script required, run the job, and present a CSV of the returned data.
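Roughly the shape of it, if it helps (a loose, made-up sketch, not the real code; the script registry, field names, and the naive keyword "search" are just stand-ins for what we actually had):

    # Simplified sketch of that kind of flow: match a description to an
    # existing script, fill the parameter "holes", run it, return a CSV.
    import csv
    import io

    SCRIPTS = {
        "daily_sales_rollup": {
            "description": "aggregate daily sales totals by region",
            "params": ["start_date", "end_date", "region"],
        },
        "customer_churn_export": {
            "description": "export customers flagged as churn risks",
            "params": ["as_of_date"],
        },
    }

    def find_script(user_description):
        # stand-in for the semantic search step: naive keyword overlap
        words = set(user_description.lower().split())
        def score(name):
            return len(words & set(SCRIPTS[name]["description"].lower().split()))
        return max(SCRIPTS, key=score)

    def run_job(name, args):
        # placeholder for actually submitting the existing script as a job
        return [{"script": name, **args}]

    def chat_turn(user_description):
        name = find_script(user_description)
        print(f"Looks like '{name}' already does that.")
        # the hole-filling step: prompt for each required parameter
        args = {param: input(f"Value for {param}? ")
                for param in SCRIPTS[name]["params"]}
        rows = run_job(name, args)
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
        return buf.getvalue()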
As we collected user feedback and refined the UX, we got closer and closer to an option tree that would have been better represented by a drop-down menu. It was kind of depressing, but I learned that the actual job of that R&D wasn't to come up with a superintelligent chatbot that replaced data scientists; it was to build the infrastructure that would let data scientists put their Python scripts in a common repository for reuse, without reinstalling them locally and screwing around with pyenvs.
Anyway, I'm also traumatized by my involvement with a YC startup that actually had a very good (if Enron-ish) product around peer-to-peer energy futures trading, which completely fell apart when investors demanded they make it "AI".
Sorry to hear about your bad experience. I do understand it; when the hype cycle causes perfectly good things to be thrown away or messed up for no practical reason, it really bugs me too.
In this case, it's so easy for some people to say anything that isn't AI is bad, or that you have to include AI "because it's the future".
I just wanted to clarify that I am very pragmatic in my approach to new tech and AI.
I used to work on enterprise ERP, MRP, sales and support systems (Oracle, Salesforce, ServiceNow, and more)... not always by choice.
At one point a project came up to rebuild our core customer experience portal across the business, and I got to use rule-based chatbot tooling to implement an assistant in that web app.
Not easy, but it worked well enough, especially for simple scenarios. For example, imagine a field support engineer being able to pull up the exact page of a big technical document for some obscure part, in seconds, just by asking for what they need.
Or being able to reorder some quantity of a part just by asking for it, or to check how many are in stock.
These are time savers, especially if you also have a voice interface to the assistant, so you can either type something or just ask out loud for what you need.
Tablets were common for manufacturing, inventory, and service staff. Being able to pull up our interface, say what they needed, and get prompted for the simple details - there were TONS of pragmatic small wins there.
Current LLMs just make that easier than the old rigid rule-based way we had to code those assistants (at the price of going wrong sometimes).
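To make that concrete, the LLM version of the routing looks roughly like this (a rough sketch, not our actual implementation; call_llm is a placeholder for whatever model client you use, and the intents, part numbers, and canned reply are invented for illustration):

    # Let the model map free text to a structured intent, then dispatch on it.
    import json

    PROMPT = (
        "Map the user's request to JSON with fields 'intent' "
        "(one of: lookup_doc_page, reorder_part, check_stock), "
        "'part_number', and 'quantity' (null if not given). Request: {text}"
    )

    def call_llm(prompt):
        # stand-in so the sketch runs on its own; in practice this is the model call
        return '{"intent": "check_stock", "part_number": "AB-1234", "quantity": null}'

    def reorder(part, qty):
        return f"Reorder placed: {qty} x {part}"

    def check_stock(part):
        return f"{part}: quantity on hand would be looked up here"

    def lookup_doc_page(part):
        return f"Opening the manual section for {part}"

    HANDLERS = {
        "reorder_part": lambda r: reorder(r["part_number"], r["quantity"]),
        "check_stock": lambda r: check_stock(r["part_number"]),
        "lookup_doc_page": lambda r: lookup_doc_page(r["part_number"]),
    }

    def handle(text):
        reply = call_llm(PROMPT.format(text=text))
        try:
            req = json.loads(reply)  # the model can still get this wrong
            return HANDLERS[req["intent"]](req)
        except (KeyError, ValueError):
            return "Sorry, I didn't catch that - could you rephrase?"

    print(handle("how many AB-1234 do we have left?"))

The old rule-based version was basically the same dispatch table, except we had to hand-write every pattern that could reach it.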
I care about efficiency, a pleasant user experience, and pragmatics... not AI for the sake of AI.
All that to say: if a drop-down box works better for some use case than AI - or even a rule-based chatbot - hell yeah! Save on them tokens.