Comment by awongh
4 days ago
This is exactly what I use LLMs for. To just read the docs for me and pull out the base level demo code that's buried in all the AWS documentation.
Once I have that I can also ask it for the custom tweaks I need.
Back when GPT-4 was the new hotness, I dumped the markdown text from the Azure documentation GitHub repo into a vector index and wrapped a chatbot around it. That way, I got answers based on the latest documentation instead of a year-old model's fuzzy memory.
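The setup was roughly this shape. A minimal sketch, where chromadb, the local clone path, and the naive fixed-size chunking are illustrative assumptions, not necessarily the exact stack:

```python
# Minimal sketch of the docs-repo -> vector index -> chatbot setup.
# chromadb, the clone path, and fixed-size chunking are assumptions here.
import pathlib
import chromadb

client = chromadb.Client()  # in-memory index; persistent clients also exist
docs = client.create_collection(name="azure-docs")

# Index every markdown file from a local clone of the documentation repo.
for i, md_file in enumerate(pathlib.Path("azure-docs").rglob("*.md")):
    text = md_file.read_text(encoding="utf-8", errors="ignore")
    # Naive fixed-size chunks; a real setup would split on headings.
    for j, start in enumerate(range(0, len(text), 2000)):
        docs.add(
            ids=[f"{i}-{j}"],
            documents=[text[start:start + 2000]],
            metadatas=[{"source": str(md_file)}],
        )

# Retrieve the most relevant chunks and paste them into the chat prompt.
hits = docs.query(
    query_texts=["AKS Windows Server container limitations"],
    n_results=5,
)
context = "\n---\n".join(hits["documents"][0])
```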
I now have the daunting challenge of deploying an Azure Kubernetes cluster with... shudder... Windows Server containers on top. There's a mile-long list of deprecations and missing features that were fixed just "last week" (or whatever). That is just too much for mere humans to keep up with.
I'm thinking of doing the same kind of customised chatbot, but with a scheduled daily script that pulls the latest doco commits, the Azure blogs, and the open GitHub issue tickets in the relevant projects, and dumps all of that directly into the chat context.
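Something like this sketch; the repo names, the one-day window, and the plain GitHub REST calls are placeholders for whichever sources end up mattering:

```python
# Sketch of the daily pull: recent docs commits plus open issue activity,
# concatenated into one blob for the chat context. Repo names and the
# one-day window are placeholders.
import datetime
import requests

SINCE = (
    datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=1)
).isoformat()

def latest_commit_messages(repo: str) -> list[str]:
    # Commit messages pushed to the repo in the last day.
    r = requests.get(
        f"https://api.github.com/repos/{repo}/commits",
        params={"since": SINCE},
        timeout=30,
    )
    r.raise_for_status()
    return [c["commit"]["message"] for c in r.json()]

def recent_open_issues(repo: str) -> list[str]:
    # Open issues (and PRs; GitHub's issues endpoint includes them)
    # updated in the last day.
    r = requests.get(
        f"https://api.github.com/repos/{repo}/issues",
        params={"state": "open", "since": SINCE},
        timeout=30,
    )
    r.raise_for_status()
    return [f"{i['title']}\n{i.get('body') or ''}" for i in r.json()]

context = "\n\n".join(
    latest_commit_messages("MicrosoftDocs/azure-docs")
    + recent_open_issues("Azure/AKS")
)
```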
I'm going to roll up my sleeves next week and actually do that.
Then, then, I'm going to ask the wizard in the machine how to make this madness work.
Pray for me.
I just want a service that does this. Pulls the latest docs into a vector DB and puts a chat front-end on top. Not the Windows containers bit.
This could not possibly go wrong...
You're braver than me if you're willing to trust the LLM here. Fine if you're ready to properly review all the relevant docs once you have code in hand, but there are some very expensive risks otherwise.
This is LLM as semantic search: it's far easier to start from the basic example code and Google to confirm that it's correct than it is to read the docs from scratch and piece the basic example together yourself. Especially for things like configuration and permissions.
Sure, if you do that second part of verifying it. If you just get the LLM to spit it out and then YOLO it into production, it's going to make you sad at some point.
There's nothing brave about this. It generally works the way it should, and even if it doesn't, you just go back and see what went wrong.
I take code from Stack Overflow all the time, and there's like a 90% chance it works. What's the difference here?
However, on AWS the difference between "generally working the way it should" and not can be a $30,000 cloud bill racked up in a few hours, with EC2 going full speed ahead mining Bitcoin.
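If you're going to YOLO generated infrastructure code, at minimum put a spending guard in place first. A sketch using boto3's Budgets API, where the dollar limit and email address are placeholders:

```python
# Spending guard: a monthly cost budget that emails at 80% of the limit.
# The dollar amount and address are placeholders.
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]

boto3.client("budgets").create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "runaway-spend-guard",
        "BudgetLimit": {"Amount": "500", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [
            {"SubscriptionType": "EMAIL", "Address": "you@example.com"}
        ],
    }],
)
```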
Well, the "accidentally making the S3 bucket public" scenario would be a good one. If you review carefully, with full understanding of what e.g. all your policies are doing, then great, no problem.
If you don't do that, will you necessarily notice that you accidentally leaked customer data to the world?
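A scripted audit at least gives you a fighting chance of noticing. A boto3 sketch that flags buckets made public via bucket policy (it won't catch ACL-based exposure, so check public access block settings too):

```python
# Flag buckets whose bucket policy makes them public.
# Won't catch ACL-based exposure; check public access block settings too.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        status = s3.get_bucket_policy_status(Bucket=name)
        if status["PolicyStatus"]["IsPublic"]:
            print(f"PUBLIC: {name}")
    except ClientError:
        pass  # no bucket policy attached; not public via policy
```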
The problem isn't the LLM; it's assuming its output is correct, just the same as assuming Stack Overflow answers are correct without verifying or understanding them.