Comment by dehugger

13 hours ago

surprising considering you just listed two primary use cases (exploring codebases/data models + creating documentation)

Exploring a codebase tells you WHAT it's doing, but not WHY. In older codebases you'll often find weird sections of code that solved a problem that may or may not still exist. Maybe there was an import process that always left three carriage returns at the end of each record, so now you've got some funky "let's remove up to three carriage returns" function that probably isn't needed. But are you 100% sure it's not needed?
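A function like the one described might look something like this (a hypothetical sketch; the function name and the "three carriage returns" behavior are invented for illustration, matching the anecdote above):

```python
def strip_trailing_carriage_returns(record: str) -> str:
    """Remove up to three trailing carriage returns from a record.

    Hypothetical legacy cleanup: an old import process supposedly
    appended up to three '\r' characters to each record. Nobody is
    sure whether that process still runs, so nobody dares delete this.
    """
    for _ in range(3):
        if record.endswith("\r"):
            record = record[:-1]
    return record
```

Reading this code tells you exactly what it does and nothing about whether the upstream import process still exists, which is precisely the WHAT-versus-WHY gap.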

Same story with data models: let's say you have the same data (customer contact details) in slightly different formats in 5 different data models. Which one is correct? Why are the others different?

Ultimately someone has to solve this mystery and that often means pulling people together from different parts of the business, so they can eventually reach consensus on how to move forward.

> creating documentation

How is an AI supposed to create documentation, except the most useless box-ticking kind? It only sees the existing implementation, so the best it can do is describe what you can already see (maybe with some stupid guesses added in).

IMHO, if you're going to use AI to "write documentation," that's disposable text and not for distribution. Let the next guy generate his own, and he'll be under no illusions about where the text he's reading came from.

If you're going to write documentation to distribute, you had better type out words from your own damn mind based on your own damn understanding with your own damn hands. Sure, use an LLM to help understand something, but if you personally don't understand, you're in no position to document anything.

I don't find this surprising. Code and data models encode the results of accumulated business decisions, but nothing about the decision-making process or rationale. Most of the time, that information is stored only in people's heads, so any automated tool is necessarily blind.

  • This succinctly captures one of the key issues with (current) AI actually solving real problems outside of small "sandboxes" where it has all the information.

    When an AI can email/message all the key people who hold the institutional knowledge, ask them the right discovery questions (probably over a few rounds, working out which bits are human "hallucinations" that don't make sense), collect that information, and use it to create a solution, then human jobs will be in real trouble.

    Until then, AI is just a productivity boost for us.

    • The AI will also have to be trained to be diplomatic and maybe even cunning, because, as I can personally attest, answering questions from an AI is an extremely grating and disillusioning experience.

      There are plenty of workers who refuse to answer questions from a human until it’s escalated far enough up the chain to affect their paycheck / reputation. I’m sure that the intelligence being artificial will only multiply the disdain / noncompliance.

      But then maybe there will be strategies for masking where requests come from, like a system that anonymizes all requests for information. Even so, I feel like people would still ping / walk up to their colleague in meatspace and say “hey, that request came from me, thanks!”