Comment by savolai

2 years ago

I’d love to understand the relevance of this comment, but I sincerely don’t.

You describe two cases that are specifically designed to anticipate the needs of professionals operating a system. That's automation, sure, but not AI. The system doesn't even ostensibly understand user intent; it's still plainly and obviously deterministic, however complex.

Do you have an underlying assumption that tech should only be for solving professional problems?

Nielsen comes from the field of Human-Computer Interaction, which to me covers a more varied range of usage contexts than that.

LLMs have flaws, sure.

But how does all this at all relate to the paradigm development the article discusses?

LLMs have flaws, but they are exceptionally good at transforming data, or outputting it in the format I want.

I once asked ChatGPT to tabulate the calories of different foods. I then asked it to convert the table to CSV. I even asked it to provide a SQL INSERT statement for the same table. Now, the data might be incorrect, but the transformation of that data never was.
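To make that concrete, a row from that kind of table would go from CSV to a SQL INSERT roughly like this (the table and column names here are just illustrative, and so are the values):

    food,calories_per_100g
    apple,52

    INSERT INTO foods (food, calories_per_100g) VALUES ('apple', 52);

The point is that even if "52" were wrong, it stays consistently "52" across every format.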

This works with complex transforms as well, like asking it to create a docker compose file from a docker run or podman run command, and vice versa. Occasionally the transform would be wrong, but then you'd realise it was just out of date with the newer format, which is expected because its knowledge is limited to 2021.
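As a hypothetical sketch of that kind of transform (the image name and ports are just an example), a command like

    docker run -d --name web -p 8080:80 nginx:latest

maps to a compose file along these lines, in the newer format that no longer needs a top-level version key:

    services:
      web:
        image: nginx:latest
        ports:
          - "8080:80"

It's exactly that kind of format drift (older compose files requiring a version field) where a 2021 knowledge cutoff shows.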