Comment by RALaBarge

2 hours ago

The more I work with AIs (I build AI harnessing tools), the more similarities I see with the common attention failures humans make: I forgot this one thing and it fucks everything up; or you just told me, but I have so much else in my context that I forget that piece; or, like Claude last night, insisting to me while I'm directing it that it cannot SSH into another server, only for me to find it SSHing into said server around the fifth time I come back with a traceback, and it just fixes the issue!

All of these things humans do, and I don't think we can attribute it directly to language itself; it's attention and context, and we both have the same issues.

Right, but when humans are writing the code, they have learned to put downward pressure on the complexity of the system to help mitigate this effect. I don't get the sense that agents have gotten there yet.