Comment by JoshuaDavid
7 days ago
I've had quite a bit of luck using AI-assisted tooling for some specific workflows, and very little luck with others. To the extent that there's a trend[^1], it seems to be this: tasks where I would spend a lot of time producing a very small amount of output that is easy to evaluate objectively[^2] are sped up considerably; tasks where I would produce a large amount of output quickly (e.g. boilerplate) are sped up slightly; and most other tasks are unaffected or even slowed down (if I try to use AI tooling for them and decide it's not good enough yet).
As always, my views are my own and do not necessarily reflect the views of my employer.
[^1]: There's less of a trend than I'd expect. There are some quite difficult-to-me tasks that AI nails (e.g. type system puzzles) and some trivial-to-me tasks that AI struggles with (e.g. "draw correct conclusions when an image is uploaded of an ever-so-slightly nonstandard data visualization like a stacked bar chart").
[^2]: My favorite example of this is creating a failing test that locally reproduces a bug reported on production. Sure, I _could_ write this myself, but these tests are usually a little finicky to write; once written, though, they are either obviously testing the right thing or obviously testing the wrong thing, and the code quality doesn't really matter. So there's not much benefit in having human-written code here, while there's a substantial benefit in having tests like this vs not having them.