Comment by p1esk
20 days ago
No, I mean that my job in its current form – as an ML researcher with a PhD and 15 years of experience – will be completely automated within two years.
Is the progress of LLMs moving up abstraction layers inevitable as they gather more data from each layer? First we fed LLMs raw text and code; now they are gathering our interactions with the LLM about the code it generates. It seems like you could then use those interactions to train an LLM that is good at prompting and fixing another LLM's generated code. Then it's on to the next abstraction layer.
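The flywheel described above could be sketched roughly as follows: logged interactions (prompt, generated code, human-corrected code) become supervised training pairs for a "fixer" model. This is a minimal illustration only; the record format and all names here are assumptions, not any real pipeline.

```python
# Hypothetical sketch: turn logged user/LLM interactions into
# fine-tuning pairs for a code-fixing model, as the comment suggests.
# The log schema (prompt / generated_code / accepted_code) is assumed.

def to_finetune_pairs(interactions):
    """Keep only interactions where the user corrected the generation;
    the flawed draft becomes the input, the correction the target."""
    pairs = []
    for rec in interactions:
        if rec["accepted_code"] != rec["generated_code"]:
            prompt = (
                f"Task: {rec['prompt']}\n"
                f"Draft:\n{rec['generated_code']}\n"
                "Fix the draft."
            )
            pairs.append({"input": prompt, "target": rec["accepted_code"]})
    return pairs

logs = [
    # User had to fix a syntax error -> useful training signal.
    {"prompt": "sum a list", "generated_code": "sum(xs",
     "accepted_code": "sum(xs)"},
    # Accepted as-is -> no correction to learn from.
    {"prompt": "print hi", "generated_code": "print('hi')",
     "accepted_code": "print('hi')"},
]
pairs = to_finetune_pairs(logs)
```

Only the first record yields a training pair, since the second generation was accepted unchanged; the resulting pairs would then feed an ordinary fine-tuning run.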
What you described makes sense, and it's just one of the things to try. There are lots of other research directions: online learning, more efficient learning, better loss/reward functions, better world models from training on YouTube/VR simulations/robots acting in the real world, better imitation learning, curriculum learning, etc. There will undoubtedly be architectural improvements, hardware improvements, longer context windows, insights from neuroscience, etc. There is still so much to research. And there are more AI researchers now than ever. Plus current AI models already make us (AI researchers) so much more productive.

But even if absolutely no further progress is made in AI research, and foundational model development stops today, there's so much improvement to be made in the tooling around the models: agentic frameworks, external memory management, better online search, better user interactions, etc. The whole LLM field is barely 5 years old.
If you want a machine (or in fact another human) to do something for you, there are two tasks you cannot delegate to them:
a) Specify what you want them to do.
b) Check if the result meets your expectations.
Does your current job include neither a nor b?
A and B happen at different abstraction levels. My abstraction level will be automated. My manager's level will probably last another year or so.
So your assumption is that it will ultimately be the users of software themselves who will throw some everyday language at an AI, and it will reliably generate something that meets those users' intuitive expectations?
3 replies →
What are you going to do for work in 2 years?
I have enough savings for a few years, so I might just move to a lower COL area, and wait it out. Hopefully after the initial chaos period things will improve.
For someone in your position, with your experience, it's quite depressing that your job is going to be automated. I feel quite anxious when I see younger generations in my country who say of themselves that they are lazy about learning new things. The next generation will be useless to capitalist societies, in the sense that they won't be able to bring value through administrative or white-collar work. I hope some areas of the industry will move toward AI slowly.