Because according to the companies' own statements, AI is now in the top 0.1% of PhDs in math, coding, physics, law, medicine, etc. Yet when I try it myself on my own work it makes stupid mistakes, so I suspect the companies are pretty aggressive about manipulating metrics/benchmarks.
I don't doubt the genuine progress in the field (from a research perspective, at least), but my experience with commercial LLM products comes absolutely nowhere close to the hype.
It's reasonable to be suspicious of self-aggrandizing claims from giant companies hyping a product, and it's hard not to be cynical when every forced AI interaction (be it Google search or my corporate managers or whatever) makes my day worse.