Comment by jprete

2 years ago

I think there are actual existential and “semi-existential” risks, especially from pursuing an actual AGI.

Separately, I think Ng is right: big-corp AI has a massive incentive to promote doom narratives so as to cement itself as the only safe caretaker of the technology.

I haven’t yet succeeded in reconciling these two views into a course of action that clearly favors human freedom and flourishing.