I think that's like, fractally wrong. We don't allow early-stage developers to bypass security policies so that they can learn, and AI workflow and tool development is itself a learning process.
> We don't allow early-stage developers to bypass security policies so that they can learn
Back when I worked at an F500, it was normal practice to give early-stage developers access to a "research" environment where our normal security policies were not applied. (The flipside, of course, was that the "research" environment didn't have any access to confidential data etc., but it was a "prod" environment for most purposes.)