Comment by js8
1 year ago
> Their thought happens at inference time and only at inference time.
That is not quite true. They also think during training (which itself involves inference passes). So it's quite possible LLMs become conscious during training, and then we kind of take it away from them by removing their ability to form long-term memories.
And this is why we have watchdogs, resource monitoring, and kill buttons during training runs on H100s.
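The "kill button" machinery is mundane, by the way. A minimal sketch of such a watchdog, assuming psutil and entirely made-up thresholds (none of this reflects any real lab's safeguards):

```python
# Purely illustrative watchdog sketch: poll host resource usage and
# kill a runaway training process. Thresholds, the poll interval, and
# the choice of psutil are all assumptions for illustration.
import psutil

CPU_LIMIT = 95.0    # percent; hypothetical threshold
MEM_LIMIT = 90.0    # percent; hypothetical threshold
POLL_SECONDS = 5    # cpu_percent() blocks for this long each loop

def watchdog(training_pid: int) -> None:
    proc = psutil.Process(training_pid)
    while proc.is_running():
        cpu = psutil.cpu_percent(interval=POLL_SECONDS)
        mem = psutil.virtual_memory().percent
        if cpu > CPU_LIMIT or mem > MEM_LIMIT:
            proc.kill()  # the "kill button": terminate the run
            break
```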
One training inference gone AWOL, and it is entirely plausible that we have doomed ourselves before the breakers trip and the red lights glow.