Comment by adidoit

2 days ago

Ah ok, yes, that makes more sense to me. Thank you for clarifying - I agree that this new philosophy of building on probabilistic software is an outcome of the bitter lesson.

And we will over time have even more capability in the model (that is more general purpose) than our deterministic scaffolds...

> And we will over time have even more capability in the model (that is more general purpose) than our deterministic scaffolds...

Who is "we" in that case? Are you the one building the model? Do you have the compute and data capacity to test every corner case that matters?

In a deterministic system you can review the code and determine what it does under given conditions. How do you know that the people who do build non-deterministic systems (because, let's face it, you will use but not build those systems) haven't rigged them for their benefit and not yours?
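
To make that contrast concrete, here is a minimal Python sketch. The function names are hypothetical and the random branch is only a toy stand-in for model stochasticity, not any real model API:

```python
import random

def deterministic_tool(x: int) -> int:
    # Auditable by inspection: for any input you can trace
    # exactly which branch runs and what the result will be.
    if x < 0:
        return 0
    return x * 2

def probabilistic_tool(x: int) -> int:
    # Hypothetical stand-in for a model call: the rare random
    # branch is behavior you cannot read off the source.
    return x * 2 if random.random() < 0.9999 else -1

if __name__ == "__main__":
    # Reviewing deterministic_tool tells you everything it will
    # ever do. For probabilistic_tool, even many test runs can
    # miss the rare branch, which is exactly where a rigged
    # behavior could hide.
    samples = {probabilistic_tool(21) for _ in range(1000)}
    print(samples)  # almost always {42}; the -1 case is easy to miss
```

The point of the sketch: you can verify the first function by reading it, but you can only characterize the second statistically, and rare behavior in the tails may never show up in your tests.
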

> And we will over time have even more capability in the model (that is more general purpose) than our deterministic scaffolds...

"Our deterministic scaffolds" sounds so dramatic, as if you think of them as chains holding you down: if only those chains were removed, you'd be able to fly. But it's not you who'd be able to fly; it's the ones building the model and holding the compute to train it. And because of its non-deterministic nature, a backdoor for their benefit now comes with plausible deniability. Who is "we"? You are a user of those models; you will not be adding anything to them, except perhaps circumstantially when your prompts are mined. You are not "we".

  • This is a genuine concern, which is why it is a very active topic of research. If you're giving a probabilistic program the potential to do something sinister, using a commercial model, or anything you have not carefully fine-tuned yourself, would be a terrible idea. The same principle applies to commercial binaries; without decompilation and thorough investigation, can you really trust what one is doing?