Comment by Levitating
1 day ago
You have some workload demos which I'll definitely try out, but could you paint us an example use-case of the technology?
What are workloads in the runtime capable of?
OK, bear with me on this, it'll probably be an idle thought-stream because I don't have a concrete answer right now.
My intention is for Pollen to become a "generic blob of computational capability" into which you idly `pln seed` a workload and do not have to worry about ANY aspects of managing locality, scale, redundancy etc. You seed a workload onto any node, and you call it from any (other?) node. If you want to add more computational power to the cluster, you fire up Pollen on another machine and `pln invite` -> `pln join`.
Every node also has its own ed25519 cert. The root key pair (the "don't lose this or you're in trouble" key pair) is used to delegate admin certs to other nodes. I'm also working on a mechanism which allows you to bake arbitrary properties into a cert (as it stands, these are lifted into the WASM guest code for, say, in-application authz purposes). I have more ideas about how this can be extended in the future.
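To make the "bake arbitrary properties into a cert" idea concrete, here's a toy sketch in Python. This is not Pollen's actual cert format or API, and Python's stdlib has no ed25519, so HMAC stands in for the signature; it just illustrates the shape of issuing and verifying a cert whose authz properties the guest can read.

```python
# Hypothetical sketch: certs with arbitrary baked-in properties.
# HMAC is a stand-in for an ed25519 signature; not Pollen's real format.
import hashlib
import hmac
import json

ROOT_KEY = b"root-secret"  # models the "don't lose this" root key pair


def issue_cert(signing_key: bytes, subject: str, properties: dict) -> dict:
    """Sign a cert payload that carries arbitrary app-level properties."""
    payload = {"subject": subject, "properties": properties}
    blob = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(signing_key, blob, hashlib.sha256).hexdigest()
    return {**payload, "sig": sig}


def verify_cert(signing_key: bytes, cert: dict) -> bool:
    """Re-derive the signature over everything except `sig` and compare."""
    payload = {k: v for k, v in cert.items() if k != "sig"}
    blob = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(signing_key, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["sig"])


# The properties ride along in the signed payload, so the WASM guest can
# trust them for in-application authz decisions.
cert = issue_cert(ROOT_KEY, "node-7", {"role": "worker", "region": "eu"})
assert verify_cert(ROOT_KEY, cert)
assert cert["properties"]["role"] == "worker"
```

Any tampering with the properties invalidates the signature, which is what lets a node trust the claims without phoning home.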
The root authority can invalidate a participating peer's cert at any point, currently just via a `pln deny` command which is eagerly gossiped around the cluster so other nodes stop talking to the denied node, too. I think this offers opportunities for some fairly novel applications. Perhaps, in the future, you'll provision a node with a certain level of capability or authority to run on some external infrastructure. It'll have all of the (allowed) capabilities of your cluster, but will act like it's local to the external system. Plus, you can revoke its access or reset its capabilities at any point; `pln grant` eagerly applies across the cluster, too.
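The eager-gossip revocation can be modeled in a few lines. This is a hypothetical toy, not Pollen's protocol: each node applies a deny locally, drops the target from its peer set, and forwards the deny to its remaining peers, with the "already denied" check stopping the flood.

```python
# Toy model of `pln deny`: a revocation gossiped eagerly so every
# node stops talking to the denied peer. Names are illustrative.
class Node:
    def __init__(self, name: str):
        self.name = name
        self.peers = set()   # nodes this one will talk to
        self.denied = set()  # revoked identities

    def deny(self, target: str):
        """Apply a revocation locally, then gossip it to peers."""
        if target in self.denied:
            return  # already seen; this stops the gossip flood
        self.denied.add(target)
        self.peers.discard(target)  # stop talking to the denied node
        for peer in list(self.peers):
            NODES[peer].deny(target)


NODES = {n: Node(n) for n in ("a", "b", "c", "d")}
for node in NODES.values():
    node.peers = set(NODES) - {node.name}

NODES["a"].deny("d")  # one `pln deny` issued on any node...
# ...and every surviving node now refuses the denied peer.
assert all("d" in n.denied for n in NODES.values() if n.name != "d")
```

In a real cluster the deny would of course be a signed message verified against the root cert rather than a trusted method call, but the propagation shape is the same.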
The workloads, at the moment, are just anything you can compile to WASM via the Extism PDK. Stateless, for now, but with a view to add shared state and persistence in the near future!
Sorry this was rambly, hopefully it offered something useful.
Splitting a big task (like anything ML-related) into a set of smaller ones and distributing them across the "fleet" of workers. Then reap the results, stitching them back into a single artifact at the end. This could be commercially viable. This could even become a p2p platform/market where some people basically buy computation while others offer their hardware for temporary rent to earn a few bucks. You become the coordinator that just connects the demand with the supply and get rich from commissions alone.
Absolutely! What's _really_ cool is that if you have disjoint computational steps that don't necessarily scale together linearly, you could split them into separately deployed `pln seeds` and let the cluster organically balance the compute as the different usage patterns occur. And yes, "p2p compute on demand" is certainly an intriguing idea.
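The scatter/gather idea above can be sketched in a few lines. This is only an illustration of the pattern, not Pollen code: a thread pool stands in for the fleet of worker nodes, and the chunking/stitching is the coordinator's job.

```python
# Sketch of scatter/gather: split a big job into chunks, fan them out
# to workers, then stitch the partial results into one artifact.
# ThreadPoolExecutor stands in for a fleet of Pollen nodes.
from concurrent.futures import ThreadPoolExecutor


def work(chunk):
    """The per-worker task; here just a sum of squares."""
    return sum(x * x for x in chunk)


def scatter_gather(data, n_workers: int = 4):
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = list(pool.map(work, chunks))  # scatter across workers
    return sum(partials)                         # gather / stitch

assert scatter_gather(list(range(100))) == sum(x * x for x in range(100))
```

The interesting part in a cluster setting is that the "stitch" step only works when the sub-results combine associatively, which is why embarrassingly parallel ML-ish jobs are the natural first fit.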