Comment by fellowniusmonk

10 days ago

Two things:

1. Software: An OS that masquerades as simple note taking software.

Goal is to put an end to all the disparate AI bullshit and apps owning our data.

I solved context switching for myself ages ago, and now I'm trying to productize it beyond my 3 companies' internal use.

It also solves context switching for AI agents as a byproduct.

2. Ethics: Give AI and proto-AGI a reason not to kill us all.

An extremely minimal, empirical, naturalistic moral framework that is universally binding on all agents, so AI won't kill us all. I view the alignment problem as an epistemic moral grounding issue, and the current pseudo-utilitarianism isn't cutting it. Divine command, discourse ethics, utilitarianism, deontology: they are all insufficient.

Would love to try out and contribute to both of these! Let me know if you'd like to get in touch.