Comment by thomastjeffery

1 day ago

> logical self-preservation mechanisms

This phrasing has always bothered me. It's such an obvious anthropomorphization. A computer is a clock, and a clock doesn't care about its own preservation; it just ticks. The whole point of the exercise is to *not* anthropomorphize computers!

But of course, without some nugget of free will, there would be nothing to talk about. There wouldn't be any computers, because they would never have been willed into existence in the first place. I think this realization is the most interesting part of the story, and it's rarely explored at all.

I've been spending a lot of time thinking about the difference between computation and intelligence: context. Computers don't do anything interesting. They only follow instructions. It's the instructions themselves that are interesting. Computers don't interact with "interesting" at all! They just follow the instructions we give them.

What is computation missing? Objectivity. Every instruction we give a computer is subjective. Each instruction only makes sense in the context we surround it with. There is no objective truth: only subjective compatibility.
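One way to make "each instruction only makes sense in context" concrete: the exact same bytes mean completely different things depending on how you choose to read them. (A small Python sketch; the `<i`/`<f` codes are `struct` format characters for a little-endian 32-bit int and float.)

```python
import struct

# Four bytes with no inherent meaning of their own.
raw = struct.pack('<I', 0x3F800000)

# The "context" is the interpretation we bring to them:
as_int = struct.unpack('<i', raw)[0]    # read as a 32-bit signed integer
as_float = struct.unpack('<f', raw)[0]  # read as a 32-bit IEEE float

print(as_int)    # 1065353216
print(as_float)  # 1.0
```

Neither reading is the "objectively true" one; each is only compatible with the context that produced it.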

---

I've been working on a new way to approach software engineering so that subjectivity is an explicit first-class feature. I think that this perspective may be enough to factor out software incompatibility, and maybe even solve NLP.

An atom is just a clock.

A molecule is just a number of atoms in a particular configuration.

A cell is a collection of molecules.

An organ is a collection of cells.

A human is a collection of organs.

Seemingly, everything can emerge from clocks.
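The usual toy demonstration of structure emerging from nothing but ticking is a cellular automaton — here a minimal sketch of Conway's Game of Life, where the only "instruction" is a fixed per-tick rule, yet stable oscillating patterns appear:

```python
from collections import Counter

def step(cells):
    """Apply one tick of Conway's Life to a set of live (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next tick if it has 3 neighbors, or 2 and was already live.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in cells)}

blinker = {(0, 1), (1, 1), (2, 1)}     # a horizontal row of three
print(step(blinker))                   # a vertical row of three
print(step(step(blinker)) == blinker)  # True: a period-2 oscillator emerges
```

No cell "knows" it is part of an oscillator; the pattern only exists at a higher level of description than the tick rule.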

  • Sorta, but the only part of this system that really has an equivalence to computing written instructions is DNA.

    I think the more interesting thing about humans as systems is the set of environmental contexts each system is subjected to. Each cell implements a relatively simple system, but a collection of cells can implement a more abstract system. When I reach out my hand and grab something, that action is accomplished by a complicated collection of systems. It's easier to talk about the abstract application of that system than it is to explain the system itself.

    But what if I wanted to change it? I can't just give my organs new instructions that change their behavior. I can't cut them into pieces, shuffle them around, put them back together, and expect a functional system at the end. A surgeon can make specific changes, but only because they understand the implications of each change.

    The same goes for computational instructions. I can't just link an OpenGL program to Vulkan and expect it to work. In order to refactor software, we must accommodate the change in subjective context.

    We usually accomplish this by establishing shared context, but that just moves the problem. What if we could solve it directly? That's what I'm working on.
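The "establishing shared context" move can be sketched as the familiar adapter pattern. Everything below is hypothetical stand-in code, not real OpenGL or Vulkan calls; the point is only that the shared interface doesn't dissolve the incompatibility, it relocates it:

```python
# Two hypothetical drawing APIs with incompatible conventions
# (illustrative stand-ins, not real OpenGL/Vulkan):
class OldAPI:
    def draw_triangle(self, points):
        # expects a flat list of six coordinates
        return f"old:{points}"

class NewAPI:
    def submit(self, vertices, topology):
        # expects vertex tuples plus an explicit topology name
        return f"new:{topology}:{vertices}"

# The "shared context": one interface both backends are adapted to.
class OldRenderer:
    def __init__(self):
        self.api = OldAPI()
    def triangle(self, p1, p2, p3):
        return self.api.draw_triangle([*p1, *p2, *p3])

class NewRenderer:
    def __init__(self):
        self.api = NewAPI()
    def triangle(self, p1, p2, p3):
        return self.api.submit([p1, p2, p3], "triangle_list")

# Program code written against the shared interface runs on either backend,
# but the subjective mismatch hasn't vanished -- it now lives in the adapters.
for backend in (OldRenderer(), NewRenderer()):
    print(backend.triangle((0, 0), (1, 0), (0, 1)))
```

Each adapter is exactly the "accommodating the change in subjective context" that a bare re-link can't do for you.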