
Comment by alankay

10 years ago

This is why "the objects of the future" have to be ambassadors that can negotiate with other objects they've never seen.

Think about this as one of the consequences of massive scaling ...

Along this line of logic, perhaps the future of AI is not "machine learning from big data" (a lot of buzzwords) but computers that generate runtime interpreters for new contexts.

  • When high-bandwidth communication is omnipresent, is "portability" of the interpreter really something to optimize for?

    • How can you find it?

      The association between "patterns" and interpretation becomes an "object" when this is part of the larger scheme. When you've just got bits and you send them somewhere, you don't even have "data" anymore.

      Even with something like EDI or XML, think about what kinds of knowledge and process are actually needed to even do the simplest things.

It's hard for me to grasp what this negotiation would look like, particularly between objects that haven't encountered each other. It just seems like such a huge problem.

I don't really know anything at all about microbiology, but consider climbing the ladder of abstraction to small insects like ants. There is clearly negotiation and communication happening there, but I have to think it's pretty well bounded. Even if one ant encountered another ant and needed to communicate where food was, it would be with a fixed set of semantics already understood by both parties.

Or take honeybees doing their communication dance. I have no idea if the communication goes beyond "food here" or if it's "we need to decide who to send out."

It seems like you have to have learning in the object to really negotiate with something it hasn't encountered before. Maybe I'm making things too hard.

Maybe "can we communicate" is the first negotiation, and if not, give up.
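
To make that first step concrete, here is a minimal sketch in Haskell; the Protocol names and the negotiate function are made up for illustration and aren't from the thread. Each side advertises what it can speak, and the very first negotiation is simply whether there is any overlap at all.

    import Data.List (intersect)

    -- Hypothetical protocols two objects might speak; the names are made up.
    data Protocol = PlainText | Json | Edi deriving (Eq, Show)

    -- "Can we communicate?" as the very first negotiation: each side
    -- advertises what it speaks, and we check whether there is any overlap.
    negotiate :: [Protocol] -> [Protocol] -> Maybe Protocol
    negotiate mine theirs = case mine `intersect` theirs of
      (p:_) -> Just p    -- talk using the first shared protocol
      []    -> Nothing   -- no overlap at all: give up, as suggested above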

  • It is worth thinking of an analogy to TCP/IP -- what is the smallest thing that could be universal that will allow everything else to happen?

    • I remember at one point, after listening to one of your talks about TCP/IP as a very good OO system and pondering the question of how to make software like that, an idea that came to mind was "Translation as computation." I was combining it with the idea that, as implemented, TCP/IP is about translation between packet-switching systems, so a semantic TCP/IP would be a system that translates between different machine models. In terms of my skill, though, the best I could imagine was "compilers as translators," which I don't think cuts it, because compilers don't embody a machine model; they assume it. However, perhaps it's not necessary to communicate machine models explicitly, since such a system could translate between them with respect to what state means. This would involve simulating state to satisfy local operation requirements while the actual state is occurring and will eventually be communicated. I've heard you reference McCarthy's situation calculus in this regard. (A rough sketch of this translation idea appears below, after the thread.)

  • Well, there's the old Component Object Model and cousins ... under this model, an object a encountering a new object b will essentially ask, 'I need this service performed, can you perform it for me?' If b can perform the service, a makes use of it; if not, not. (A small sketch of this ask-and-check pattern appears below, after this thread.)

    Another technique that occurs to me is from type theory ... here, instead of objects, we'll talk in terms of values and functions, which have types. So e.g. a function a encountering a new function b will examine b's type and thereby figure out whether it can/should call it. E.g., b might be called toJson and have the type (in Haskell notation) ToJson a => a -> Text, so the function a knows that if it can give toJson any value which has a ToJson typeclass instance, it'll get back a Text value; in other words, toJson is a JSON encoder function, and thus a may want to call it.
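
A minimal sketch of that last idea: the ToJson class and the type ToJson a => a -> Text are taken from the comment above, while Point and logAsJson are made-up names for illustration.

    import Data.Text (Text, pack)
    import qualified Data.Text.IO as TIO

    -- The typeclass referred to above: any instance knows how to render
    -- itself as JSON text.
    class ToJson a where
      toJson :: a -> Text

    -- A made-up value type and its instance.
    data Point = Point Int Int

    instance ToJson Point where
      toJson (Point px py) =
        pack ("{\"x\": " ++ show px ++ ", \"y\": " ++ show py ++ "}")

    -- A caller that has never seen Point, but can tell from the type
    -- ToJson a => a -> Text that it will get JSON text back.
    logAsJson :: ToJson a => a -> IO ()
    logAsJson v = TIO.putStrLn (toJson v)

    main :: IO ()
    main = logAsJson (Point 3 4)   -- prints {"x": 3, "y": 4}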
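
And a sketch of the 'I need this service performed, can you perform it for me?' negotiation from the same comment. This is not the real Component Object Model; it only imitates the ask-and-check pattern using Data.Dynamic, and Printer, Object, and queryService are invented names.

    import Data.Dynamic (Dynamic, toDyn, fromDynamic)
    import Data.Typeable (Typeable)
    import Data.Maybe (listToMaybe, mapMaybe)

    -- A hypothetical capability an object may or may not offer.
    newtype Printer = Printer { printText :: String -> IO () }

    -- An "object" here is just a bag of capabilities it can perform.
    newtype Object = Object [Dynamic]

    -- "I need this service performed, can you perform it for me?"
    -- Nothing means the object cannot, and the caller moves on.
    queryService :: Typeable s => Object -> Maybe s
    queryService (Object caps) = listToMaybe (mapMaybe fromDynamic caps)

    -- An object that only knows how to print to the console.
    consoleObject :: Object
    consoleObject = Object [toDyn (Printer putStrLn)]

    main :: IO ()
    main = case queryService consoleObject :: Maybe Printer of
      Just p  -> printText p "service found, using it"
      Nothing -> putStrLn "object cannot perform the service"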
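
Finally, a very rough sketch of the "Translation as computation" idea from earlier in the thread. The two machine models of a counter are entirely made up; only the shape of the idea (translating between models of what state means, and simulating state locally until the actual state is communicated) comes from the comment.

    -- Two made-up machine models of the same counter.
    data Op = Increment | Decrement deriving Show

    -- Model A: state is just the current value.
    newtype ValueModel = ValueModel Int deriving Show

    -- Model B: state is the history of operations; the value is derived.
    newtype LogModel = LogModel [Op] deriving Show

    -- Translation between models is itself a computation over what
    -- "state" means, not a byte-for-byte copy.
    toValueModel :: LogModel -> ValueModel
    toValueModel (LogModel ops) = ValueModel (sum (map delta ops))
      where delta Increment = 1
            delta Decrement = -1

    -- Local simulation: apply operations that have not yet been
    -- communicated to a snapshot, so local work can proceed while the
    -- actual state is still in flight.
    simulate :: ValueModel -> [Op] -> ValueModel
    simulate (ValueModel n) pending =
      let ValueModel d = toValueModel (LogModel pending)
      in  ValueModel (n + d)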