Comment by cmiles74

6 months ago

Maybe the important thing is that we don't imbue the machine with feelings or morals or motivation: it has none.

If we developed feelings, morals, and motivation because they were good subgoals for our primary goals of survival and procreation, why couldn't other systems do the same? You don't have to call them by the same word or consider them the same thing, but a feeling is a signal that motivates a behaviour in us, developed partly through generational evolution and partly through experiences in life. A random mutation once made someone develop a fear signal on seeing a predator, which increased their chances of survival, and so the mutation became widespread. Similarly, a feeling in a machine could be a signal it developed that passes through a certain pathway to yield a certain outcome.

  • The real challenge is not to see it as a binary (the machine either has feelings or it has none). It's possible for the machine to have emergent processes or properties that resemble human feelings in their function and complexity but are otherwise nothing like them: structured very differently and working on completely different principles. It's possible to have a machine or algorithm so complex that the question of whether it has feelings becomes just a semantic debate over what you mean by “feelings” and where you draw the line.

    A lot of the people who say “machines will never have feelings” are confident in that statement because they draw the line incredibly narrowly: if it ain't human, it ain't feeling. This seems to me like putting the cart before the horse. It ain't feeling because you defined it that way.