In old-fashioned (1980s) artificial intelligence terms, having empathy is nearly equivalent to having an other model. An other model is a program, or algorithm, that permits an AI's main program to predict the future actions of an external entity. If the external entity is similar to the AI itself, a good approach is to make the other model something of a duplicate of the AI – an AI within an AI.
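
To make the idea concrete, here's a minimal sketch (my own toy illustration – the Agent class and every name in it are hypothetical, not anything from the AI literature) of an other model built as a duplicate of the modeled agent:

```python
import copy

class Agent:
    """A toy agent whose behavior is a fixed stimulus-response policy."""

    def __init__(self, mood="calm"):
        self.mood = mood
        self.other_model = None  # will hold a predictive model of another agent

    def act(self, stimulus):
        # The agent's "main program": a trivial policy.
        if stimulus == "threat":
            return "flee" if self.mood == "calm" else "fight"
        return "ignore"

    def build_other_model(self, other):
        # Make the other model a duplicate of the observed agent:
        # literally an AI within an AI.
        self.other_model = copy.deepcopy(other)

    def predict(self, stimulus):
        # Predict the external entity's future action by running the copy.
        return self.other_model.act(stimulus)

alice = Agent(mood="calm")
bob = Agent(mood="angry")
alice.build_other_model(bob)
print(alice.predict("threat"))  # -> "fight": Alice anticipates Bob's reaction
```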

 

Human beings are not, of course, programs. Nonetheless, it's reasonable to assume that we have something analogous to the other model of an AI. Beyond the differences in medium and complexity between the human mind and computer programs, an important difference between an AI's other model and a human's is this: while it's technically possible to build an AI's other model from a version of its main program, a human other model is based not on how a human actually thinks, but on how a human thinks he thinks. Though I can't explain my intuition with precision or detail, I suspect that many social and psychological problems are due to the difference between how we actually think and how we think we think – in other words, to inaccuracies in our other models.
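
A toy sketch of that intuition (again entirely hypothetical, with made-up thresholds) shows how an other model built on self-report rather than on actual behavior systematically mispredicts:

```python
def actual_behavior(provocation):
    # How the person actually reacts (habit, not introspection).
    return "snap" if provocation > 3 else "stay calm"

def self_reported_behavior(provocation):
    # How the person *believes* they react; a human-style other model
    # is built on this, not on the real "program" above.
    return "snap" if provocation > 7 else "stay calm"

# An observer relying on the self-report-based model mispredicts exactly
# in the gap between how we think and how we think we think.
for p in range(1, 10):
    predicted, real = self_reported_behavior(p), actual_behavior(p)
    if predicted != real:
        print(f"provocation {p}: predicted {predicted!r}, actually {real!r}")
```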

 

Although the other model in a human mind is an abstraction, there's ample objective evidence that such a mental object actually exists. The most dramatic, IMHO, comes from fairly simple testing of people with a glaring, pathological lack of empathic ability, such as people with certain forms of autism, victims of rare brain injuries, and psychopaths. Although such people may perform well in many areas of intelligence testing that normal people consider the most difficult, they show hardly any ability in tasks ordinary people take for granted, such as associating a smiling face with happiness, or a stern-looking one with anger. When shown an illustration of one or more people amidst various objects and asked to make up a story about what is happening, they may perform very poorly, or fail to understand the question.

 

I believe this AI-based model, and the analogous supporting psychological testing data, support the common-sense notion that empathy is crucial to individual and social wellbeing.

 

In addition to its practical importance – the biological success of humans is due, in large part, to our social ability (in short, our resistance to hunting ourselves to extinction with the same efficacy we exhibit toward other macrofauna) – other modeling appears related to a more subtle object, the self model. In AI terms, a self model is essentially an other model that the AI applies to itself, allowing it to rehearse. The ability to rehearse is an important cognitive skill, and depends essentially on one's ability to predict one's own future actions. Note the similarity between the AI definition of an other model – a program that allows the AI to predict the future actions of an external entity – and the definition of a self model.
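
Continuing the same toy Python vocabulary (all names hypothetical), a self model is just the other-model machinery pointed back at its owner, and rehearsal is running that copy before acting for real:

```python
import copy

class ReflectiveAgent:
    """A toy agent that can rehearse a task against a model of itself."""

    def __init__(self, skill=0.2):
        self.skill = skill

    def act(self, task):
        # Succeed only if skill meets the task's difficulty.
        return "success" if self.skill >= task else "failure"

    def rehearse(self, task):
        # Rehearsal: run a self model (a copy of *this* agent, exactly
        # like the other model above) and learn from the predicted
        # outcome without taking any real-world risk.
        self_model = copy.deepcopy(self)
        outcome = self_model.act(task)
        if outcome == "failure":
            self.skill += 0.1  # practice before attempting it for real
        return outcome

agent = ReflectiveAgent()
while agent.rehearse(task=0.5) == "failure":
    pass  # keep rehearsing until the self model predicts success
print(agent.act(0.5))  # -> "success"
```

The point of the sketch is only the structure: building a model of another agent and building a model of oneself are the same copy-and-run operation applied to different targets.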

 

The central importance of empathy – and hence, of other models – is also evident in moral and religious philosophy. My favorite statement of this is the one commonly attributed to Hillel the Elder (arguably best known today from the reference to him in the dense script on the labels of Dr. Bronner's Magic Soaps), essentially a statement of the primacy of the Golden Rule:

What is hateful to you, do not do to your fellow: this is the whole Law; the rest is the explanation; go and learn.
