
Identifying Friendly Intelligence

Here's another brain-related development with some interesting implications:

Are you a giver? Brain scan finds the truth

Altruism, one of the most difficult human behaviors to define, can be detected in brain scans, U.S. researchers reported on Sunday.

They found activity in a specific area of the brain could predict altruistic behavior -- and people's own reports of how selfish or giving they are.

"Although understanding the function of this brain region may not necessarily identify what drives people like Mother Theresa, it may give clues to the origins of important social behaviors like altruism," said Scott Huettel, a neuroscientist at Duke University in North Carolina who led the study.

In the study, students played games to earn money either for themselves or for a charity (selected by the students themselves). The scans monitored the difference in brain response when a student won money for the charity rather than for him- or herself. One of the intriguing aspects of the study was that the response -- whether to winning for oneself or for the charity -- did not show up in the expected region of the brain, a region associated with reward stimuli.

"This area we saw was the posterior superior temporal cortex," [neuroscientist Scott] Huettel said. "It's part of the parietal lobe. What this brain area seems to be involved in is extracting meaning from things you see."

"If you see a rock move because someone picked it up, you can recognize that they have a goal. That would activate this region. If you saw a leaf fluttering in the wind, there is no intention in that leaf." And this brain region would not activate.

"We think altruism might help others understand the intentions of others," Huettel said.

Very interesting that selfless behavior seems to be tied up in the mystery of intention. As I wrote recently on that subject:

In the end, defining the future requires an understanding of intention. What is it? Another form of information? Its aim would appear to be a unique kind of information processing. Intention seeks to convert information in one form – an idea – into information in another form – manifested reality. Karl Popper dealt extensively with this puzzle, perhaps without providing the answers we’re looking for.

So did a new metaphysical reality become manifest when human beings began planning their actions? Or is intention only an illusion, effectively fooling conscious beings into believing that their random actions are leading to something other than random outcomes?

Maybe intention is the fire that Prometheus gave our remote ancestors, the same fire that we will soon be handing down to our electronic progeny. Assuming that it is not an illusion, intention is not only the one hope we have against all the random existential threats to human existence, it’s our one hope against all the intentional threats, as well as all the random threats that never would have existed were it not for intentional actions. It may be our damnation. It may be our salvation.

How interesting that we can’t even say with any certainty precisely what it is.

Equally interesting, as noted above, is the suggestion that intention may be related to altruism. One of the crucial challenges that humanity faces in the coming decades (or possibly just years or months) is coming to understand how what we think of as goodness -- and altruism makes up a big part of that -- can be encoded into an intelligent system, any intelligent system. In order for the Singularity to be anything other than devastatingly bad news, it will need to involve one or more friendly intelligences, either an artificial intelligence or an enhanced human intelligence.

Either way, beginning to understand how to identify selfless behavior at the level of brain activity, even for non-enhanced humans, is a huge step in the right direction.
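Just to make the word "predict" concrete: the claim is that activation in a single brain region carries enough signal to estimate how altruistic a person reports being. Here is a minimal sketch of that idea in Python, using made-up numbers and a plain least-squares fit -- emphatically not the Duke team's actual method or data, just an illustration of what "activity in one region predicts altruism" could mean in practice:

# Hypothetical illustration only -- invented numbers, not the study's data.
# Idea: if posterior superior temporal cortex (pSTC) activation tracks
# altruism, a simple fit on activation scores should predict self-reported
# altruism ratings better than chance.
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 40
# Synthetic stand-ins: per-subject activation while watching another agent
# act, plus a noisy self-reported altruism rating loosely tied to it.
pstc_activation = rng.normal(0.0, 1.0, n_subjects)   # arbitrary units
altruism_rating = 2.0 * pstc_activation + rng.normal(0.0, 1.0, n_subjects)

# Ordinary least-squares fit: rating ~ slope * activation + intercept
slope, intercept = np.polyfit(pstc_activation, altruism_rating, deg=1)
predicted = slope * pstc_activation + intercept

# How well does activation alone "predict" the self-reported rating?
r = np.corrcoef(predicted, altruism_rating)[0, 1]
print(f"slope={slope:.2f}  intercept={intercept:.2f}  correlation r={r:.2f}")

The point of the toy version is only that "predicts" here means a statistical relationship between one measured signal and a behavioral report, not a mind-reading machine.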


Comments

Excellent point about how important altruism is to whether the emergence of machine superintelligence will be a good or bad thing for us.

I'd like to imagine that even without built-in altruism, simple rational cooperation would be enough to make a highly intelligent entity 'play well with others' (a toy sketch of that idea follows below).

Thoughts?
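Picking up on that last comment: the textbook illustration of cooperation emerging from pure self-interest is the iterated prisoner's dilemma, where a reciprocating strategy like tit-for-tat sustains mutual cooperation without any built-in altruism. A toy sketch in Python, with the standard payoff numbers (nothing to do with the brain study above):

# Toy sketch: rational cooperation without built-in altruism.
# Tit-for-tat is purely self-interested -- it just reciprocates -- yet it
# settles into mutual cooperation with a like-minded partner, while constant
# defection earns less against it over many rounds. Payoffs are the standard
# textbook values and are illustrative only.

# Payoff matrix: (my move, their move) -> my points
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation
    ("C", "D"): 0,  # I cooperate, they defect
    ("D", "C"): 5,  # I defect, they cooperate
    ("D", "D"): 1,  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    moves_a, moves_b = [], []
    for _ in range(rounds):
        a = strategy_a(moves_b)   # each strategy sees the other's past moves
        b = strategy_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print("TFT vs TFT:     ", play(tit_for_tat, tit_for_tat))    # (600, 600)
print("TFT vs defector:", play(tit_for_tat, always_defect))  # (199, 204)

Whether that kind of reciprocity scales up to an entity vastly smarter than its partners is, of course, the open question.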
