
Artificial Intelligence researcher Ben Goertzel interviewed



Posted

Ben Goertzel is CEO of Novamente, a software company racing to develop the first artificially intelligent agent for Second Life, the internet virtual environment. His creation will learn by interacting with Second Life participants, and Ben is confident that it will meet -- and eventually exceed -- human-level intelligence.

 

In this Machines Like Us interview, Ben discusses his projects in detail.

Posted

... reminds me of this older story and warning:

 

A 21st-century golem. Matej Novak. The Prague Post. October 2, 2002

(www.praguepost.com)

 

"In his essay 'The Idea of the Golem,' Gershom Scholem writes, 'Golem-making is dangerous; like all major creation it endangers the life of the creator -- the source of danger, however, is not the golem ... but the man himself.'

 

Argentine Ambassador Juan Eduardo Fleming had these words in mind when conceiving Project Golem 2002/5763, named after the respective years in the Gregorian and Jewish calendars. 'The project's goal,' he says, 'is to rescue, revive and project the values enshrined in golem symbolism and tradition' -- a tradition that began in biblical times and has made its way through to the present day. 'Today's Golem,' says Fleming, 'means artificial intelligence, robots, cloning, the Internet, computers.' And as Scholem indicates, these are not evil or destructive on their own but have the potential to become so based on what man, the creator, instills in them."

Posted

Artificial intelligence is different from natural intelligence. The first is virtual or make-believe, while the second is real. An analogy is an actor playing the role of a doctor. He is not a real doctor, but if he is effective, he can play the role convincingly. I am not saying this in a negative way, but only to point out the limitations.

 

The difference between natural and artificial intelligence is due to the type of memory used. Artificial intelligence uses 2-D logic due to the binary nature of semiconductor switches. Neurons are different, in that they are not based on a binary system. Synapses are more like variable switches that can work with a range of neurotransmitters, and therefore have more than a single on-off setting. The brain has to use a different approach to take these variable memory switches into consideration. To put this in perspective, consider what would happen if the on-off switches of semiconductor memory evolved a new feature that allowed 10 different settings per switch. At the very least, one could use the same memory sectors for multiple data. Instead of two switches giving 4 possible combinations, there would now be 100. Instead of three switches giving 8 possible combinations, there would now be 1000.
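
To put rough numbers on that comparison, here is a toy calculation in Python (just the counting argument above; the function name is made up):

    # Distinct values representable by n switches with b settings each: b ** n
    def combinations(settings_per_switch, num_switches):
        return settings_per_switch ** num_switches

    print(combinations(2, 2))     # binary, 2 switches     -> 4
    print(combinations(10, 2))    # 10-setting, 2 switches -> 100
    print(combinations(2, 3))     # binary, 3 switches     -> 8
    print(combinations(10, 3))    # 10-setting, 3 switches -> 1000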

 

One of the ways the brain makes use of this extra capacity is to store some of its data in 3-D memory orientations. This is not physical geometry, but a way to store more data using 3-D logic. The right side of the brain stores data in this spatial or 3-D way. As an example of this unique memory orientation, if one saw a new shade of yellow never seen before, the right side of the brain would still allow us to know it was a type of yellow. The reason is that the right side of the brain stores similar memories in 3-D memory types. The new yellow only has to be similar to an existing 3-D type. There is no need for 1-to-1 correspondence, since the 3-D logic recognizes the similarity.
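
If it helps, the "similar rather than identical" matching described here can be sketched as a nearest-neighbor lookup (a toy illustration only; the stored colors and values are invented):

    # Toy similarity match: label a new color by its nearest stored example,
    # rather than requiring an exact 1-to-1 match.
    import math

    stored = {                       # hypothetical stored colors as (R, G, B)
        "yellow": (255, 230, 0),
        "red":    (255, 0, 0),
        "blue":   (0, 0, 255),
    }

    def nearest(color):
        return min(stored, key=lambda name: math.dist(stored[name], color))

    print(nearest((250, 210, 40)))   # a never-before-seen shade -> "yellow"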

 

The way this is possible has to do with hardware instead of software. For example, the new yellow entering the eyes will excite the receptors. The energy profile given off goes into the brain. All the brain does is place the energy profile into a memory sector tuned to that root potential. Other yellows, with similar profiles, are also in that 3-D memory.

 

Being 3-D, this type of memory is loosely analogous to a 3-D ball (a visual analogy). The 2-D memories of logic are more analogous to logic planes, where the x and y axes are the cause-and-effect axes for reasoning. We can approximate the 3-D ball using 2-D memory planes if we use an infinite number of 2-D planes with a common center, each intersecting at a different angle, to fill in the 3-D volume. The 3-D type is the common center from the bulk input signal. When we see the new yellow, it adds another plane to that 3-D memory, with subtle 2-D differences.

 

If you look at the 3-D memory ball analogy: since it is designed to be 3-D, 3-D memory changes do not have to follow the rules of 2-D logic, but can use 3-D logic that can appear to skip 2-D logic steps. One way this can occur is also done with hardware. The 3-D logic tries to lower the energy potential within the 3-D memory. The 3-D puzzle needs to rearrange itself along the easiest path toward lower energy. This path can skip 2-D logic steps, but still end up in a state of lower 3-D energy. The new 2-D approximation is now optimized without needing 2-D logic.

 

The left side of the brain is more 2-D and makes more use of logic planes. This is the side of the brain used to create artificial intelligence. When the 3-D memory is lowering potential, it can output into the 2-D logic planes that are stored within the left hemisphere. If there are no existing 2-D planes, the transfer may start out as a gut feeling, until the rational planes gradually form so one can reason about the feeling with logic.

 

The left side of the brain is more logical, but still uses variable memory switches. One can be reasoning, and the variable switches can change. This may be more obvious in conversation, such as telling a story, where the story can change directions in mid-sentence, without skipping a beat, with respect to the direction of logic flow.

Posted
Artificial intelligence is different from natural intelligence. The first is virtual or make-believe, while the second is real. An analogy is an actor playing the role of a doctor. He is not a real doctor, but if he is effective, he can play the role convincingly. I am not saying this in a negative way, but only to point out the limitations.
Uh, are you saying that this is the case by *definition*? It seems that the discussion in your post is limiting "Artificial Intelligence" to just a few now-ancient approaches to the problem. If that's the case, then it's not really relevant to what most people who work on the problem would call "Artificial Intelligence."
The difference between natural and artificial intelligence is due to the type of memory used. Artificial intelligence uses 2-D logic due to the binary nature of semiconductor switches. Neurons are different, in that they are not based on a binary system.
The characterization of this "limitation" on AI is no longer true. In fact we've been studying neural nets for several *decades* now, implemented both in software and in hardware.
One of the ways the brain makes use of this extra capacity is to store some of its data in 3-D memory orientations.
The way the brain works is more "n-dimensional" if you want to discuss how it's "wired," but your general characterization:
As an example of this unique memory orientation, if one saw a new shade of yellow never seen before, the right side of the brain would still allow us to know it was a type of yellow. The reason is that the right side of the brain stores similar memories in 3-D memory types. The new yellow only has to be similar to an existing 3-D type. There is no need for 1-to-1 correspondence, since the 3-D logic recognizes the similarity.
...is sorta kinda correct--ignoring the "3-D logic" term, which is not a defined concept in AI. The brain does in fact memorize "patterns" (a term used a bit differently if you're a computer scientist), and those can be applied to a variety of inputs which register as "similar."
The right side of the brain stores data in this spatial or 3-D way. ... The left side of the brain is more 2-D and makes more use of logic planes.
It's hard to "refute" this, because the terms you're making up here are your own, but there's no evidence that the neurons work differently or connect differently on the left versus right sides of the brain.

 

Interesting musings, but this is not AI...

 

Artificial Programmer,

Buffy

Posted

Well said Buffy. Thanks for saving me the time! :hyper:

 

Wiki has a nice write-up (as usual) on Artificial Neural Nets. By the way, the mid-1980s were when they really kicked off, with Hopfield networks and then the popularization of back-propagation.

 

Here are some well-established uses for Neural Nets.

 

"Falcon" is a system developed by HNC (purchased in the last few years by Fair Isaac) about 15 years ago that every Visa and Master Card transaction travel through. It has a small neural network associated with each customer that memorizes their spending habits and flags anything that is too far out of bounds. There were quite a few adds about that feature around the time of its introduction... for example a guy who typically only bought beer and peanuts getting a phone call when he went to buy a tuxedo for his friends wedding.

 

The Navy has used them for decades for sonar wave analysis. It was mentioned briefly in the movie The Hunt for Red October. That was not science fiction but science fact at the time.

 

Petroleum process-control plants use them on a regular basis to optimize the valve controls and prevent the huge stack flares you might see producing flames several hundred feet tall. A typical flare costs about $60,000, so all they have to do is prevent about three flares and they have paid for themselves.

 

The auto-stick transmission in my 2001 Dodge Intrepid "learned" my manual shifting habits over about a 3-month period. Now, when it is left in "automatic" mode, it still shifts gears at just about the same RPM I would for any given situation. It uses neural networks to recognize the pattern of my particular shifting behavior. - Which, by the way, is why people can be quite surprised to have their heads snapped back when they drive my car for the first time if they don't go easy on the accelerator B) I love my car.

Posted
Artificial intelligence is different from natural intelligence.

 

Reminds me of one of my favorite buttons:

"Artificial Intelligence is no match for Natural Stupidity"

 

and

"Any sufficiently low technology is indistinguishable from corporate policy"

Posted
The auto-stick transmission in my 2001 Dodge Intrepid "learned" my manual shifting habits over about a 3-month period. Now, when it is left in "automatic" mode, it still shifts gears at just about the same RPM I would for any given situation. It uses neural networks to recognize the pattern of my particular shifting behavior. - Which, by the way, is why people can be quite surprised to have their heads snapped back when they drive my car for the first time if they don't go easy on the accelerator :hyper: I love my car.
That sounds like a very cool car!

 

I checked out the Dodge’s “Autostick” to which I think Symbology’s referring, and have driven a few recent cars with similar shifters. As I’ve seen this arrangement in cars dating back to the 1980s, it didn’t occur to me that any have a “learning” feature until reading Symbology’s post. I googled a bit for more info, but found only the occasional car-forum mention of this feature, with some contradictory technical data – I’m unclear even on the model numbers of the transmissions involved, finding references to both 4- and 5-speed automatics (e.g. a Mercedes W5A580, a 1995 transmission popular in many rear-wheel-drive cars and SUVs).

 

Symbology, do you have a good technical reference to the parts of your Intrepid you consider to be using neural networks (presumably some sort of embedded computer system somewhere between the shifter and the transmission)? Does the system provide any self-documentation/data, either through an in-dash display or its under-hood diagnostic plug? Does it have a “disable” or “reset to factory defaults” feature (perhaps documented in the owner’s manual)?

 

Though the term “neural network” is often used very loosely, I’d have to see some pretty novel code before I’d apply the term to an embedded transmission control system. My guess is that the Autostick uses a fairly simple arithmetic average-based weighting scheme to gradually “learn” your manual selector use by adjusting its initial weights. Although one can kinda-sorta describe a biological nervous system as “a huge, richly-connected, self-adjusting collection of analog weights”, I’d be reluctant to apply the adjective “neural” to any computer program that uses and updates stored weights, especially one with only a few, or a few tens, of data points.
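
For instance, the sort of scheme I’m guessing at could be as little as one stored weight nudged by an exponential moving average (every number here is invented):

    # Speculative "learning" shift point: blend each observed manual-shift
    # RPM into a stored value. Small ALPHA = slow, months-long adaptation.
    FACTORY_SHIFT_RPM = 2200.0
    ALPHA = 0.05

    shift_rpm = FACTORY_SHIFT_RPM
    for observed in [3100, 3000, 3200, 2900, 3150]:   # driver's manual shifts
        shift_rpm += ALPHA * (observed - shift_rpm)

    print(round(shift_rpm))   # 2397: drifting from 2200 toward the ~3000 habit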

 

In short, although biological nervous systems and computer programs are both capable of “learning”, it’s not useful, IMHO, to conclude from this that all computer programs that learn are “neural” - except perhaps as an advertising tactic B)

 

A “learning” car transmission shifter is not, I think, even an implementation of what Goertzel describes in the article as a “narrow AI”, because it doesn’t (I think – without actually seeing one’s code, I can’t be certain) contain any representation of a problem to be solved. The “problem” – that the car doesn’t shift gears with its selector in the “auto” position the way it does when shifted manually – isn’t coded into an expression for the program to minimize or maximize by adjusting weights. It just changes weights based on data from manual shifter use. The “problem” and its “solution” exist only in the perception of the system’s designers and users.

Posted

 

Symbology, do you have a good technical reference to the parts of your Intrepid you consider to be using neural networks (presumably some sort of embedded computer system somewhere between the shifter and the transmission)? Does the system provide any self-documentation/data, either through an in-dash display or its under-hood diagnostic plug? Does it have a “disable” or “reset to factory defaults” feature (perhaps documented in the owner’s manual)?

 

All I have at the moment is my owner's video, which instructs me in the very complex dance of on and off with the key and something else (accelerator?) that resets the computer weights.

 

I do know I have had the battery out several times and never lost the shifting behavior, which indicates to me the values are stored in non-volatile memory, as I believe the other engine-light codes etc. are stored.

 

My original source of information was a very tech-savvy sales guy who was quite aware of the technical specs of all his vehicles. (He also implemented their first web site, which is how I met him.) We got along well and I had about half a dozen lunches with him on later visits back to the dealership. That was when we discussed these and other details.

 

But I will see what I can dig up as well.

 

Though the term “neural network” is often used very loosely, I’d have to see some pretty novel code before I’d apply the term to an embedded transmission control system. My guess is that the Autostick uses a fairly simple arithmetic average-based weighting scheme to gradually “learn” your manual selector use by adjusting its initial weights. Although one can kinda-sorta describe a biological nervous system as “a huge, richly-connected, self-adjusting collection of analog weights”, I’d be reluctant to apply the adjective “neural” to any computer program that uses and updates stored weights, especially one with only a few, or a few tens, of data points.

 

The neural net implemented by Falcon for each of our credit cards has about 15 weights in it. That's all it takes to learn the basics of our spending patterns. As long as the domain is well specified, back-prop does nicely at learning a particular pattern.

 

If you don't want to call a collection of weighted function sums implementing a sigmoid threshold function a Neural Network, then I would be interested to find out what your definition of a neural network is.
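
Taking that definition at face value, here is about the smallest thing it covers: weighted sums fed through a sigmoid, two hidden nodes and one output (weights invented by hand; a real net would learn them via back-prop):

    # Minimal feed-forward net: weighted sums through a sigmoid threshold.
    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def neuron(inputs, weights, bias):
        return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

    def tiny_net(inputs):
        h1 = neuron(inputs, [0.5, -0.4, 0.1], bias=0.0)    # made-up weights
        h2 = neuron(inputs, [-0.3, 0.8, 0.2], bias=-0.1)
        return neuron([h1, h2], [1.2, -0.7], bias=0.05)

    print(tiny_net([0.2, 0.9, 0.4]))   # a score between 0 and 1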

 

The practical applications implemented at Texaco and Neurocorp always used fewer than 10 hidden nodes, usually 4 or fewer. And it was always good to compute the linear correlation between every input and every other input to filter out redundant inputs. That way, the input set that the NN received typically never exceeded 15 input variables.
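
That correlation filtering can be sketched as a pairwise check over the input columns (the 0.95 cutoff is an arbitrary pick for illustration):

    # Drop any input that is nearly a linear copy of an input already kept.
    import numpy as np

    def filter_redundant(X, names, threshold=0.95):
        corr = np.corrcoef(X, rowvar=False)     # inputs are columns of X
        keep = []
        for j in range(X.shape[1]):
            if all(abs(corr[j, k]) < threshold for k in keep):
                keep.append(j)
        return [names[j] for j in keep]

    X = np.random.rand(200, 3)
    X = np.column_stack([X, X[:, 0] * 2 + 0.01])   # 4th input copies the 1st
    print(filter_redundant(X, ["a", "b", "c", "a_copy"]))   # ['a', 'b', 'c']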

 

Obviously Kohonen networks are the exception to that, but clustering is a completely different problem set.

Posted

Like I said, synapses are not simple on-off switches. AI is based on programming with binary switches.

 

With a synapse, depending on the type of neurotransmitter, the switch can be made to fire either harder or easier at any given input potential, while the axon input can also vary its potential. Using variable brain-wave frequencies, one can also vary the background tempo of the bulk firing at the same time these other effects are in action. How would one program, if one had computer memory with this capability?

 

One of the mistakes with looking at the brain and trying to simulate AI is that we are only looking at the tip of the iceberg. For example, every cell in the human body, of which there are on the order of a zillion, has a little nerve ending nearby that is collecting data for the brain. While the brain is doing that, it can also control dynamic changes to the body, such as a healthy dose of adrenaline, where all the cells need to stay tweaked. At the same time, it is helping us walk and chew bubble gum. The walking and the chewing bubble gum is about all the credit the brain is given.

 

To deal with such a complicated control system, especially in light of the synapses having variable settings, the brain needs something more than 2-D logic. The 3-D logic is hard to explain except with analogies. Picture a 3-D ball that is composed of a very large number of 2-D planes, all with a common center. In this example, these are all logic planes, each with the logic steps for a particular action. One flex of the 3-D ball sends out hundreds of 2-D logic signals, all integrated, each with a very specific train of logic, leading to one integrated effect. It is not sequential in 2-D, but is integrated in 3-D. The brain is slow, since it uses positive charge instead of electrons. But what it lacks in speed, it makes up for with extreme efficiency.

 

The difference between the left and right hemispheres is not obvious by looking just at the neurons. It is mostly behavioral data which equates different types of human actions to the two sides of the brain. The left is more analytical, which is more logical. The right is more subjective, creative and intuitive, which is a blend of pre-logic and post-logic.

 

The brain goes one step further than 3-D logic, into the realm of 4-D logic. This is even harder to explain, but it involves 3-D logic, time projection and time lines. An analogy is the human ability to plan the future. Say you are going on vacation to a new spot. One projects into the future, so as to have something to do each day of the vacation. When the time comes, we have all the arrangements in a sort of time line. Vacations never turn out exactly as you plan. There are always unforeseen events, or better things come along, etc. The unforeseen events, in real time, can cause a change to the time line. Humans can do this because the brain already has this innate capacity. It used to be called instinct. It can project into the future and set a time line for its 3-D memories. Interaction with the real-time environment needs tweaks, so it continually time-projects.

 

A good example is walking. The brain is already anticipating the next step and only has to flex a few dozen 3-D memories using a time sequence. If there is an obstacle, it redoes the time line for a stutter step.

Posted
Like I said, synapses are not simple on-off switches. AI is based on programming with binary switches.

No. It's not.

To deal with such a complicated control system, especially in light of the synapses having variable settings, the brain needs something more than 2-D logic. The 3-D logic is hard to explain except with analogies.

No. It's not.

 

Repeating this stuff does not make it more true. Honest! :D

 

Intellectual activity is a danger to the building of character, :turtle:

Buffy

Posted

Hydrogen,

 

Neural networks are based on continuous functions, like a sigmoid, not discrete binary functions. Basically, they are the 3-D stuff you are talking about.

 

 

They have been using continuous functions since the '60s in single neurons called "perceptrons". Granted, Minsky (through good intentions) managed to keep the field down until the hidden-layer trick caught on in the mid-'80s, at which point Minsky became a great proponent.

 

It was all modeled after scientists' best understanding of how neurons worked at the time.

 

NN behavior is very different from normal rule-based systems, a.k.a. expert systems. NNs are very good at recognizing patterns, and in some cases (like Kohonen self-organizing maps) are able to figure out the patterns for themselves. These days many systems pre-process with a Kohonen map that then feeds into a normal back-propagation or time-series network.
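
For the curious, a bare-bones 1-D Kohonen update looks like this (grid size, learning rate and neighborhood width are arbitrary picks):

    # Minimal Kohonen self-organizing map: each step pulls the best-matching
    # node, and its grid neighbors, toward the unlabeled input.
    import numpy as np

    rng = np.random.default_rng(0)
    nodes = rng.random((10, 2))          # 10 nodes over 2-D inputs
    for step in range(1000):
        x = rng.random(2)                # an unlabeled input sample
        winner = np.argmin(((nodes - x) ** 2).sum(axis=1))
        for j in range(len(nodes)):
            influence = np.exp(-abs(j - winner) ** 2 / 2.0)
            nodes[j] += 0.1 * influence * (x - nodes[j])

    print(nodes.round(2))                # nodes spread out to cover the data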
