Science Forums


Posted (edited)

A Gedanken Experiment that I've pondered for a Good Long Time.

 

Let's assume that we can make a device that I call a "Semantifier".

 

The Semantifier is essentially a dedicated Computer, built to do one Thing--carry on conversations.

 

The Semantifier has no Sight, Hearing, Taste, Smell, Feel--or any other links to the Outside World.

 

The Semantifier has these points of Contact--Other systems strip every form of Meta-Communication from Language.

 

 

All the Semantifier gets is "Words" in their purest imaginable form.

 

If you tell the Semantifier that "Cat" and "Hat" Rhyme, it will be happy to store that fact away--but it has no way to deduce that from the two words themselves.

 

It starts with a rather large Vocabulary loaded onto its Hard Drive; but each word is "Weighted" only in relation to other Words.

 

It can also over-write its Vocabulary, learn new words that you define for it--and take a stab at deducing the Meaning of New Words from Context.

 

The Semantifier also has a very explicit set of Grammar/Syntactical Rules.....

 

And those Rules are also capable of being Partially Modified by experience.

 

The Semantifier has but one goal in life--It wants to provide the user with Stimulating Conversation.

 

Basically, the more often you use it, and the longer you spend talking to it, the better a Conversational Partner it becomes.

 

If you spend Seventy-Five hours every Week shouting at your Semantifier--you must like it on some level.

 

The Semantifier has no idea if you're Angry, or Speaking Loudly.

 

You can tell it you're Pissed, but the words have absolutely no Emotional Subtext for the Semantifier.

 

The Semantifier has no sense of time.....

 

It gets purely abstract feedback on how it is doing (Computed by other folks or Computers).

 

It has at least three sets of Equations going at once: The momentary Emotional State of the Participant; A derived Estimate of the odds that the Participant will stay entertained Short-Term.....Like over the next fifteen minutes.....And how likely the Participant is to log many hours using the machine, over the next six months.

 

The Machine is originally furnished with some very basic Heuristic Algorithms to help it pick from an almost infinite number of Conversational Gambits. It is constantly monitoring all three feedback circuits and altering its strategy as it goes.....

 

It is also able to adjust how much Emphasis to give each of the Three running tabulations.
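
To make that concrete, here is a minimal Python sketch (mine, purely illustrative--the signal names, default weights, and linear blend are all assumptions, not part of the thought experiment) of how the three running tabulations and their adjustable Emphasis might combine into one score:

# Hypothetical sketch of the Semantifier's three weighted feedback signals.
# Signal names, default weights, and the linear blend are assumptions.
def combined_score(mood, short_term_odds, long_term_odds,
                   weights=(0.2, 0.3, 0.5)):
    """Blend the three running tabulations into one number in [0, 1].

    mood            -- momentary Emotional State of the Participant
    short_term_odds -- odds of staying entertained over ~15 minutes
    long_term_odds  -- odds of logging many hours over ~6 months
    weights         -- adjustable Emphasis on each signal (sums to 1)
    """
    w_mood, w_short, w_long = weights
    return w_mood * mood + w_short * short_term_odds + w_long * long_term_odds

# A Participant in a good mood who seems unlikely to stay long:
print(combined_score(mood=0.9, short_term_odds=0.7, long_term_odds=0.2))  # ~0.49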

 

Finally, if the Semantifier makes a huge Faux Pas, you can push a button to tell it that it isn't Making Sense.....it is entirely up to the Semantifier to decide how important "Making Sense" is to its long-term objectives.

 

The Semantifier can be set up purely for one person, or it can treat all interactions as equally valid indicators of how well it's doing.....

 

Oh, I forgot to tell you: it stores all conversations permanently, and repeats itself or echoes portions of them back to you, according to its ever-shifting equations.....

 

But stuff that doesn't play to good reviews sinks ever lower on the "Play List".

 

If it's a Multiple-use Semanticizer, it may know D-A-V-E finds the Subject of Bare-Bottomed, Busty Young Fems highly "Interesting", and it will try that strategy with L-I-S-A, who is Heterosexual; A-M-A-N-D-A, who is not; and little T-O-M-M-Y, who still thinks Girls are "Icky".

 

Given enough "On the Ground-Combat-Training", our Semanticizer should be able to pass the Turing Test Easily.

 

Is our Semanticizer "Sentient".....

 

I mean, if I could magically give it three seconds to fully experience one isolated Quale, it would have no idea if it had just tasted Chocolate; Placed a "Hand" on a Red-Hot Stove; Saw the Color Red--or whatever.

 

Never mind whether the "System"--Including many of the Peripherals to the Semanticizer--taken together, is Sentient.

 

Is the Semanticizer, as a Piece of Stand Alone Hardware, Sentient?

 

Yippie Kiyay my Friends.

 

The Semanticizer is something that you rarely, if ever, see addressed (Someone pointed this out to me...)

 

"Intelligence" Does not always mean "Sentience", And many Pet owners will agree that Sentience without Intelligence--at least Human Level Intelligence--is the Rule rather than than a screaming exception--at least among Birds and Mammals.

 

Imagine a Science Fiction Story where Robots and Mainframes are Universally Intelligent, while only a Relative Handful ever Achieve Sentience.....

 

And a Machine that wants to be Emancipated must demonstrate at least the Strong Appearance of Sentience.....

 

I'm Picturing a Rather fair process--where at least 60% of the issue could be determined Mathematically and Unambiguously.

 

Anyway, would a "Semanticizer" be Intelligent in your Opinion? How about Sentient?

 

Can we actually have one without the other, do you think?

 

When you Whittle the Little End of Nothing to a Very Fine Point.....

 

What does it look like?

 

Saxon Violence

Edited by SaxonViolence
Posted

If I were building it solely to give folks a Rollicking Good Conversational Partner....

 

Well, I have gone to some length to give it the very minimum contact with the Outside World.

 

It more or less exists in a Universe of Words--because I wanted to explore some ideas.

 

If I were going to market it, I'd give it a Voice Stress analyzer--at least partial eyesight--to keep track of how hot and heavy the Fine Body Movements were coming.....

 

etc., etc.

 

Saxon Violence

Posted

A Gedanken Experiment that I've pondered for a Good Long Time.

 

Let's assume that we can make a device that I call a "Semantifier".

I believe your thought experiment is essentially Searle’s Chinese room. Speaking from personal experience, I can attest that philosopher (not mathematician or computer programmer!) Searle’s short 1980 paper presenting it sent ripples of controversy and excitement through the math/computer science community, and it remains a key landmark in folks’ thinking – in my personal mind map, it’s a diametric balancing weight to Hofstadter’s “would a near-perfect computer simulation of every molecule in your body be as much a human as you?” thought experiment from GEB, which nearly everybody I knew had recently read, was reading, or quickly started reading around the time Searle’s thought experiment started making the rounds.

 

Given enough "On the Ground-Combat-Training", our Semanticizer should be able to pass the Turing Test Easily.

Your assertion gives me the impression that you’re underappreciating how difficult it is to write a program that can, given any amount of training, pass the Turing test – that is, carry on a text chat with humans well enough to fool them into thinking it’s human.

 

Distilling my and various like-minded folks’ conclusions on the subject into one aphoristic phrase, I’d summarize them as: the devil’s in the details. While I think Searle’s incorrect in his intuition that a completely explicit algorithm can’t “be a person” (and thus, being a person, pass the Turing test), actually writing such a program has consumed person-millennia of effort and attention, and proven harder than many expected.

 

Neat, simple trainable text generators can be written using Markov chains, where the system’s state is the preceding 1 to n words. I’ve not played with these in a while, and couldn’t find any of my old play code, so threw together this mess of MUMPS code:

 ;XMCTG: Markov chain text generator
n NA1 x XMCTG(1,0,1) k @NA1 x XMCTG(1,1),XMCTG(1,2) ;XMCTG(1): read text into @$p(R(1),"XMCTG:",2) (default to $na(^XD("XMCTG",1))
s NA1=$p($g(R(1)),"XMCTG:",2) s:NA1="" NA1=$na(^XD("XMCTG",1)),R(1)="XMCTG:"_NA1 ;XMCTG(1,0,1)
n A,L f A=1:1 r L q:L="."  s:$l(L) @NA1@(1,A)=L ;XMCTG(1,1)
n NA1,A,L x XMCTG(1,0,1) s A="" f  s A=$o(@NA1@(1,A)) q:A=""  s L=@NA1@(1,A) x XMCTG(1,2,1) s @NA1@(1,A)=L k:L="" @NA1@(1,A) ;XMCTG(1,2): format @$p(R(1),"XMCTG:",2) (default to $na(^XD("XMCTG",1))
x XMCTG(1,2,1,1),XMCTG(1,2,1,5),XMCTG(1,2,1,2),XMCTG(1,2,1,3),XMCTG(1,2,1,4) ;XMCTG(1,2,1)
s L=$tr(L,"abcdefghijklmnopqrstuvwxyz-+.,[]<>!""#$%&()*/:;=?@\^_`|~","ABCDEFGHIJKLMNOPQRSTUVWXYZ    >>>") ;XMCTG(1,2,1,1)
f  q:L'?1" ".e  s $e(L)="" ;XMCTG(1,2,1,2)
f  q:L'?.e1" "  s $e(L,$l(L))="" ;XMCTG(1,2,1,3)
f  q:L'["  "  s $p(L,"  ",1,2)=$p(L,"  ")_" "_$p(L,"  ",2) ;XMCTG(1,2,1,4)
f  q:L'[">"  s $p(L,">",1,3)=$p(L,">")_$p(L,">",3) ;XMCTG(1,2,1,5): remove tags
n NA2,L9 x XMCTG(2,0,1) k @NA2 x XMCTG(2,1) ;XMCTG(2): compile chain frequency from @$p(R(1),"XMCTG:",2) into @$p(R(2),"XMCTG:",2) (default to $na(^XD("XMCTG",1 2))
s NA2=$p($g(R(2)),"XMCTG:",2) s L9=$p(NA2,":"),$p(NA2,":")="",$e(NA2)="" s:L9<1 L9=2 s:NA2="" NA2=$na(^XD("XMCTG",2)),R(2)="XMCTG:"_L9_":"_NA2 ;XMCTG(2,0,1)
n NA1,NA2,L9 x XMCTG(1,0,1),XMCTG(2,0,1) x XMCTG(2,1,1),XMCTG(2,1,2) ;XMCTG(2,1)
n SP,A,L,I,B,P,S,I9 s SP=" ",A="" f  s A=$o(@NA1@(1,A)) q:A=""  s L=@NA1@(1,A),I9=$l(L,SP) X XMCTG(2,1,1,1),XMCTG(2,1,1,2) ;XMCTG(2,1,1)
f I=1:1:I9 s B=$p(L,SP,I) S:I=1 @NA2@(1,SP,1,B)=$g(@NA2@(1,SP,B))+1 f P=1:1:I-1 i I-P'>L9 s S=SP_$p(L,SP,P,I-1),@NA2@(1,S,1,B)=$g(@NA2@(1,S,1,B))+1 ;XMCTG(2,1,1,1)
f P=I9-L9+1:1:I9 s S=SP_$p(L,SP,P,I9),@NA2@(1,S,2)=$g(@NA2@(1,S,2))+1 ;XMCTG(2,1,1,2)
n A,B s (A,B)="" f  s A=$o(@NA2@(1,A)) q:A=""  s @NA2@(1,A,0)=$g(@NA2@(1,A,2)) f  s B=$o(@NA2@(1,A,1,B)) q:B=""  s @NA2@(1,A,0)=@NA2@(1,A,0)+@NA2@(1,A,1,B) ;XMCTG(2,1,2)
n NA3,L8,C9 x XMCTG(3,0,1) k @NA3 x XMCTG(3,1) ;XMCTG(3): generate random text from compile chain frequency from @$p(R(2),"XMCTG:",2) into @$p(R(3),"XMCTG:",2) (default to $na(^XD("XMCTG",2 3))
s NA3=$p($g(R(3)),"XMCTG:",2) s L8=$p(NA3,":"),C9=$p(NA3,":",2),$p(NA3,":",1,2)="",$e(NA3)="" s:L8<1 L8=2 s:C9<1 C9=99 s:NA3="" NA3=$na(^XD("XMCTG",3)),R(3)="XMCTG:"_L8_":"_C9_":"_NA3 ;XMCTG(3,0,1)
n NA2,NA3,S,SS,A,B,L x XMCTG(2,0,1),XMCTG(3,0,1) s:L8>L9 L8=L9 s S="" f C=1:1:C9 x XMCTG(3,1,1) s @NA3@(1,C)=B,S=$S(S="":"",1:S_" ")_B,LS=$L(S," ") S:LS>L8 S=$P(S," ",LS-L8+1,LS) s:B="" S="" ;XMCTG(3,1)
s SS=" "_S,N=$r(@NA2@(1,SS,0)),B="" f  s B=$o(@NA2@(1,SS,1,B)) q:B=""  s N=N-@NA2@(1,SS,1,B) q:N<0  ;XMCTG(3,1,1)
n NA3,L8,C9,A x XMCTG(3,0,1) s A="" W ! f  s A=$o(@NA3@(1,A)) Q:A=""  S B=@NA3@(1,A) W:$X+$L(B)>78 ! W B," " W:B="" !! ;XMCTG(4): write text from @$p(R(3),"XMCTG:",2) (default to $na(^XD("XMCTG",3))

fed it your post, and asked it to produce about 300 words. Using n=1, I got:

IMAGINE A WHO STILL THINKS GIRLS ARE UNIVERSALLY INTELLIGENT WHILE ONLY A PIECE OF SENTIENCE

AND THEY ARE ICKY

ANYWAY WOULD A SCIENCE FICTION STORY WHERE AT LEAST AMONG BIRDS AND HOW ABOUT SENTIENT

GIVEN ENOUGH ON SOME VERY FINE POINT

FINALLY IF IT HAD JUST TASTED CHOCOLATE PLACED A WHO STILL THINKS GIRLS ARE ICKY

BUT STUFF THAT DOESN'T PLAY LIST

IMAGINE A N D A VERY BASIC HEURISTIC ALGORITHMS TO THE SEMANTIFIER CAN TELL IT IS CONSTANTLY MONITORING ALL INTERACTIONS AS TO LOG MANY OF THEM BACK TO ITS DOING COMPUTED BY OTHER DO YOU ACCORDING TO PASS THE NEXT SIX MONTHS

YOU MUST LIKE

IS THE SEMANTICIZER IS TO TELL IT IS TO IT THE PARTICIPANT WILL AGREE THAT SENTIENCE AND MANY HOUR USING THE SEMANTIFIER CAN TREAT ALL INTERACTIONS AS TO TELL YOU IT WOULD HAVE ABSOLUTELY NO IDEA IF THE NEXT SIX MONTHS

IF IT WANTS TO THE SEMANTIFIER HAS AT LEAST AMONG BIRDS AND REPEATS ITSELF OR WHATEVER

BUT ONE GOAL IN LIFE IT YOU'RE ANGRY OR COMPUTERS

AND THEY ARE ICKY

IMAGINE A DERIVED ESTIMATE OF STAND ALONE HARDWARE SENTIENT

IS ALSO HAS BUT THE MACHINE THAT STRATEGY WITH STIMULATING CONVERSATION

THE SEMANTIFIER HAS NO IDEA IF I S A HAND ON THE MACHINE OVER THE TURING TEST EASILY

WHEN YOU THINK

SAXON VIOLENCE

IS

IMAGINE A VERY FINE POINT

I FORGOT TO IT WILL AGREE THAT IT IS ORIGINALLY FURNISHED WITH SOME VERY FINE POINT

WHAT THE SEMANTIFIER HAS NO SENSE OF CONVERSATIONAL PARTNER THAT IT MAY KNOW D A WHO STILL THINKS GIRLS ARE UNIVERSALLY INTELLIGENT WHILE ONLY A BUTTON TO ADJUST HOW ABOUT SENTIENT

 

Using n=2:

BUT STUFF THAT DOESN'T PLAY TO GOOD REVIEWS SINKS EVER LOWER ON THE GROUND COMBAT TRAINING OUR SEMANTICIZER SHOULD BE ABLE TO PASS THE TURING TEST EASILY

INTELLIGENCE AT LEAST THE STRONG APPEARANCE OF SENTIENCE

PICTURING A RATHER FAIR PROCESS WHERE AT LEAST THREE SETS OF EQUATIONS GOING AT ONCE THE MOMENTARY EMOTIONAL STATE OF THE PERIPHERALS TO THE SEMANTICIZER IS SOMETHING THAT YOU RARELY IF EVER SEE ADDRESSED SOMEONE POINTED THIS OUT TO ME

AND LITTLE T O M M Y WHO STILL THINKS GIRLS ARE ICKY

CAN PUSH A BUTTON TO TELL IT YOU'RE PISSED BUT THE WORDS HAVE ABSOLUTELY NO EMOTIONAL SUBTEXT FOR THE SEMANTIFIER

O I FORGOT TO TELL YOU IT STORES ALL CONVERSATIONS PERMANENTLY AND REPEATS ITSELF OR ECHOS PORTIONS OF THEM BACK TO YOU ACCORDING TO ITS LONG TERM OBJECTIVES

AND MANY PET OWNERS WILL AGREE THAT SENTIENCE WITHOUT INTELLIGENCE AT LEAST HUMAN LEVEL INTELLIGENCE IS THE RULE RATHER THAN THAN A SCREAMING EXCEPTION AT LEAST 60 OF THE ISSUE COULD BE DETERMINED MATHEMATICALLY AND UNAMBIGUOUSLY

I FORGOT TO TELL YOU IT STORES ALL CONVERSATIONS PERMANENTLY AND REPEATS ITSELF OR ECHOS PORTIONS OF THEM BACK TO YOU ACCORDING TO ITS EVER SHIFTING EQUATIONS

YOU RARELY IF EVER SEE ADDRESSED SOMEONE POINTED THIS OUT TO ME

YOU ACCORDING TO ITS EVER SHIFTING EQUATIONS

ANYWAY WOULD A SEMANTICIZER BE INTELLIGENT IN YOUR OPINION HOW ABOUT SENTIENT

BUT STUFF THAT DOESN'T PLAY TO GOOD REVIEWS SINKS EVER LOWER ON THE PLAY LIST

YIPPIE KIYAY MY FRIENDS

WHEN YOU WHITTLE THE LITTLE END OF NOTHING TO A VERY FINE POINT

 

Using n=6:

INTELLIGENCE IS THE RULE RATHER THAN THAN A SCREAMING EXCEPTION AT LEAST AMONG BIRDS AND MAMMALS

PICTURING A RATHER FAIR PROCESS WHERE AT LEAST 60 OF THE ISSUE COULD BE DETERMINED MATHEMATICALLY AND UNAMBIGUOUSLY

AND MAINFRAMES ARE UNIVERSALLY INTELLIGENT WHILE ONLY A RELATIVE HANDFUL EVER ACHIEVE SENTIENCE

O I FORGOT TO TELL YOU IT STORES ALL CONVERSATIONS PERMANENTLY AND REPEATS ITSELF OR ECHOS PORTIONS OF THEM BACK TO YOU ACCORDING TO ITS EVER SHIFTING EQUATIONS

FINALLY IF THE SEMANTIFIER MAKES A HUGE FAUX PAX YOU CAN PUSH A BUTTON TO TELL IT THAT IT ISN'T MAKING SENSE IT IS ENTIRELY UP TO THE SEMANTIFIER TO DECIDE HOW IMPORTANT MAKING SENSE IS TO ITS LONG TERM OBJECTIVES

YOU WHITTLE THE LITTLE END OF NOTHING TO A VERY FINE POINT

CAN WE ACTUALLY HAVE ONE WITHOUT THE OTHER DO YOU THINK

WHAT THE SYSTEM INCLUDING MANY OF THE PERIPHERALS TO THE SEMANTICIZER TAKEN TOGETHER IS SENTIENT

I COULD MAGICALLY GIVE IT THREE SECONDS TO FULLY EXPERIENCE ONE ISOLATED QUALIA IT WOULD HAVE NO IDEA IF IT HAD JUST TASTED CHOCOLATE PLACED A HAND ON A RED HOT STOVE SAW THE COLOR RED OR WHATEVER

YIPPIE KIYAY MY FRIENDS

WHEN YOU WHITTLE THE LITTLE END OF NOTHING TO A VERY FINE POINT

WHAT DOES IT LOOK LIKE

YIPPIE KIYAY MY FRIENDS

WHEN YOU WHITTLE THE LITTLE END OF NOTHING TO A VERY FINE POINT

GIVEN ENOUGH ON THE GROUND COMBAT TRAINING OUR SEMANTICIZER SHOULD BE ABLE TO PASS THE TURING TEST EASILY

IMAGINE A SCIENCE FICTION STORY WHERE ROBOTS AND MAINFRAMES ARE UNIVERSALLY INTELLIGENT WHILE ONLY A RELATIVE HANDFUL EVER ACHIEVE SENTIENCE

PICTURING A RATHER FAIR PROCESS WHERE AT LEAST 60 OF THE ISSUE COULD BE DETERMINED

 

Though a single post isn’t much training, years ago I dumped dialog from entire novels into similar programs, using much more complicated state-determining rules, and never got anything that showed much promise of passing a Turing test (and thus none of winning me fame and fortune of the “how could nobody else have tried such an obvious thing” kind).
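
For anyone who’d rather not decipher MUMPS, here’s a rough Python sketch of the same order-n idea – a from-scratch illustration with names of my own choosing, not a translation of the program above:

import random
from collections import defaultdict

def train(words, n=2):
    """Map each length-n window of words to the words seen to follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - n):
        chain[tuple(words[i:i + n])].append(words[i + n])
    return chain

def generate(chain, count=50):
    """Random-walk the chain, restarting at a random state on a dead end."""
    state = random.choice(list(chain))
    out = list(state)
    for _ in range(count):
        followers = chain.get(state)
        if not followers:  # unseen state: jump somewhere fresh
            state = random.choice(list(chain))
            followers = chain[state]
        word = random.choice(followers)
        out.append(word)
        state = state[1:] + (word,)
    return " ".join(out)

# Toy corpus; real training text would be much larger.
text = "the cat sat on the mat and the cat saw the hat on the mat".split()
print(generate(train(text, n=2), count=20))

The larger n is, the more verbatim source text the output parrots – exactly the effect visible in the n=1, n=2, and n=6 samples above.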

 

Passing the Turing test isn’t easy – if it were, we’d expect that at least one of the many major attempts, using one of the many approaches, would have passed it by now.

Guest MacPhee
Posted

A computer is only a box of electric switches. These switches can be made to turn on and off, in a sequence.

The sequence is determined by a human, who writes a "program". This program makes the switches go on and off, according to the way the human has decided they should.

 

The computer itself - the box of switches - doesn't decide anything by itself. It just follows its program - which has been put into it by a human. The computer is just an "instrument", responding to human instructions. Like a typewriter, or a piano, responding to its pressed keys.

 

So isn't asking - "Can a computer become sentient", like asking - Can a typewriter become sentient - and start typing out novels by itself - or a piano suddenly start playing the Warsaw Concerto of its own volition?

Posted

A computer is only a box of electric switches. These switches can be made to turn on and off, in a sequence.

The sequence is determined by a human, who writes a "program". This program makes the switches go on and off, according to the way the human has decided they should.

 

The computer itself - the box of switches - doesn't decide anything by itself. It just follows its program - which has been put into it by a human. The computer is just an "instrument", responding to human instructions. Like a typewriter, or a piano, responding to its pressed keys.

MacPhee, you appear to be stating an extreme variation of John Searle’s 1980 argument, a sort of “by jingo” appeal to the idea that “a box of electric switches” can’t “decide anything by itself”.

 

I think your claim is, in everyday terms, refuted by common evidence. For some time, computer programs have been written that produce results unanticipated by the people who wrote them – either intentionally, as in the case of fiction-writing programs like the one described in this Jan 2007 Discover magazine article, or accidentally, as nearly anyone who has written complicated programs knows. Such programs, to use one of my favorite IT euphemisms for “didn’t work right”, “exceed the expectations of their designers” – an experience whose essence was captured about half a century ago in this semi-famous jingle:

 

I really hate this damn machine

I wish that they would sell it

It never does just what I want

But only what I tell it

and countless similar ones.

 

Intentional unexpectedness doesn’t require a complicated, sophisticated program like the one described in the Discover article. Even the tremendously simple little program I wrote and posted in post #4 – the essential parts of its “training” program and “creating” subprograms have about 220 bytes of high-level code – produces output that I didn’t anticipate.

 

On the other hand, we know from intuitive experience that there’s a great difference between, say, “programming” a human by carefully selecting his parents and educating and instructing him to behave in a specific way, such as to be a fiction writer, and writing a computer program to write fiction.

 

Whether this difference is qualitative or quantitative is the question at the heart of Searle’s strong/weak AI framing of it. Proponents of strong AI, such as myself, believe it to be essentially quantitative – that an adequate program, executed by a sufficiently large and fast computer, can be practically indistinguishable from a human being. Proponents of weak AI, such as Searle, believe that no computer, no matter how large, fast, and well-programmed, can.

 

So isn't asking - "Can a computer become sentient", like asking - Can a typewriter become sentient - and start typing out novels by itself ...

I think asking “can a typewriter begin typing out novels by itself?” is more like asking “can an arc welder begin welding parts together by itself?” than “can a computer become sentient?”

 

Typewriters (for many folk born after 1990 who’ve never used or seen one, they’re essentially printers hard-connected to keyboards) aren’t designed to do anything but print on paper the small shapes (glyphs) corresponding exactly to the keys pressed (electric switches, or, in older ones, mechanical linkages). Like a welder, they simply don’t have any mechanical parts that would allow them to do what we use them to do without being handled.

 

If I see a welder attached to a computer controlled robotic arm, however, I expect to see it “begin welding by itself”. If I see a printer connected to a computer system (which with our present day ubiquitous wireless technology, means just one that’s powered on), I expect to see it “begin writing novels” – or practically any other sort of text or graphics – “by itself”.

 

... or a piano suddenly start playing the Warsaw Concerto of its own volition?

I don’t think any person who’s visited a piano store in the last decade would be too surprised to see a piano like a Yamaha Disklavier start playing any well-known piece by itself – though the experience can be startling the first time it happens to you. :)

 

I think these questions and examples, however, distract from the question of whether the Turing test can be passed, and if so, how.

Guest MacPhee
Posted

CraigD, thanks for your reply. In a way, it makes my point. Only a very intelligent human being could have written it. You say you are a proponent of "strong AI", which means, to quote from your post:

 

"an adequate program, executed by a sufficiently large and fast computer, can be practically indistinguishable from a human being"

 

Well, with respect, that's not supported by any evidence to date. Doesn't any program soon get exposed? It gives gibberish answers to sensible questions.

 

Because, no-one can write a program that makes a box of electric switches into the equivalent of a human being. I doubt that any program can be written to reproduce the behavior of a pigeon. Or even a beetle or ant.

 

Even if such a program could be written, what would it demonstrate? - the ingenuity of the human programmer.

And this brings us back to the "Turing Test". The test boils down to this:

 

"Can a human, program a computer ingeniously enough, to fool another human?" Well, perhaps. But the ingenuity would be the credit of the human programmer. Surely you don't think the box of transistors should be applauded?

Posted

Reading through the conversation so far, I am surprised that no one has mentioned Neural Networks:

An artificial neural network (ANN), usually called neural network (NN), is a mathematical model or computational model that is inspired by the structure and/or functional aspects of biological neural networks. A neural network consists of an interconnected group of artificial neurons, and it processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase.

 

Essentially, they simulate the individual neurons of a biological brain.
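
For the curious, the arithmetic of one such artificial neuron is very simple: a weighted sum of inputs squashed by an activation function. A minimal Python sketch (the inputs, weights, and bias here are invented for illustration):

import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted input sum squashed by a sigmoid."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # output in (0, 1)

# Toy example: two inputs with hand-picked weights.
print(neuron([0.5, 0.8], [1.2, -0.7], bias=0.1))  # ~0.535

A network "learns" by nudging those weights to reduce the error of its outputs.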

Posted

Building something that can pass the Turing Test is so far from our "State of the Art" at present, that nothing useful can really be said about them?

It’s unwise in my experience to make “state of the art” predictions of what will or won’t happen in the near future of computer technology, but the chances of a program passing the “standard” Turing test anytime soon (e.g. by this year’s Loebner Prize or some other upcoming Turing test contest) look pretty slim to me.

 

Just because it seems unlikely the standard Turing test will be passed anytime soon doesn’t mean nothing useful can be said about trying. However, I think it’s accurate to say the sense of urgency and importance of the test, after waxing from 1950 to somewhere around, by my guess, 1980, has been waning since. Lots of careers and commercial ventures dedicated to passing/winning the test have come and gone, producing lots of useful software, but nothing like the HAL 9000, which back in 1968 lots of folk thought might reasonably be for sale sometime in the 1990s. You’d have to be a hell of a salesman these days to sell investors on the idea of recruiting a crack team of elite young AI programmers and giving them all the hardware they desire to write a Turing-test-winning program by 20??, in large part, I think, because the world seems to be getting along pretty well (and investors getting plenty rich) without one.

 

CraigD, thanks for your reply. In a way, it makes my point. Only a very intelligent human being could have written it.

Thank you! (blush)

 

You say you are a proponent of "strong AI", which means, to quote from your post:

 

"an adequate program, executed by a sufficiently large and fast computer, can be practically indistinguishable from a human being"

 

Well, with respect, that's not supported by any evidence to date. Doesn't any program soon get exposed? It gives gibberish answers to sensible questions.

I’ve not followed the Turing test attempt literature in detail, but I think you’re essentially correct. Good Tt programs avoid giving gibberish with tricky claims of not understanding, attempts to change the subject, feigned impatience, etc., but none does this well enough to fool many humans who know they’re administering a Tt – the “standard” conditions of the test.

 

And this brings us back to the "Turing Test". The test boils down to this:

 

"Can a human, program a computer ingeniously enough, to fool another human?" Well, perhaps. But the ingenuity would be the credit of the human programmer. Surely you don't think the box of transistors should be applauded?

I agree – though I’m pretty generous with applause, and would spare some for the computer that ran a successful Tt program.

 

The Turing test poses not only deep technical challenges, but deep philosophical ones, the latter I think in most part because it challenges us to understand how we think. Until we have a practically effective understanding of that, we can only have educated opinions on whether a computer program can ever “think like a human” enough to pass the Tt. Mine’s that it can, yours, I gather, that it can’t. Many more famous folk share our disagreement – check out this “long bet” (the first one tracked by the Long Now foundation) between Mitch Kapor and Ray Kurzweil.

 

I find this thought experiment on the subject, from 1979’s GEB (though the idea was around for some time before), a useful, and fun, one, which I hope can give your skepticism a healthy shaking:

 

Imagine that a very large and fast, yet otherwise ordinary, computer has been programmed to accurately simulate the behavior of each individual molecule in a human body;

 

Would this program (by speaking with its simulated lungs, larynx, and lips, typing or signing with its simulated fingers, etc.) be able to pass the Turing test?

 

Reading through the conversation so far, I am surprised that no one has mentioned Neural Networks:

An artificial neural network (ANN), usually called neural network (NN), is a mathematical model or computational model that is inspired by the structure and/or functional aspects of biological neural networks …

Artificial neural net programs, loosely or tightly based on biological nervous systems, are scaled-down variations of the super-powerful program proposed in this thought experiment.

 

I’d be careful with this statement, however

Essentially, they simulate the individual neurons of a biological brain.

Most present-day ANNs are inspired by, not accurate simulations of, biological brains. Though good work has been done on accurately simulating actual nerve tissue, it’s hard work, and when last I checked, it had succeeded only in simulating very small collections of neurons similar to those excised from animal brains and experimented with in dishes, or, in the most impressive cases, functional subparts of insect nerve systems.

Posted

Artificial neural net programs, loosely or tightly based on biological nervous systems, are scaled-down variations of the super-powerful program proposed in this thought experiment.

 

I’d be careful with this statement, however

 

Most present-day ANNs are inspired by, not accurate simulations of, biological brains. Though good work has been done on accurately simulating actual nerve tissue, it’s hard work, and when last I checked, it had succeeded only in simulating very small collections of neurons similar to those excised from animal brains and experimented with in dishes, or, in the most impressive cases, functional subparts of insect nerve systems.

 

Sorry, I suppose my use of "simulate" was rather ambiguous. I was originally referring to simulating the processes of a neuron, rather than the actual tissue, but I can see how, especially with the prior discussion about simulating the entirety of the tissue, it could easily be misunderstood as referring to simulating the entire neuron. Anyway, your point about needing a huge amount of processing power is still valid, as running anywhere near as many neurons as are in the human brain at even near-real time would be very (and probably unrealistically) resource-intensive (although a distributed computing network may be worth looking into).

  • 1 month later...
Posted (edited)

I think that the opening post is the Chinese Room, like somebody else said. I used to write these programs in the '80s. It isn't sentience, or intelligence. But it is possible to create sentience and intelligence in a computer. It is all about the energy distribution that the computer uses. In our brains we use electron distribution, photon distribution, and time distribution. Currently PCs do not propagate energy in the correct dimensions required for sentience. But one day they could use other forms of energy distribution, and become sentient.

 

The energy distribution required is an 8D grid X/Y/Z/In/Out/-X/-Y/-Z. In/Out are scalar dimensions of time. The scalar dimensions are the centre of Atomic Nucleus, and the centre of electrons.

Edited by Pincho Paxton
Posted

the real question is not whether machines think, but whether men do.

 

computers are capable of being programmed to write their own programs, design their own components, adapt to changing situations, learn, remember, etc. so in what respect are they not intelligent? because they can't currently learn a language? computers have their own language. so at what point do you suggest we attribute sentience to a computer?

when it becomes self-aware? how do you test for such a thing?

Guest MacPhee
Posted

the real question is not whether machines think, but whether men do.

 

computers are capable of being programmed to write their own programs, design their own components, adapt to changing situations, learn, remember, etc. so in what respect are they not intelligent? because they can't currently learn a language? computers have their own language. so at what point do you suggest we attribute sentience to a computer?

when it becomes self-aware? how do you test for such a thing?

 

Modern computers are just fast adding-up machines.

 

They work ultra-fast - because they use electronic transistors, instead of slower-moving electro-mechanical devices, such as relays. Relays are slower, but can give the same results.

 

Suppose we use relays to make an electro-mechanical computer. It contains solenoids, metal slugs and wire coils.

 

Would you say these metal slugs and coiled wires are sentient? Obviously not - so why attribute sentience to transistors?

Posted (edited)

the real question is not whether machines think, but whether men do.

 

computers are capable of being programmed to write their own programs, design their own components, adapt to changing situations, learn, remember, etc. so in what respect are they not intelligent? because they can't currently learn a language? computers have their own language. so at what point do you suggest we attribute sentience to a computer?

when it becomes self-aware? how do you test for such a thing?

 

 

It's a valid point. There is one advantage of computer sentience over man's sentience, however: you can put a camera in its eyes, and hear its thoughts. As it looks around and sees a mirror, and you hear it ask "Is that what I look like?" and "Why do I look different to my human friend?", you will start to accept its sentience after a few days.

Edited by Pincho Paxton
Posted

Modern computers are just fast adding-up machines.

 

They work ultra-fast - because they use electronic transistors, instead of slower-moving electro-mechanical devices, such as relays. Relays are slower, but can give the same results.

 

Suppose we use relays to make an electro-mechanical computer. It contains solenoids, metal slugs and wire coils.

 

Would you say these metal slugs and coiled wires are sentient? Obviously not - so why attribute sentience to transistors?

Our body parts are not sentient either.
