swampfox Posted March 5, 2007

My heart leaps up when I behold
A rainbow in the sky:
So was it when my life began,
So is it now I am a man,
So be it when I shall grow old,
Or let me die!
The Child is father of the Man:
And I could wish my days to be
Bound each to each by natural piety.

by Wordsworth

(I believe this is relevant to the topic of the discussion, since it illustrates what a computer can't do. How would a computer device as we know them today ever conclude on its own, without us programming it to "blindly" say so, that the child is father of the man, and then explain why it concluded that without our help?)
Tormod Posted March 5, 2007

We don't only drill for oil and kill baby seals in Norway. We actually do useful stuff, too: Computer Generated Writing
TheFaithfulStone Posted March 5, 2007

Did you read "Age of Spiritual Machines", Kalesh? Good book, if kinda the transhumanist equivalent of "Your Best Life Now" or whatever the money-from-Jesus program of the week is. I don't think it's really designed to be "prophecy", and I think Ray even hedges his bets accordingly, talking about "Wild Card" technologies and such. More than predicting what the world is ACTUALLY going to be like, I think it's kind of a post-singularity daydream: "If only we'd invest in AI research more, then we could all go live in the land of digital milk and honey." So, it is what it is - but it's still fun. My favorite? The global network of intelligent nanobots that enables you to create anything your heart desires out of thin air. Utopian literature, par excellence!

On another note - I was fooled by the Hypobot for about ten seconds in the chat room once. Does ten seconds of passing the Turing Test mean it had ten seconds of consciousness?

Here's one - how do you know a computer ISN'T self-aware? It "knows" what the state of its processors is, what is in its memory. Give it an internet connection and it can figure out where it is in the universe. If a computer could tell it was talking to another computer in the Turing test, has it made strides toward passing?

TFS
kalesh Posted March 5, 2007 (Author)

Did you read "Age of Spiritual Machines", Kalesh?

Sorry, no.

On another note - I was fooled by the Hypobot for about ten seconds in the chat room once. Does ten seconds of passing the Turing Test mean it had ten seconds of consciousness?

Umm... no. I have been tinkering with the so-called "chatbots" and I happen to know how they work. What they do is take the input, match it against a database of sentences, then take the responses for that sentence (supplied by the programmer), choose one, and put it out as the output. The inherent problem is that the bot can only respond to sentences that are in its database; it cannot "think" and give responses to sentences that are not. Moreover, it only gives out what its programmer has put in there and is unable to give responses "in its own words". Once computers stop storing only data and start storing "knowledge" and "understanding" of concepts, they could begin to have consciousness.

Here's one - how do you know a computer ISN'T self-aware? It "knows" what the state of its processors is, what is in its memory. Give it an internet connection and it can figure out where it is in the universe. If a computer could tell it was talking to another computer in the Turing test, has it made strides toward passing?

It doesn't "know" anything; all it has is data. Once it "knows" something it should be able to apply the knowledge to unrelated but similar concepts, but it cannot do that without being told "exactly" how to do it.
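[Editor's aside: the lookup-style chatbot kalesh describes can be sketched in a few lines of Python. All names and canned phrases here are hypothetical; this is a minimal illustration of the database-matching approach, not any particular bot.]

```python
import random

# Toy lookup chatbot: the "database" maps known sentences to canned
# responses supplied by the programmer. The bot cannot respond to
# anything it has not been explicitly given - the inherent limitation
# described in the post above.
RESPONSES = {
    "hello": ["Hi there!", "Hello yourself."],
    "how are you": ["I'm fine, thanks.", "Doing well."],
}

def reply(user_input):
    key = user_input.lower().strip(" ?!.")
    matches = RESPONSES.get(key)
    if matches:
        return random.choice(matches)  # pick one canned response
    return "I don't understand."       # no "thinking" happens here

print(reply("Hello!"))
print(reply("Is the child father of the man?"))  # outside the database
```

Everything it says was typed in by the programmer; nothing is ever "in its own words".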
Queso Posted March 5, 2007

NOT YET. Computers evolve every day with the help of man, and someday in the future it will be vice versa.
TheFaithfulStone Posted March 6, 2007

It doesn't "know" anything; all it has is data. Once it "knows" something it should be able to apply the knowledge to unrelated but similar concepts, but it cannot do that without being told "exactly" how to do it.

You have a pretty distinct program that tells you how to do things. A computer can, in fact, with specialized software, "learn" things. For example, my computer (with Quicksilver) has "learned" that when I type "com" I sometimes mean "Compose Email", but in other contexts I mean "Compress." It used to think, for instance, that I meant "Compress" all the time. Of course, there is a program that tells it "exactly" how to do all this, but don't you have a biological program that does the same thing? It's a lot more advanced than simple predictive text algorithms, granted - but it's a matter of degree.

But let's back up a step. Does my dog have a sense of self? I think he does - he comes when I call him, but not when I call his brother. He eats (anything) off the floor, but not off the table. He has applied simple knowledge - "things on the floor are okay to eat" - to a different situation: "Stone dropped a piece of bacon - fair game" vs. "Stone has a piece of bacon in his hand - uncool." A dog is self-aware. What about a cricket? Is it self-aware?

I have questions about whether we would even know we'd created an artificial intelligence. We might just think it was an especially annoying version of ELIZA.

TFS
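[Editor's aside: the adaptive abbreviation matching TFS describes can be sketched as a frequency counter - the program "learns" which command a prefix means by counting what the user actually picks. This is a hypothetical toy, not Quicksilver's real algorithm.]

```python
from collections import defaultdict

class PrefixLearner:
    """Suggests the command most often chosen for a given prefix."""

    def __init__(self, commands):
        self.commands = commands
        # counts[prefix][command] = how often the user picked it
        self.counts = defaultdict(lambda: defaultdict(int))

    def suggest(self, prefix):
        # Candidates are commands starting with the prefix, ranked by
        # how often the user has chosen each for this prefix before.
        candidates = [c for c in self.commands
                      if c.lower().startswith(prefix.lower())]
        return max(candidates,
                   key=lambda c: self.counts[prefix][c],
                   default=None)

    def record_choice(self, prefix, command):
        self.counts[prefix][command] += 1

learner = PrefixLearner(["Compose Email", "Compress"])
learner.record_choice("com", "Compress")
learner.record_choice("com", "Compose Email")
learner.record_choice("com", "Compose Email")
print(learner.suggest("com"))  # Compose Email
```

As TFS says, a program still tells it "exactly" how to do all this - but the mapping from "com" to a command was never hard-coded; it emerged from usage.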
kalesh Posted March 6, 2007 (Author)

To sum it up in simple terms, I would say a computer is self-aware and conscious when it can do the following:
A) be creative (without the programmer explicitly telling it how to do so)
B) have a sense of humor (real, not simulated)
C) have a sense of emotion (real, not simulated)
D) store knowledge and understanding of concepts at the storage level, instead of storing data and applying algorithms to it
E) converse with a human as easily as one human converses with another

I think it can only be done with a massively parallel computer, which stores data in analogue format instead of digital.
Queso Posted March 6, 2007

I think it can only be done with a massively parallel computer, which stores data in analogue format instead of digital.

This better blow your mind 'cause it blew mine. Ready? There's going to be a new kind of intelligence. Analog intelligence has been evolving for millions, maybe even billions of years, I don't know. Up until not too long ago, there was never digital intelligence here on Earth. We made it up. When you think about the evolutionary paradigm of intelligence, you can clearly see that there is evidently going to be something new that will spawn from the digital and the analog. It will be both, but it will be something different. I can't even imagine what. Intelligence will spawn intelligence. It's just obvious, too. Look around. AI WILL BE SMARTER THAN HUMAN BEINGS. The evolution is inevitable! Which means you can't even imagine what that means.
Boerseun Posted March 6, 2007

Quite interesting, Kalesh. Lemme see...

A) be creative (without the programmer explicitly telling it how to do so)

Once again, the problem with us not being able to emulate the human brain yet is that we don't completely understand the human brain. For instance, this requirement of yours of being creative without being programmed to be might be flawed. You see, creativity might be hardwired in humans. It might have been a necessary trait years ago, when you were still living in a cave and were confronted with an enormous mammoth. Your tribe will starve if you don't kill the mammoth, but seeing as you're a heck of a lot smaller, you have to come up with some amazingly creative way of killing it. Creativity might be hardwired (in humans, at least), in which case it will be perfectly allowable for it to be programmed into some conceivable brain-emulator computer (once we understand not only the brain, but creativity as well!).

B) have a sense of humor (real, not simulated)
C) have a sense of emotion (real, not simulated)

Both humour and emotion are responses to the social environment. The Brits have a completely different sense of humour than, say, the Finns. It's a social thing, and once a machine can learn, it can probably learn what's funny and what's not. Young kids generally don't laugh at adult jokes, because they don't have a well-developed frame of reference yet and their sense of irony is still to emerge. Emotion, on the other hand, seems to exist on a much lower brain level than humour and is probably also hardwired into the brain. Therefore, emotional responses can be preprogrammed into such a conceivable machine. For instance, a baby a few days old can discern between frowning faces and happy faces without ever having learnt the meaning of the visual input of smiles, frowns, nods, etc. They are universal and ingrained in us. We can therefore program the machine to do the same without breaking any rules.

D) store knowledge and understanding of concepts at the storage level instead of storing data and applying algorithms to it

Although I see what you're trying to say, our understanding of the workings of the human mind doesn't necessarily indicate this. We still don't know where, in the flow of waves from neuron to neuron, synapse to synapse, a memory is 'perceived'. Our understanding is way too lacking to make this a prerequisite as well.

E) able to converse with a human as easily as one human converses with another

Although I might be picking nits here, the ease with which humans communicate with each other is also debatable. There are people who can't communicate at all, not because there's anything physically wrong with them, but because of neuroses and paranoia and a host of conceivable mental issues that could be considered, in a hypothetical machine, as malprogramming, or evidence that human intelligence can't be mimicked electronically. Yet those with these kinds of mental issues are still considered to be human, and intelligent. The lines are way too gray for communication to be a prerequisite at the level you described above.

I think it can only be done with a massively parallel computer, which stores data in analogue format instead of digital.

I tend to agree with the analogue vs. digital model, and eventually what we'll end up with is some sort of mixture of both. The interesting thing is that a neuron is probably the slowest switching element imaginable, and although the 'speed of thought' is used in popular culture to describe the fastest speed there is, brainwaves don't propagate much faster than a cruising donkey-cart. Yet, with parallel 'processing' on an analogue machine, the brain outperforms digital machines in a host of ways.

But besides, why exactly would we want to mimic intelligence? Imagine for a second you had such a machine. What good would it be, apart from the coolness and geek factors involved? A machine that sucks at calculation, whose actions can't be predicted, etc.? What, exactly, would the use be of such a contraption?

Sorry if I came across as a bit anal in the analysis of your prerequisites, but I find it very interesting, and I think the points that need to be satisfied before we know that we've emulated a human mind are very gray, at best, because we don't really know what the human mind is about in the first place! ;)
Buffy Posted March 6, 2007

Your definition is awfully vague. Most of these items depend on a judgement call where different people will make different judgements:

A) be creative (without the programmer explicitly telling it how to do so)

We're already there, depending on how you define "creative." If it's simply "without the programmer explicitly telling it how to do so," we've been doing this for a long time. Creativity is a pretty rote process. Remember that Edison said it was 99% perspiration. He even tried peanut butter when "creating" the light bulb. Moreover, "truly creative" ability is really only demonstrated by a small segment of the population! Are all those humans non-sentient?

B) have a sense of humor (real, not simulated)
C) have a sense of emotion (real, not simulated)

How would you be able to tell? What is a sense of humor or emotion? Haven't you ever faked an emotion yourself?

D) store knowledge and understanding of concepts at the storage level instead of storing data and applying algorithms to it

This has been an area of research for nearly two decades. It's long been recognized that in order to do machine understanding of language (as opposed to translation, which has succumbed to brute-force techniques in recent years), you need massive stores of "world knowledge", and it's virtually impossible to manually construct the relationship graphs of this knowledge. As a result, the computers themselves are "learning" the knowledge and building the structures using neural network techniques. The process is understood; the hardware is simply not fast enough yet, so it is awaiting those massively parallel machines!

E) able to converse with a human as easily as one human converses with another

Already done, except for the hardware. Heck, ELIZA was mesmerizingly good at this 40 years ago! Are you familiar with the Turing Test? Its interesting implication is that the best we can do is say that we can't tell the difference between a machine and a human. And worse, it's subject to having the human falsely identified as the machine, because it's still entirely subjective! As Supreme Court Justice Potter Stewart once said, "I can't define pornography, but I know it when I see it." Not exactly definitive...

I think it can only be done with a massively parallel computer, which stores data in analogue format instead of digital.

But I still *agree* with you that it's possible to produce "intelligent," even "sentient" computers, and I also agree that it's going to take *massively parallel computers* (note the plural!). It is *not* clear that they have to *physically* model neurons; in fact it's easy to argue that neurons are unbelievably poorly designed and could be improved dramatically. But what we are clueless about is how they work together, to the point of not even being able to figure out how to manually program them *except* by letting them learn on their own (where they are actually quite useful today: they're used extensively in pattern recognition and visual sensing applications). But until we figure out how to get to the next level, we've only gotten to forming arrowheads while dreaming of going to the moon: it's massively speculative, and so speculative it's pointless. It's very much like Neanderthals trying to conceive of transistors or orbital mechanics. We'll get there, but what's a lot more interesting, and useful, right now is just getting to these next steps.

Remember the perspiration part,
Buffy
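[Editor's aside: the "letting them learn on their own" idea Buffy mentions can be illustrated with the simplest possible neural network: a single perceptron that learns the logical AND function from examples, rather than being handed the rule. This is a minimal classroom sketch, not the research-grade techniques the post refers to.]

```python
# Train a single perceptron on labelled examples of the AND function.
# The weights are never written by hand; they emerge from the
# error-correction loop.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x1       # nudge weights toward the
            w[1] += lr * err * x2       # correct answer
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

The program that does the nudging is, of course, still explicitly written - which is exactly the "matter of degree" argument made earlier in the thread.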
kalesh Posted March 6, 2007 (Author)

This better blow your mind 'cause it blew mine. Ready? There's going to be a new kind of intelligence. Analog intelligence has been evolving for millions, maybe even billions of years. Up until not too long ago, there was never digital intelligence here on Earth. We made it up. When you think about the evolutionary paradigm of intelligence, you can clearly see that there is evidently going to be something new that will spawn from the digital and the analog.

Ok, my bad - so it will be both digital and analogue.

AI WILL BE SMARTER THAN HUMAN BEINGS. The evolution is inevitable!

Isn't that what this thread has been saying from the start?
kalesh Posted March 6, 2007 (Author)

Well, I never thought this thread was going to get so long, and I never imagined anyone analyzing my comments. However, it was interesting to read all of your comments and replies. There seems to have been some difference between what I was thinking and what came out as typed text, and this has led to some confusion. Let me see if I can get to the bottom of this.

Quite interesting, Kalesh. Lemme see... Once again, the problem with us not being able to emulate the human brain yet is that we don't completely understand the human brain. For instance, this requirement of yours of being creative without being programmed to be might be flawed. You see, creativity might be hardwired in humans. It might have been a necessary trait years ago, when you were still living in a cave and were confronted with an enormous mammoth. Your tribe will starve if you don't kill the mammoth, but seeing as you're a heck of a lot smaller, you have to come up with some amazingly creative way of killing it. Creativity might be hardwired (in humans, at least), in which case it will be perfectly allowable for it to be programmed into some conceivable brain-emulator computer (once we understand not only the brain, but creativity as well!).

I didn't mean entirely without being programmed. A computer could be programmed to be creative, but it cannot be told how to be creative for each and every problem. For example, a programmer cannot write its logic as "when mode is creative and problem is problem A, do this, then that, then that". The problem with this type of programming is that the computer will only be as creative as its programmer, and can only be creative on the types of problems for which it has been programmed. However, a truly conscious computer can be creative at any problem and can also be more creative than its programmer.

Both humour and emotion are responses to the social environment. The Brits have a completely different sense of humour than, say, the Finns. It's a social thing, and once a machine can learn, it can probably learn what's funny and what's not. Young kids generally don't laugh at adult jokes, because they don't have a well-developed frame of reference yet and their sense of irony is still to emerge. Emotion, on the other hand, seems to exist on a much lower brain level than humour and is probably also hardwired into the brain. Therefore, emotional responses can be preprogrammed into such a conceivable machine. For instance, a baby a few days old can discern between frowning faces and happy faces without ever having learnt the meaning of the visual input of smiles, frowns, nods, etc. They are universal and ingrained in us. We can therefore program the machine to do the same without breaking any rules.

Once again, it's not the programming but the type of programming. You could program a computer to recognize text, speech, and visuals which indicate emotion, but room must be left for it to learn new emotional situations and for self-improvement. Also, you cannot program it with logic that says "when emotion is anger, increase anger level by 1" or anything similar. You must let the computer decide what it wants to do with the emotions received.

Although I see what you're trying to say, our understanding of the workings of the human mind doesn't necessarily indicate this. We still don't know where, in the flow of waves from neuron to neuron, synapse to synapse, a memory is 'perceived'. Our understanding is way too lacking to make this a prerequisite as well.

Look at it this way: when you are learning something (assume you are in school or college), do you cram entire books, or do you learn and understand the concepts? Naturally you learn and understand concepts, so this is what we should make the computers do, whereas what computers are doing right now is cramming entire books (so to speak).

Although I might be picking nits here, the ease with which humans communicate with each other is also debatable. There are people who can't communicate at all, not because there's anything physically wrong with them, but because of neuroses and paranoia and a host of conceivable mental issues that could be considered, in a hypothetical machine, as malprogramming, or evidence that human intelligence can't be mimicked electronically. Yet those with these kinds of mental issues are still considered to be human, and intelligent. The lines are way too gray for communication to be a prerequisite at the level you described above.

You don't have any problems conversing with your friends and family members, do you? I think not. That's the type of communication I am talking about.

I tend to agree with the analogue vs. digital model, and eventually what we'll end up with is some sort of mixture of both. The interesting thing is that a neuron is probably the slowest switching element imaginable, and although the 'speed of thought' is used in popular culture to describe the fastest speed there is, brainwaves don't propagate much faster than a cruising donkey-cart. Yet, with parallel 'processing' on an analogue machine, the brain outperforms digital machines in a host of ways.

Yeah, on second thought, I believe it will be a combination of the two.

But besides, why exactly would we want to mimic intelligence? Imagine for a second you had such a machine. What good would it be, apart from the coolness and geek factors involved? A machine that sucks at calculation, whose actions can't be predicted, etc.? What, exactly, would the use be of such a contraption?

There will be a huge benefit. Firstly, as you have said, neurons are inherently slow; their electrical counterparts, however, are about a million times faster. So what we end up with is something that can think and solve problems a million times faster than any human. Also, since its actions are unpredictable, all you have to do is give it a problem and let it "think" it through, and it will usually come up with answers no human could even dream of. If its actions were predictable, it would only come up with answers as good as its programmer's, which doesn't leave much room for improvement.

Sorry if I came across as a bit anal in the analysis of your prerequisites, but I find it very interesting.

No prob, mate, it was interesting reading your analysis.

Your definition is awfully vague. Most of these items depend on a judgement call where different people will make different judgements.

Please see the analysis above.
Buffy Posted March 6, 2007

Please see the analysis above.

While Boerseun was responding to the same issues, he posed an entirely different set of questions. You might want to go back and read my post, because your response to him does not deal with much of what I was asking about... Thanks!

Buffy