Annoying Twit Posted January 7, 2007

Hello all. What would people say is a robotics/AI-proof definition of "consciousness"? By "robotics/AI-proof" I mean a definition which couldn't be satisfied by a computer system that is too simple to be reasonably described as "conscious". For example, here's the first paragraph of the Wikipedia page on consciousness:

> Consciousness is a quality of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment. It is a subject of much research in philosophy of mind, psychology, neuroscience, and cognitive science.

If we look at some of these attributes, then at least some could be satisfied by very simple systems. For example, a robot able to plan its path given knowledge of its own size and shape would be able to perceive a relationship between itself and its environment. It might also be "self-aware" in a way, if it's able to construct plans with itself as an agent. But I personally wouldn't want to say that was consciousness. Would others agree? And if so, what should a computational definition of "consciousness" be?
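The "trivially self-aware" planner described above is easy to make concrete. Here is a minimal sketch (the grid, the 2x2 robot footprint, and the breadth-first search are my own illustrative choices, not from the post): a robot that plans a path while accounting for its own size technically "perceives the relationship between itself and its environment", yet nobody would call it conscious.

```python
from collections import deque

# A toy grid world: 0 = free, 1 = obstacle. The "robot" occupies a
# 2x2 footprint, so the planner must reason about the robot's own
# size relative to the environment.
GRID = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
]
SIZE = 2  # the robot's footprint is SIZE x SIZE cells

def fits(r, c):
    """True if the whole footprint is inside the grid and obstacle-free."""
    if r < 0 or c < 0 or r + SIZE > len(GRID) or c + SIZE > len(GRID[0]):
        return False
    return all(GRID[r + i][c + j] == 0 for i in range(SIZE) for j in range(SIZE))

def plan(start, goal):
    """Breadth-first search over footprint positions; returns a path or None."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if nxt not in seen and fits(*nxt):
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

path = plan((0, 0), (3, 3))
```

The planner uses a model of its own shape as an input to search, which satisfies the Wikipedia wording while clearly being nothing more than graph search.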
Tormod Posted January 7, 2007

I agree that consciousness requires more than functioning as an agent. IMHO consciousness is an emergent trait in living entities, and as such I think a requirement for AI might be AL (artificial life). The relationship between "itself" and "the environment" will probably have to go beyond mere visual and auditory perception, but I am not sure that a sense of self is necessary. It's a very interesting topic!
Buffy Posted January 7, 2007

Great topic Twit! One of my faves. I will tell you that I don't think we have a good enough handle on the definition of the words wiki is using to define consciousness, and as a result, no, we can't define "consciousness." Can you define sentience? How do you demonstrate "self-awareness?" All of them are so subjective that they come under the rubric of Justice Stewart's "I can't define it but I know it when I see it."

We're a lot further along now technologically than we were when Alan Turing broached the subject, but that's not saying much. He basically seemed to say: the only way you can tell is if you sit down in front of it and a real intelligence and can tell no difference.

As a practical matter, my personal belief is that the implementation of "intelligence" will be far simpler than we think, but that creating it is an evolutionary process (as in letting neural networks build themselves) rather than an algorithmic one.

Fuzzily,
Buffy
Annoying Twit Posted January 7, 2007

> Great topic Twit! One of my faves. I will tell you that I don't think we have a good enough handle on the definition of the words wiki is using to define consciousness, and as a result, no, we can't define "consciousness." Can you define sentience? How do you demonstrate "self-awareness?" All of them are so subjective that they come under the rubric of Justice Stewart's "I can't define it but I know it when I see it."

It's an interesting exercise to get people to define what "learning" is, which seems on the surface a much easier definition. They quickly run into problems, either leaving out things that we'd call learning, or including things (such as a car engine "running in") which we wouldn't call learning.

> We're a lot further along now technologically than we were when Alan Turing broached the subject, but that's not saying much. He basically seemed to say: the only way you can tell is if you sit down in front of it and a real intelligence and can tell no difference.

It's always nice to see a correct description of the Turing Test. It's a personal bugbear of mine that many people seem to think you only talk to a single agent, and the question is whether it's a human or a machine. This leaves out the game-playing side of the test, where the human will be encouraged to use the full scope of their intelligence if they think the machine is likely to "beat" them.

> As a practical matter, my personal belief is that the implementation of "intelligence" will be far simpler than we think, but that creating it is an evolutionary process (as in letting neural networks build themselves) rather than an algorithmic one.

Oh dear. Again I have chosen my moniker appropriately. While I have a long history in AI, I'm, shall we say, less than impressed by neural networks. As a learning technique they are, I believe, rather weak.
The problems they solve are limited to simple problems that can be expressed in a zero-order logic (*). And they are not able to make use of background knowledge, or of meta-knowledge instructing them how to learn. They try to learn everything from a near-true "scratch", which I do not believe humans do. Modifications to neural networks to make them learn in these ways, I believe, change them so much that they are no longer neural networks. I think that neural networks are actually a dead end in learning research, and that they will eventually die out.

They are popular, and not only because people think it's "cool" to simulate the human brain or suchlike, but also because they do work quite well on simple problems, and there are a lot of simple problems around. But while the current huge amount of effort goes into NNs, less work is being done on the research to invent and develop techniques that will solve the more difficult, more "intelligent" problems. However, even for the simpler problems that NNs do well, I still prefer what are called Support Vector Machines for learning.

You mention NNs "evolving" themselves to find better solutions to problems. I'm not sure if you are referring to neuro-genetic systems here, but these are systems that use evolutionary computing techniques to optimise their weights to solve problems. Or, in simple language, they sort of search around for a good solution to a problem. With support vector machines, the learning problem is transformed into a form that can be solved optimally using mathematical optimisation techniques. That doesn't mean they always do better than neural networks on every problem, as something may be lost in the transformation. But if someone put a gun to my head and forced me to say whether I thought NNs or SVMs would work better on an undescribed problem, I'd go for the SVM any day.
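The "search around for a good solution" flavour of neuro-genetic learning mentioned above can be sketched in a few lines. This is a deliberately naive mutation-and-selection hill-climber over the weights of a single neuron learning logical AND; the task, mutation scheme, and parameters are illustrative assumptions, not a description of any particular system.

```python
import random

# Training data for logical AND: inputs -> desired output.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(weights, inputs):
    """A single threshold neuron: two input weights plus a bias."""
    w1, w2, bias = weights
    return 1 if w1 * inputs[0] + w2 * inputs[1] + bias > 0 else 0

def error(weights):
    """Number of misclassified training examples (0 = solved)."""
    return sum(predict(weights, x) != y for x, y in DATA)

def evolve(generations=500, seed=0):
    """Mutate the weight vector at random; keep mutations that are
    no worse. This is 'searching around' rather than deriving the
    weights analytically."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(3)]
    for _ in range(generations):
        child = [w + rng.gauss(0, 0.3) for w in best]
        if error(child) <= error(best):
            best = child
    return best

weights = evolve()
```

Contrast this with an SVM, where the same linearly separable problem would be solved as a convex optimisation with a guaranteed optimum rather than by stochastic search.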
If anyone looks at the book "Machine Learning: An Artificial Intelligence Approach", vol. 1 (Michalski, Carbonell, Mitchell), compares the kinds of problems people were trying to solve over 20 years ago with the kinds of problems people are trying to solve with neural networks nowadays, I suspect my point will be made.

(*) Note that it's possible to convert simple problems typically expressed in higher-order logics into zero-order form, such as the fixed-length vectors of numbers typically passed to neural networks as input. But the methods for doing this completely break down when more complicated problems are attempted: the kind of complicated problems that humans solve.
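The footnote's conversion from relational descriptions to fixed-length vectors (often called "propositionalisation") can be illustrated with a toy example. The family facts and the choice of features below are mine, purely for illustration; the point is how the flattening discards relational structure a first-order learner could use.

```python
# Relational (first-order) facts: parent(X, Y) pairs.
FACTS = {("ann", "bob"), ("bob", "carl"), ("ann", "dee")}

def to_vector(person):
    """Flatten one person's relational context into a fixed-length
    zero-order feature vector:
    [number of children, number of parents, is_grandparent]."""
    children = [c for p, c in FACTS if p == person]
    parents = [p for p, c in FACTS if c == person]
    all_parents = {p for p, _ in FACTS}
    is_grandparent = int(any(c in all_parents for c in children))
    return [len(children), len(parents), is_grandparent]
```

Here carl and dee receive identical vectors even though their relational situations differ (carl is ann's grandchild, dee is her child); that loss of structure is exactly what the footnote is pointing at.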
Buffy Posted January 7, 2007

> Oh dear. Again I have chosen my moniker appropriately.

I'm not convinced of that yet!

> While I have a long history in AI, I'm, shall we say, less than impressed by neural networks. As a learning technique they are, I believe, rather weak. The problems they solve are limited to simple problems that can be expressed in a zero-order logic. And they are not able to make use of background knowledge, meta-knowledge instructing them how to learn. They try to learn everything from a near-true "scratch", which I do not believe humans do. Modifications to neural networks to make them learn in these ways, I believe, change them so much that they are no longer neural networks. I think that neural networks are actually a dead end in learning research, and that they will eventually die out.

I'll disagree with your definition of neural nets, then! I agree that people have gotten way too wrapped up in *exactly* modeling the human brain, but it is obvious that "consciousness" can arise from nothing but a really hairy NN, so I would not dismiss them so definitively! Your description is accurate as to the current state of the technology, but right now we're just "frobbing the frobnitzes": we have no idea what we're doing, or how to implement these things. The "from scratch" issue is the biggest one at the moment: we have to "learn how they learn" in order to get the evolutionary steps to start working better. Once that happens, it's my opinion that "using world knowledge" will happen rather straightforwardly, simply as a "place to get your random genetic changes" for your goal-seeking.

> But, while the current huge amount of effort goes into NNs, less work is being done on the research to invent and develop techniques that will solve the more difficult, more "intelligent" problems.

True, but I don't think we even know where to start, so all this random work right now is hardly a waste.
You gotta walk before you can run.

> You mention NNs "evolving" themselves to find better solutions to problems. I'm not sure if you are referring to neuro-genetic systems here, but these are systems that use evolutionary computing techniques to optimise their weights to solve problems. Or, in simple language, they sort of search around for a good solution to a problem.

Weights are only the start; you gotta do algorithm changes too if you're gonna get anywhere. I disdain the use of some of these terms as people making distinctions just to make what they've found seem important, when no one knows what's important yet. Neural nets, evolutionary/genetic computing, I don't care: we're going to be better off if we just explore for a while and accept that our definitions and taxonomies are going to change radically as things become clearer.

Seeking global maximums,
Buffy
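The point above that evolving weights alone is not enough can be made concrete: a neuroevolution "genome" can also carry the architecture, so a mutation operator can change structure as well as weights. The representation and operators below are illustrative assumptions (selection and fitness evaluation are omitted), not any real neuroevolution library.

```python
import random

def random_net(rng, n_inputs=2, n_hidden=1):
    """A one-hidden-layer net stored as plain weight lists (the genome)."""
    hidden = [[rng.uniform(-1, 1) for _ in range(n_inputs + 1)]  # +1 for bias
              for _ in range(n_hidden)]
    output = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]   # +1 for bias
    return {"hidden": hidden, "output": output}

def mutate(net, rng, p_grow=0.1):
    """With probability p_grow, change the *architecture* by adding a
    hidden unit; otherwise just jiggle the existing weights."""
    hidden = [row[:] for row in net["hidden"]]
    output = net["output"][:]
    if rng.random() < p_grow:
        n_inputs = len(hidden[0]) - 1
        hidden.append([rng.gauss(0, 0.1) for _ in range(n_inputs + 1)])
        output.insert(-1, rng.gauss(0, 0.1))  # new output weight; bias stays last
    else:
        hidden = [[w + rng.gauss(0, 0.2) for w in row] for row in hidden]
        output = [w + rng.gauss(0, 0.2) for w in output]
    return {"hidden": hidden, "output": output}
```

The structural mutation keeps the genome consistent (one output weight per hidden unit), which is the bookkeeping that makes "algorithm changes" as searchable as weight changes.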
jungjedi Posted February 12, 2007

Whereas we can wonder what it would mean to go back in time to kill Hitler and that world line of causation, could a robot go back in time and kill a TRS-80? And what would it mean for that robot? It might be good for that robot to kill his grandfather; there could have been bad blood between them. Like, that TRS-80 might have always discounted what the robot had to say and always held him back. It's like that TRS-80 was jealous that the robot was young and could get women-robots, and that TRS-80 just got to sit around watching the ice cubes melt, so it was always belittling his basic programming and coming up with snide remarks all the time... WELL GET SOME VIAGRA YOU OLD FOOL... and you still won't make it with the lady-robots. I don't know why you had to talk to liz-robot that way. Sure, I only come around here for the robot-money, but I swear sometimes you can't pay me enough to put up with you. All you do is watch Dr. Quinn, Medicine Robot and Walker, Texas Robot. You're boring me to death with your stories about back in the day when everybody had to do things in Fortran.
Boerseun Posted February 12, 2007

> Whereas we can wonder what it would mean to go back in time ... back in the day when everybody had to do things in Fortran

Now that was a particularly dumb post...