CraigD Posted February 10, 2007

"Thinking" technologies, or even AI robots in a possible future, more like mirror the human intellect. However, such a level of development would require humans to achieve more evolved, holistic progress, being observers to such observed technology.

I’m leery of the suggestion that an essentially technological task – true “thinking” artificial intelligences – depends much on human culture’s preference for holism vs. monism, reductionism, or whatever.

Consider this idea, popular in 1970s discussions of AI (I first encountered it in print in the 1981 anthology “The Mind's I”, but had discussed it at least 4 years earlier): assume an entirely conventional but very large and fast digital computer is programmed to accurately simulate the chemistry and anatomy of a living human body, including the brain. Also assume it simulates the (far easier) stimulation of this virtual human’s senses, to provide an ordinary, mundane sense of external reality.

Even though the programmers understand nothing more than the small-scale simulation of cells and cellular organelles, such a simulation would, in principle, behave in every way like a human being. A conversation with it would be just like one with an ordinary person. An examination of the state of the virtual being would match what one measures of an ordinary person using various medical imaging techniques. Even though the programmers may have little understanding of the many physiological and mental behaviors of a human being, the simulation they write would emulate them perfectly.

There are major disputes in the AI community and literature over whether such a simulation is, in principle, possible – I discuss them briefly in ”Reductionism, empiricism, strong AI and Mysterianism”. If the "strong AI" school is correct, then it’s quite possible that a true artificial human could be created in silico by people who don’t, in a profound way, fully know what they are doing.
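The "behavior emerges from small-scale rules the programmer understands" argument can be illustrated with a toy sketch (my example, not CraigD's): in Conway's Game of Life, the programmer writes only a local birth/survival rule per cell, yet large-scale patterns like the travelling "glider" emerge without being programmed in anywhere.

```python
from itertools import product

def step(live):
    """Apply Conway's local birth/survival rules once to a set of live cells."""
    counts = {}
    for (x, y) in live:
        # Each live cell contributes to the neighbor count of its 8 neighbors.
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    # A cell is alive next step if it has 3 neighbors, or 2 and was already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider pattern.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

# After 4 steps the whole pattern has moved one cell diagonally --
# a large-scale fact implied by, but nowhere stated in, the local rule above.
state = glider
for _ in range(4):
    state = step(state)

assert state == {(x + 1, y + 1) for (x, y) in glider}
```

Nobody "programmed" the glider to travel; it is a consequence of the cell-level rule, just as (on the strong-AI view) human-level behavior would be a consequence of an accurate cell-level simulation.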
rocket art Posted February 11, 2007

Assume an entirely conventional but very large and fast digital computer is programmed to accurately simulate the chemistry and anatomy of a living human body, including the brain. Also assume it simulates the (far easier) stimulation of this virtual human’s senses, to provide an ordinary, mundane sense of external reality.

I guess an accurate but artificial simulation may not be comparable with the elegant, microcosmic mechanisms of our organic structures. There may yet be profound principles that differentiate between data delivered via electronic wiring and solar photonic data absorbed by organic molecules. Perhaps the most a computer can do, given conventional technology, is mimic the logic processing of the human brain; even an advanced computer is incapable of composing poetry.

But I believe that such an advancement – a 'conscious' AI – may occur when this space-time dimension leaps to a higher level, as the conscious observer evolves and the observed (physical) evolves holistically with it (all things being 'conscious'). This may require a leap at lightspeed beyond the existing dimension of c^2, as my Rocket's Theory may imply.
Kriminal99 Posted February 12, 2007

No motivation, or no trust in their motivation

Computers have no motivation, so they require us to constantly alter their programming to meet our goals. In order to do this we must understand what we are trying to accomplish and how to go about it, and all that is left to the computer is simple but tedious computation. To avoid this, we would have to give them motivations similar to our own – which would essentially be creating an AI – or do as you say and give ourselves the computational ability. I think it is better if they are left as a computational tool.

What if I told you, "Hey man, you don't have to think. Just do everything I tell you"? If you don't trust me that much, then you shouldn't trust a computer that thinks like me either. So no, I think people should always advance their own computational abilities. I don't see why implanted calculators shouldn't be considered thought, though. Even if we completely altered ourselves into cyborgs I would still consider it thought – but then I guess that depends on whether you believe in things like the soul (which I do not).

Thought defeating thought

Either you trust someone or something to think for you, or you think for yourself. Which is it?

Infinity

Our concept of infinity is cobbled together from our experiences of a discrete and finite environment (even if "finite" only means the portion of an infinite world we are able to perceive) – a function of the discrete and finite used to model the infinite. Any understanding or attributes of infinity we claim to have were simply derived from the function we used to construct it from those experiences. If infinity is real, we have no experience of it. But then again, if infinity were real we could not experience it, because we would die first.
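The "finite functions used to model the infinite" point has a concrete counterpart in how mathematics is actually computed (my illustration, not Kriminal99's): everything we can do with the "infinite" sum 1/1² + 1/2² + ... is a finite procedure, a partial sum that approaches a limit (here π²/6) it never reaches.

```python
import math

def partial_sum(n):
    """Sum the first n terms of the series 1/k^2 -- a strictly finite procedure."""
    return sum(1.0 / k**2 for k in range(1, n + 1))

limit = math.pi ** 2 / 6  # the limit the infinite series is defined to have

# Every finite approximation falls short of the limit; no finite n attains it.
for n in (10, 1000, 100000):
    assert partial_sum(n) < limit

# Yet the finite procedure gets us arbitrarily close (within ~1/n of the limit).
assert abs(partial_sum(100000) - limit) < 1e-4
```

Whether this supports or undercuts the philosophical claim is left to the thread, but it shows literally what "a function of the discrete and finite used to model the infinite" looks like.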