AnssiH Posted May 16, 2009

"Now you see I wouldn't have said that! I would say that the design of the program which yielded that ability to interpret input was the artificial part, not the world view you think is valid! I think that is exactly why all the people who work on AI miss the issue."

Well, to be fair towards those people, I don't think Modest's comment represents the general perspective among serious AI research, at least not the perspective held by people who are concerned with so-called "strong AI" (as opposed to "video game AI" or some other very limited-domain decision making). I mean, at least they have some idea about the nature of the problem. At least Marvin Minsky agreed quite immediately to the assertion that a general learning system cannot begin with ANY explicit "knowledge" about the world it is building a conception of, as otherwise its consequent worldview is always constrained by that "knowledge", and that sort of constraint does not exist with humans. And thus there at least exists some discussion about the problem of creating a "general learning mechanism that begins with an absolutely empty knowledge base". But to my knowledge no one has solved it to the extent that you seem to have. Anyway, my commentary about the "nature of semantics" has to do with exactly the fact that we do not have that constraint of "knowing about reality explicitly". And btw Modest, you should know perfectly well that "strong AI" or "general AI" is exactly what was being discussed :Glasses:

Oh, something else I also wanted to comment on to others. Remember when I said: "There are always all kinds of arguments that may seem rational, until you realize their validity hinges on the specific definitions of what constitutes those other "things" that the argument rests on. Often the participants miss the fact that when all is said and done, there is no one else but us to define what is meant by "x" (and all associated definitions); they miss the fact that there is no (ontologically) correct answer to be found anywhere."

I said that in the context of "arguments regarding 'what is life?'", but it really applies to any argument about any aspect of ontology, and Boerseun just wrote a post to the "Assertion of an absolute now" thread that is a perfect example of exactly that kind of "seemingly rational" argumentation: http://hypography.com/forums/philosophy-of-science/19450-assertion-absolute-now-what-spacetime-really-20.html#post265162

He is describing a worldview that is in all likelihood quite valid prediction-wise. But what is missed completely is that it's not really the individual definitions of that worldview that can be considered "true"; it's rather the exact logical relationships between those defined entities that are the essence of the worldview. As long as you just consider how the relationships between defined entities are valid, you can also understand that it is quite possible to change the definition of any given entity, as long as you change the definitions of all the related entities so as to preserve the exact same relationships. In other words, there exist quite many arbitrary choices when it comes to all sorts of questions of "what ontologically constitutes a thing", i.e. you have many ways to answer that question which will all allow you to end up with the same exact relationships.
In other words, after such a transformation of your worldview (from one self-coherent set to another self-coherent set), you are still perfectly aligned with the same exact raw data, discussing the same exact consequences and drawing equally accurate predictions. The only difference is that you are discussing and thinking about those predictions in terms of completely different entities and concepts. For practical examples, think about any single scientific breakthrough in the history of the world.

And that is just another way to understand what I mean by a "semantic worldview", and how it is a consequence of NOT knowing the explicit meaning of the raw data.

This issue, when you really get down to the details, goes far deeper than you can trivially comprehend in your mind (meaning, it is usually not trivial to understand reality "in terms of relationships", and so be aware of the arbitrary facets of your conception of a given situation. It is much easier to just think about the behaviour of your defined entities in a more or less individual and straightforward manner). But it should at least be trivial to understand how something like the definition of relativistic simultaneity is related in a fairly straightforward manner to the definition of "geometry", to the definition of c, to time dilation, etc. If you understand those relationships in this immaterial manner, you can surely also understand that simultaneity can be defined to be absolute, as long as you know how to change the other definitions so as to preserve the relationships between things absolutely intact. And that is exactly what DD is talking about in those threads. And it is exactly these sorts of "not immediately obvious" relationships in general that are being investigated and uncovered by the epistemological analysis.

-Anssi
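To make that simultaneity example concrete, here is a minimal Python sketch (a toy illustration, not anything from the thread itself), assuming units where c = 1 and using Tangherlini-style "absolute simultaneity" coordinates as one standard example of such a redefinition; the velocity and rod length are arbitrary illustrative values:

```python
import math

v = 0.5                       # lab-frame speed of the moving frame (c = 1)
L = 1.0                       # rod length in the moving frame's own coordinates
g = 1.0 / math.sqrt(1 - v*v)  # Lorentz gamma factor

# Lab-frame events for a light pulse bouncing along the rod:
# emission at the back end, reflection at the front end, return to the back.
t1 = L / (g * (1 - v)); x1 = t1          # reflection event
t2 = 2 * t1 / (1 + v);  x2 = v * t2      # return event

def lorentz(t, x):
    """Einstein-synchronized coordinates: simultaneity is relative."""
    return g * (t - v * x), g * (x - v * t)

def tangherlini(t, x):
    """'Absolute simultaneity' coordinates: t is rescaled but never mixed with x."""
    return t / g, g * (x - v * t)

for name, f in [("Lorentz", lorentz), ("Tangherlini", tangherlini)]:
    one_way = f(t1, x1)[0]    # coordinate time assigned to the reflection
    round_trip = f(t2, x2)[0] # moving clock's reading at the return event
    print(f"{name:12s} one-way time = {one_way:.4f}, round trip = {round_trip:.4f}")

# Both conventions agree on the observable round trip (2L = 2.0) but disagree
# on the unobservable one-way time: the "definition" of simultaneity changed
# while every measurable relationship survived intact.
```

Running it shows the one-way times differing (1.0 vs 1.5) while the round-trip reading is 2.0 in both conventions, which is the sense in which the definitions can be swapped as long as the relationships are preserved.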
Qfwfq Posted May 16, 2009

"But, when you really get down to facts, evolution may have built a machine which can solve the problem, but can you honestly profess to believe that the world view itself together with all the definitions behind that world view exist within that fertilized egg?"

Yup, that's exactly what I said that you said, but you seem to have missed my joke of putting quotes around the words "few months". :lightning It means that, by the time the egg has been fertilised, you can no longer say "...to create a world view based entirely on totally undefined information (the actual impact of reality upon you) transformed by a totally undefined transformation (your senses)", simply because the transformation has already been defined (DNA) and thus the information already has some meaning to start with. You can still say the deed has been done, but certainly not "in a matter of a few months"; the actual task of each child during those first months is less impressive. At the very least, sensations are already pleasant or unpleasant, to start with.

"I feel that an AI device should be able to create a world view on its own or it has not really achieved AI."

I once thought that myself! :Glasses: A few years ago I read about research according to which, as soon as a baby begins to see, it is already put at ease by a smile, at unease by a frown and frightened by an angry face. It also seems that baby girls pay much more attention than baby boys to whatever faces they are shown and to toy dolls, while it's the other way round with mechanical toys; quite contrary to the progressist creeds of many modern parents who have tortured their children with gifts regardless of sex. Even Darwin, way back then, reasoned on hereditary behaviour being selected in nature; an example he mentions is what chicks automatically do when the mother hen gives the danger clucking. Despite lack of any prior individual experience or training, they hide wherever they can and the hen can escape relieved of concern for them.

We should therefore say that Nature, in its eons of research into NI, has "inserted the solution (that world view itself) into the DNA (program)" of each fertilised egg; therefore nature hasn't really achieved NI and we are not really as intelligent as we think. No wonder we dumb apes can't succeed in achieving AI without trying to do the same.
AnssiH Posted May 16, 2009

Certainly there is reason to believe that some of our behaviour is hereditary. It's not just the chicks; we also get startled by sudden noises or visions without consciously choosing to do so. That's already an action that is probably implanted in us at birth for survival reasons; it takes time to analyze what was being heard, but generally unexpected loud noises are a reason to react (run, dodge, whatever).

But, when we get to the part where we actually try to analyze what is being heard/seen/felt, there are a lot of indications showing a lot larger plasticity in the brain than most people intuitively expect. For instance, all the artificial sensory systems, whose data people learn to interpret as a "visual view" as long as that information is somehow embedded into the data. There are systems that stimulate the brain directly, there are systems that stimulate the nerves on the tongue, and there are systems where visual information is embedded into a sound that the person listens to and learns to interpret as visual information.

Artificial Vision for the Blind - Brain Implant? Bionic Eye?
What is it like to See with your Ears?

But that shouldn't be too surprising, given that we all can learn to interpret languages and all kinds of things "without conscious effort" after sufficient training. Just learning to ride a bike is an incredibly complex procedure in terms of transforming sensory data into muscle responses, and it can't be a hereditary function as our ancestors did not have bikes... Yet we don't have to concentrate very hard in order to perform that action.

At any rate, becoming able to interpret artificial sensory data meaningfully is clearly a case of re-building your worldview from a very low level. That is already a case of "modeling your senses", or of them being "an open parameter to your worldview". Another indication of that is the same old observation that if we wear goggles that flip our visual view upside down, we eventually become so used to it that consciously we experience a normal view... ...until we take the goggles off and then it's time to re-train the brain all over again. Of course the subjective experience regarding the "orientation of the visual view" would be quite immaterial and could be random (or meaningless in the sense that no such awareness would exist), if it wasn't for the data from other senses, and consequently other ideas in our worldview, that gave some meaning to "up" or "down" in terms of interpreting visual data. After all, you are not a little man in your brain watching a TV; you are not "upside down" or "sideways" or in any other orientation in there.

Well, that should suffice in explaining what is meant by us "modeling our senses", but more examples of the general plasticity of the brain keep popping into my mind. This is getting more and more off topic, but hey, maybe you guys find it interesting... There are cases where a person has lost their language abilities due to damage to the language areas in the left hemisphere, but has learned to talk again using their right hemisphere. Apparently we are the only animal not to have completely identical brain halves in terms of what sorts of functions they seem to perform. For all the other animals, the left and right hemispheres are basically a redundancy system, and the organism can still survive after losing parts of its brain, even after losing an entire half.
And even though for humans it is not a complete redundancy system, we can learn to use different areas to perform different functions if needed, and there actually are cases of someone losing the other hemisphere entirely but still continuing to live more or less normally. I actually found an account of such a case:

YouTube - Brain Plasticity http://www.youtube.com/watch?v=TSu9HGnlMV0

At any rate, sure, we should expect that evolution has found some shortcuts that are beneficial for survival; and I suspect less intelligent animals have got more of these shortcuts, and that is also why they can learn to walk, run and survive in nature a lot faster than human infants. Humans, on the other hand, are capable of understanding the world in very many different ways, starting from very low-level ideas (case in point: this thread), and I suspect that is because our brain forms its worldview from very low-level information in an open-ended fashion. It is only to be expected that that sort of learning procedure takes considerably longer than the ~60 minutes it takes for a zebra to learn to run from the lions... :Glasses:

-Anssi
modest Posted May 16, 2009

"The artificial part of AI is the world view we give it. If it evolves its ability to interpret sensory input itself then it's just a naturally evolved intelligence—not so much artificial."

"Now you see I wouldn't have said that! I would say that the design of the program which yielded that ability to interpret input was the artificial part, not the world view you think is valid! I think that is exactly why all the people who work on AI miss the issue."

Just for clarification, Kant believed there could be no understanding of the natural world without the a priori concept of space and time in our program—if we give such an understanding or such a framework to an AI, would we consider that to be giving it a world view, or giving it "the design of the program which yielded that ability to interpret input"? You contrast the two in your quote above, but I don't know where you draw the line between them, or why for that matter they should be mutually exclusive concepts.

~modest
AnssiH Posted May 16, 2009

"Just for clarification, Kant believed there could be no understanding of the natural world without the a priori concept of space and time in our program—if we give such an understanding or such a framework to an AI, would we consider that to be giving it a world view, or giving it "the design of the program which yielded that ability to interpret input"?"

That would be a very clear case of "giving it a world view". If its world view was based on a specific definition of space and time, it could not juggle different sorts of ways to understand space and time, like we can. (That is, those AI systems would not find themselves bickering about space and time in these forums :Glasses:)

"You contrast the two in your quote above, but I don't know where you draw the line between them, or why for that matter they should be mutually exclusive concepts."

The point from the perspective of AI is that if you start the knowledge base from some pre-defined set, then that pre-defined set is not subject to further iteration, and the further learning and accumulated knowledge is a function of whatever knowledge you happened to set in the beginning. On the other hand, when the world model is built in an open-ended fashion, the system can, when required, shift its entire perspective on "what reality is like". Just like we are able to shift our perspective (with enough effort), and of course this has been quite useful many times in the history of the world (be it relativity or quantum physics or string theory).

DD's analysis is an explanation of how the familiar conception of space and time forms inside a world view without having any pre-defined notions about such things, i.e. how that sort of view arises from the symmetries that the prediction function must obey. As to "why" that sort of view arises, I would also be inclined to say that it indeed seems to be a very cost-efficient (read: "simple") way to keep track of complex data (relevant to survival).

-Anssi
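A toy sketch of that pre-defined-versus-open-ended distinction (assuming NumPy; the data, the 10% improvement threshold, and the polynomial hypothesis spaces are arbitrary illustrative choices, not anything from DD's analysis): a learner locked to a fixed hypothesis space hits an error floor it can never escape, while one whose hypothesis space is itself open to revision keeps improving.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 200)
y = np.exp(x) + rng.normal(0, 0.05, x.size)  # "raw data" outside the fixed space

def fit_error(degree):
    """Mean squared error of the best polynomial of a given degree."""
    coeffs = np.polyfit(x, y, degree)
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# Learner A: hypothesis space fixed in advance (straight lines only).
fixed_error = fit_error(1)

# Learner B: starts in the same space, but the space itself is open to revision.
degree, open_error = 1, fit_error(1)
while True:
    candidate = fit_error(degree + 1)
    if candidate > 0.9 * open_error:   # stop when enlarging no longer helps much
        break
    degree, open_error = degree + 1, candidate

print(f"fixed learner:      error = {fixed_error:.4f} (an error floor forever)")
print(f"open-ended learner: error = {open_error:.4f} at degree {degree}")
```

The fixed learner's initial choices permanently bound what it can ever represent; the open-ended learner revises the representation itself, which is the rough analogue of shifting an entire perspective on "what reality is like".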
modest Posted May 16, 2009

"That would be a very clear case of "giving it a world view". If its world view was based on a specific definition of space and time, it could not juggle different sorts of ways to understand space and time, like we can. (That is, those AI systems would not find themselves bickering about space and time in these forums :Glasses:)"

Oh, I couldn't disagree more. As Kant argues with his transcendental idealism (with which I agree): we cannot experience objects without being able to represent them spatially. We humans are born with that spatial understanding hardwired. You'll notice that with DD's analysis the first thing he does is to create a geometry on which to map undefined elements. To think that giving an AI the ability to think spatially and temporally would mean that it couldn't bicker about the meaning of space and time is most certainly a non sequitur.

~modest
Doctordick Posted May 16, 2009 (Author)

Thank you Anssi for your attempts to communicate with modest. And I am sorry modest, but I am afraid I just have to disagree with you. As I said, I really didn't want to get into this discussion as it is essentially an attempt to avoid thinking about what Anssi and I are talking about. And Qfwfq, I don't think you have actually thought these issues out.

"It means that, by the time the egg has been fertilised, you can no longer say "...to create a world view based entirely on totally undefined information (the actual impact of reality upon you) transformed by a totally undefined transformation (your senses)", simply because the transformation has already been defined (DNA) and thus the information already has some meaning to start with."

I think that is a false assertion, as even DNA itself is an aspect of your world view. I doubt very much that the fertilized egg has any idea as to what the DNA is doing; that ability does not even arise without the development of a "brain". I actually said what I said because I didn't want to give you that free out. People, and that includes you, are just too quick to use any excuse to avoid thinking about "the problem". But, since you insist, I will try to give my position, though I am quite confident you won't accept it. Thank you Anssi; you are a very singular person in that you see things that others cannot.

"At the very least, sensations are already pleasant or unpleasant, to start with."

Many mechanical devices depend very strongly upon feedback mechanisms to accomplish the tasks they were designed for (essentially a 'proceed' or 'avoid' signal; quite equivalent to 'pleasant' or 'unpleasant'), but I would never be moved to believe this amounted to "intelligence".

"A few years ago I read about research according to which, as soon as a baby begins to see, it is already put at ease by a smile, at unease by a frown and frightened by an angry face."

Again, you are speaking of mere mechanical feedback mechanisms, easily inserted into the simplest of computer programs; Anssi's "video game AI" does this kind of thing all the time, and again, I would not suggest that those video games have even begun the problem of creating a world view. That was all written into them in the program itself.

"It also seems that baby girls pay much more attention than baby boys to whatever faces they are shown and to toy dolls, while it's the other way round with mechanical toys; quite contrary to the progressist creeds of many modern parents who have tortured their children with gifts regardless of sex."

Oh, I am sure you can give example after example of such things. These are all cases of "hereditary behaviour being selected in nature"; perhaps a necessary aspect of any mechanism capable of developing a world view, but clearly not requiring the achievement of a world view. I wouldn't suggest that chickens have a world view, though it is entirely possible they do.

"an example he mentions is what chicks automatically do when the mother hen gives the danger clucking."

Perhaps chickens do have a world view. Even a chicken brain seems to be a complex thing somewhat beyond our understanding; but I very much doubt that the egg I had for breakfast had a world view.

"Just for clarification, Kant believed there could be no understanding of the natural world without the a priori concept of space and time in our program—if we give such an understanding or such a framework to an AI, would we consider that to be giving it a world view, or giving it "the design of the program which yielded that ability to interpret input"? You contrast the two in your quote above, but I don't know where you draw the line between them, or why for that matter they should be mutually exclusive concepts."

The issue is that the input itself must be allowed to have any relationship possible with reality, and an intelligent device would be able to transform that input into a meaningful structure. That is the essence of intelligence right there, and I think Anssi gave a number of very specific cases which give evidence that the human brain can do such a thing. I didn't want to discuss this issue because I didn't want to give anyone that kind of free out. Most people are just too quick to use any excuse available to avoid thinking about "the problem" Anssi and I are talking about. Suppose we just agree to disagree. The real issue should be the solution I have found and, in order to understand that solution, you have to understand the problem.

Have fun -- Dick
AnssiH Posted May 17, 2009

"Oh, I couldn't disagree more. As Kant argues with his transcendental idealism (with which I agree): we cannot experience objects without being able to represent them spatially."

I would say I agree with Kant as well, and likewise I would also say that we cannot have an experience without first having a world view to represent defined objects spatially. I'm just saying that the definitions that yield that 3D-world representation are to a great degree open parameters to our world view, which leads me to two observations:

1. As a result of the 3D world being merely a representation of some data from an unknown reality, we are able to represent that same data with different ideas of space; with different numbers of dimensions, on Euclidean or non-Euclidean spaces, etc. (While not visually in our mind, we are able to understand that such representations are equally as valid as the one we intuitively use.)

2. DD's work is essentially an explanation as to why the mental 3D representation is a very useful way to represent the data, and why it is valid regardless of the ontological nature of the expressed data. And that explanation is valid regardless of the details about how much of that transformation mechanism is hereditary and how much is individually learned.

I said I was going off topic when I started talking about the plasticity of the brain, and I should stress it here as well; whether my ideas about the plasticity of the brain are correct or not, that's not really relevant to the validity of the epistemological analysis. I just think the topic of plasticity is quite interesting, and that there are many indications towards the idea that a lot of the transformation is individually learned... There are a lot more examples that I've come across than what I can readily remember. One more just popped in: if you work with orthogonally rendered scenes on a computer screen long enough (with free movement), the normal perspective-corrected rendering starts to look wrong (objects look distorted), until you get used to it again. And I think it is interesting that we are able to learn to understand patterns from reality spatially even after they have gone through a randomly chosen transformation process.

About AI, I'm not really sure how you meant your question anymore, but what I was trying to say was simply that its world view must be able to adopt different conceptions of space and time, just like we are able to. And I suspect you'd agree with that.

-Anssi
ldsoftwaresteve Posted May 17, 2009

AnssiH: "DD's analysis is an explanation of how the familiar conception of space and time forms inside a world view without having any pre-defined notions about such things, i.e. how that sort of view arises from the symmetries that the prediction function must obey. As to "why" that sort of view arises, I would also be inclined to say that it indeed seems to be a very cost-efficient (read: "simple") way to keep track of complex data (relevant to survival). ... DD's work is essentially an explanation as to why the mental 3D representation is a very useful way to represent the data, and why it is valid regardless of the ontological nature of the expressed data. And that explanation is valid regardless of the details about how much of that transformation mechanism is hereditary and how much is individually learned."

All of the recent disagreements seem to me to be like the guy who is standing next to the pool looking for a reason not to jump in. And in this case, it's the math that needs to be looked at. The water IS cold AND wet and my willy WILL shrivel. Look at the pretty puppy! 1... 2.... uh, where is my sunblock?
modest Posted May 17, 2009

"I said I was going off topic when I started talking about the plasticity of the brain, and I should stress it here as well; whether my ideas about the plasticity of the brain are correct or not, that's not really relevant to the validity of the epistemological analysis."

Yes, I think this tangent has detracted from the topic. I apologize for my part in that. There is one more question, however, that I would ask...

"DD's work is essentially an explanation as to why the mental 3D representation is a very useful way to represent the data, and why it is valid regardless of the ontological nature of the expressed data."

Assuming reality has 4, 5, or 6 spatial dimensions, why would it be useful for me to have a 3D representation or map of that reality? Expecting the world to be 3D when it's actually 4D, I might build a 6-sided box and expect it to hold water when I really needed to build an 8-celled tesseract to accomplish the task. 3D only seems useful in a 3D universe.

~modest
AnssiH Posted May 17, 2009

"Assuming reality has 4, 5, or 6 spatial dimensions, why would it be useful for me to have a 3D representation or map of that reality? Expecting the world to be 3D when it's actually 4D, I might build a 6-sided box and expect it to hold water when I really needed to build an 8-celled tesseract to accomplish the task. 3D only seems useful in a 3D universe."

That is quite an ill-posed question, as you went ahead and assigned "dimensionality" as a feature of ontological reality without considering the epistemological aspects of "dimensionality". That is what most people intuitively do, and it is exactly the blockage that people have as they are thinking about this issue from the wrong paradigm (let's say, when they hold too many undefendable naive-realistic facets in their idea of reality). I don't want to sound like I'd be dodging your question, so let me try and explain the whole issue here.

If you just forget everything you think you know about reality, and consider the task of trying to understand what some unknown patterns mean, there is certainly no objectively defendable dimensionality to be seen there. Now, notice how, prior to defining any "persistent objects", there is no way to assign any meaning to any "motion", and thus no way to assign any meaning to any "space" either.

Note: The fact that the exact meaning of "space" is a function of the exact ways we assign "identities" (= define what features of the data are "persistent objects") was actually my original motivation for starting to view space and dimensionality as something that must be carefully defined by a worldview.

Now, the moment you said "Assuming reality has 4, 5, or 6 spatial dimensions...", you effectively assumed that there exists some specific "ontological identity of objects" (which would correspond to 4, 5 or 6 degrees of freedom for those objects). It is possible that reality is that way and that our definitions just happen to be mis-aligned so that we see it as 3D, but we could never know that, and there is absolutely no objective reason to make such assumptions about reality... FURTHERMORE, in my mind, thinking that "ontological identities" exist is about as probable as the idea that reality is ontologically a set of numbers and math equations... "Identity", like math, exists in our head, and lifting them to ontological status is very much an undefendable idea. (That, I think, is the essence of what Kant was on about.)

The epistemological analysis very much operates on the observation that we have absolutely no ontological information about the identities of reality when we build our conception of reality, and thus we also have no ontological information about the dimensionality of the picture. So we are not really asking whether reality has got "a different number of dimensions than what we are aware of"; we are rather taking that one step further and assuming that dimensionality is something that hinges on the exact definitions that we make in our worldview; the definitions we make so that we can build a predictive world conception at all. This is not to be taken as an assertion about ontological reality; perhaps dimensions exist, perhaps they don't. We are not making any assumptions, simply because we know that the raw data does not give us any objectively defendable answers, and any worldview we come to build is not based on such explicit knowledge. It is absolutely essential to the epistemological analysis that we don't make superfluous assumptions at all.
(And btw, even if you question the need of taking it this far, I'd think the results speak for themselves.)

As an additional commentary, note that because dimensionality hinges on definitions, we can build very many valid ways to define reality with different numbers of dimensions. So then, if you can understand the epistemological issues that we are presenting here, what you would really like to ask is: "If we indeed in our minds build a simple representation of complex data; data which has got no dimensionality embedded in its 'explicit meaning', what makes a 3-dimensional Euclidean picture preferable to other logically valid options?"

Well, I mentioned a cost-effectiveness assessment, so more about that. First, I think it is a quite sound observation that Euclidean space is, in terms of representing information, simply a way to represent parameters that are independent from each other (parameters that we had to define from the patterns). That makes the picture simple, and explains the Euclidean part. That is basically what DD was referring to here: http://hypography.com/forums/philosophy-of-science/19450-assertion-absolute-now-what-spacetime-really-20.html#post265115

Next, the question of why the definitions are such that the magic number 3 is used for dimensionality; what about 1, 2, 4 or 5? I think it is a quite interesting observation that 1- and 2-dimensional pictures are quite limited in terms of what sorts of objects and what sorts of motion they can exhibit (when compared to a 3D picture). Representing some situation in front of you in terms of 1- or 2-dimensional objects would produce simple objects but a generally very complex situation, as there would be very many objects involved (you'd have to track the interactions between very many objects to produce even reasonably accurate predictions). A 3-dimensional picture would allow the same situation to be represented with a lot fewer objects, and yet those objects themselves would be relatively simple. A 4- or 5-dimensional picture would allow you to represent the same situation with even fewer objects, but the individual objects themselves would become increasingly complex to handle.

So in a nutshell: the fewer dimensions there are, the more objects you need; the more dimensions there are, the more complex the individual objects become. I think we should expect a golden middle ground to be found somewhere in there. I have no objective proof for pointing out why that middle ground would be exactly 3 dimensions, but on the other hand I personally don't find it hard to believe that 3 might be just the appropriate number of dimensions in terms of cost-effectiveness. DD talked about that issue more here: http://hypography.com/forums/philosophy-of-science/18619-why-should-universe-appear-three-dimensional.html

-Anssi
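The trade-off AnssiH describes can be caricatured with a toy Python cost model. The functional forms and constants below are made up purely for illustration and are not derived from DD's analysis; the only point is the shape of the curve, in which object count falls with dimensionality while per-object complexity grows, leaving an interior minimum:

```python
# Toy cost model for the "golden middle ground" idea: purely illustrative.
# Assumptions (arbitrary): the number of objects needed to describe a fixed
# scene falls off as 4**-d, and per-object bookkeeping cost grows as 3**d.

def objects_needed(d):
    return 1000 * 4 ** (-d)   # fewer dimensions -> many more objects

def object_complexity(d):
    return 3 ** d             # more dimensions -> more complex objects

for d in range(1, 7):
    total = objects_needed(d) + object_complexity(d)
    print(f"d={d}: objects ~ {objects_needed(d):8.1f}, "
          f"complexity ~ {object_complexity(d):5.0f}, total cost ~ {total:8.1f}")

# With these made-up numbers the minimum lands at d = 3; different constants
# would move the minimum, which is exactly why this settles nothing by itself.
```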
Michael Mooney Posted May 17, 2009

This is the "Philosophy of Science" section and the thread title question is "What can we know of reality?" It is not about whether Doctordick's and AnssiH's perspective is the only one... and everyone else just doesn't get it. My reply to AnssiH in post 313 above, though thoroughly "diss-ed" by the Doc, does in fact address the title question. In part I said: "Glad to hear that you are not an idealist/solipsist. But I don't think you answered my question about to what extent you trust your senses to give accurate information about the 'real world' we perceive."

In my spacetime thread, there was a long period of debate, mostly with Modest, about the "objective" distances (as I call them) between objects... allowing of course that everything is in motion, making those "objective distances" change as things move closer or further apart. But he and those who say (as an absolute, ironically) that "everything is relative" maintain that there is no such thing as "objective distance" between objects... that all distances vary with the relative frame of reference of the point of observation. This "absolute relativity" I called idealism... that distances actually change with observational perspective... as per relativity.

So, likewise, I asked you if you really question whether the tennis ball is "the same ball" at each moment of your observation. Granted, the "atoms" and molecules which give the ball "substance" are in constant motion. But your comment seems to posit that they are not "the same" atoms/molecules from moment to moment. This would seem to suggest that you believe there is no "real ball out there" independent of your perception... and your "definition" of it conferring a persistent "identity" through time as "this particular ball" as distinguished from millions of other ones... or that it blips in and out of existence with each moment of your observation. Please clarify.

So, can we "know" as an actual "fact" that the distance between sun and earth is about 93 million miles (8.3 light minutes), or does that distance actually vary with relativistic frames of reference and velocities of observational perspectives? This is a fair question under the section title and thread title. If the latter, as claimed by "absolute" relativists, then relativity is a form of philosophical idealism, and there is no "cosmos as it is" independent of observation.

Likewise, if language has meaning relevant to the universal experience of all tennis players, then the ball served and the ball returned... and batted back and forth... is the "same ball" until another one is put into play. To say otherwise is ridiculous hyperbole... And to say that the sun and earth move closer and further apart depending on observer velocity and relative frame of reference is also ridiculous hyperbole. To say that since everyone has a different perspective means that there are as many different worlds/cosmi as there are observers of it is also... well, you get the idea... philosophically speaking, of course.

Michael
Turtle Posted May 18, 2009

"This is the "Philosophy of Science" section and the thread title question is "What can we know of reality?" It is not about whether Doctordick's and AnssiH's perspective is the only one... and everyone else just doesn't get it."

In fairness to the Doctor, this is his thread and a further effort on his part to index his ideas in separate threads where some side issue interrupts a discussion. That "we" take it somewhere other than Doc intended is not Doc's fault. (In fact, it vexes him greatly I wager.) Anyway, Don Mahooney, you made a statement I care to comment on. To wit:

"...To say that since everyone has a different perspective means that there are as many different worlds/cosmi as there are observers of it is also... well, you get the idea... philosophically speaking, of course. Michael"

I get the sense you deride this idea and wonder if it is in spite of or because of the "many worlds theory". So, I know nothing about the nuances but thought I'd interject a source in case it's overlooked here or in some way pertains to the good Doctor's OP. :rockon:

Many-Worlds Interpretation of Quantum Mechanics (Stanford Encyclopedia of Philosophy)

"Many-Worlds Interpretation of Quantum Mechanics. First published Sun Mar 24, 2002. The Many-Worlds Interpretation (MWI) is an approach to quantum mechanics according to which, in addition to the world we are aware of directly, there are many other similar worlds which exist in parallel at the same space and time. The existence of the other worlds makes it possible to remove randomness and action at a distance from quantum theory and thus from all physics."
modest Posted May 18, 2009

"That is quite an ill-posed question, as you went ahead and assigned "dimensionality" as a feature of ontological reality without considering the epistemological aspects of "dimensionality". That is what most people intuitively do, and it is exactly the blockage that people have as they are thinking about this issue from the wrong paradigm (let's say, when they hold too many undefendable naive-realistic facets in their idea of reality)."

I am concerned with your assertion that a 3D mental representation (we'll say a 3D map) is useful *regardless* of the ontological nature of the territory. I gave specific examples of ontological entities (a box, a tesseract, space, water... etc.) in order to introduce a counterexample to your assertion—not to assert their validity. It's disheartening, insofar as our ability to communicate, that you didn't get that. I'll give it another go—hopefully in a way not easily misunderstood.

"DD's work is essentially an explanation as to why the mental 3D representation is a very useful way to represent the data, and why it is valid regardless of the ontological nature of the expressed data."

A map is well-made if it works. We cannot compare our map to reality and spot inconsistencies. All we can do is use the map and judge how useful it is. You claim that a 3D map is useful regardless of the ontology that the map represents.

If my worldview is 2-dimensional then I will have certain expectations about the world given that view. I will (for one completely arbitrary example) expect to be able to hide myself in a square such that nothing outside the square could observe me. This worldview will either be useful (if hiding in the square works) or it will not be useful (if hiding in the square does not work). If my map is 3D then I would expect not to be hidden by a square, but rather by a box. This is again useful or not. If my map is 4D then I would not expect to be hidden by a box, but rather by a tesseract. For 5D I'd expect to need a penteract... and so on.

Now, I don't care if the universe is *really* 2D or 3D or 4D, or if there is or is not such a real thing as a dimension. The point is that the 3D map is not useful if a person finds it impossible to hide in a 6-sided box. There exists a logically consistent territory (a 4D territory—as an example) for which our 3D map is not useful.

If anyone wants me to move this conversation to another thread or a new thread then that would be no trouble—let me know.

~modest
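For reference, the progression modest relies on (the 4-sided square, 6-sided box, 8-celled tesseract, penteract) is the standard facet count of an n-cube: 2n facets of dimension n−1 are needed to enclose a point in n dimensions. A quick Python check using the general k-face formula C(n, k) · 2^(n−k):

```python
from math import comb

def k_faces(n, k):
    """Number of k-dimensional faces of an n-dimensional cube: C(n,k) * 2**(n-k)."""
    return comb(n, k) * 2 ** (n - k)

names = {2: "square", 3: "box", 4: "tesseract", 5: "penteract"}
for n, name in names.items():
    # A hider in n dimensions needs all (n-1)-dimensional facets sealed.
    print(f"{name:10s} (n={n}): enclosed by {k_faces(n, n - 1)} facets")
```

This prints 4, 6, 8 and 10 facets respectively, matching the counts in the argument above: each added dimension opens two new directions an observer could peek through.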
watcher Posted May 18, 2009

"This is not to be taken as an assertion about ontological reality; perhaps dimensions exist, perhaps they don't. We are not making any assumptions, simply because we know that the raw data does not give us any objectively defendable answers, and any worldview we come to build is not based on such explicit knowledge. It is absolutely essential to the epistemological analysis that we don't make superfluous assumptions at all."

you are claiming not to make any assumption, but you go on in your reply to somehow recommend or prefer 3d space over hyperspace. isn't that already an assumption? that 3d space is superior to 4, 5-d space in representing reality? i think it is impossible to create a model without any assumption being made, but it does simplify things if our a prioris are minimized.
arkain101 Posted May 18, 2009

"you are claiming not to make any assumption, but you go on in your reply to somehow recommend or prefer 3d space over hyperspace. isn't that already an assumption? that 3d space is superior to 4, 5-d space in representing reality? i think it is impossible to create a model without any assumption being made, but it does simplify things if our a prioris are minimized."

I've highlighted key aspects, and underlined key words, of the statement that you responded to:

"This is not to be taken as an assertion about ontological reality; perhaps dimensions exist, perhaps they don't. We are not making any assumptions, simply because we know that the raw data does not give us any objectively defendable answers, and any worldview we come to build is not based on such explicit knowledge. It is absolutely essential to the epistemological analysis that we don't make superfluous assumptions at all."

AnssiH also said:

"I have no objective proof for pointing out why that middle ground would be exactly 3 dimensions, but on the other hand I personally don't find it hard to believe that 3 might be just the appropriate number of dimensions in terms of cost-effectiveness."

He describes that, as for a "correct" view of the ontological, he carefully makes the point that asserting one would run counter to our purpose (even though our processing brains have already performed that task, and are performing that task (assuming 3D is correct), and so we live in 3D). But as for the cognitive processes that a brain evolved, AnssiH describes that, of all the variable assumptions a brain can make by forming a map of ontological reality, a quick analysis suggests the possibility that the 3-dimensional map would be the most efficient. Which I believe implies that maybe other dimensional maps have existed in some past time of history in some conscious processed map, or still do exist in some types of organisms; however, they either peaked in their evolutionary possibilities and remained dormant as what they are, or failed to survive and evolve to a point where the individual organism could sit and discuss the experience of it alongside us.

The point of all that is to support that assumptions are not made as to what ontological reality ought to be. However, for what it's worth at this stage, AnssiH hypothesises that, just like a lot of evolutionary traits, what remains is what survived, and what survives relative to environment is typically the most cost-effective, or efficient, epistemological process. What remains, and what is, in us, is the 3-dimensional epistemological map. Hence the elaboration on 3D's efficiency in comparison to other dimensional considerations.

It is a fine-line discussion... I agree... and maybe, watcher, you really do have a valid point with that quote: "it is impossible to create a model without any assumption being made", but I can't confidently decide whether it is valid or not based on that statement alone. So, if for any reason my response does not clear anything up, I'd be interested in what further you have to elaborate on that quoted statement. I am just as interested in things challenging the accuracy of this philosophical aspect as I am in things supporting it. I would like to weigh all that anyone could think up related to this.
arkain101 Posted May 18, 2009

Oh, by the way, I thought about this post afterwards, and realized it might be taken as an attempt to answer for AnssiH. I am not attempting to answer for AnssiH. I read what AnssiH wrote, and this is basically how I understood him; I included your post to try and kill two birds with one stone. One bird being a check to see if I understand AnssiH properly, and the other bird being your question, a question that I also looked into. (And when he responds I will have found out if I was on the mark... I should really be waiting for him to respond first... :hihi: I am only hoping I am correct, and providing an answer. I realize this could create confusion, so I will try to avoid this kind of post in the future... (my bad). This topic has got me excited to post and get involved... lol... no one's perfect, right?)

-looking forward to AnssiH's response-