hazelm Posted February 8, 2020 Report Posted February 8, 2020 Definition: An observation by Intel co-founder Gordon Moore that the capacity of electronic devices roughly doubles annually. Question: Are they saying that the capacity of already-made devices doubles? Or are they saying that newer devices will have double the capacity of the older ones? Does this apply to silicon chips? As we know, in quantum worlds, anything can happen. :-) Thank you. Quote
hazelm Posted February 8, 2020 Author Report Posted February 8, 2020 Sort of. The number of transistors (and hence complexity) has been approximately doubling each year, BUT they are reaching the theoretical limit on the number of transistors that can be mounted on silicon chips. Quantum computers, if they ever get them working on a commercial scale, will apparently surpass a silicon computer operating on binary. So what it is saying is that they are managing to load more onto newer chips. For a moment there I had those bouncy little electrons creating new electrons. :-) Then I thought better of it, but gremlins kept reminding me that I was in the quantum world. :-) Thank you. Quote
Mutex Posted February 8, 2020 Report Posted February 8, 2020 (edited) Definition: An observation by Intel co-founder Gordon Moore that the capacity of electronic devices roughly doubles annually. Question: Are they saying that the capacity of already-made devices doubles? Or are they saying that newer devices will have double the capacity of the older ones? Does this apply to silicon chips? As we know, in quantum worlds, anything can happen. :-) Thank you. It was originally about transistor count on silicon for integrated circuits, but switching speed has been incorporated as well; the two really go hand in hand. The doubling period is also now usually quoted as 18 months (and has been for a while). Moore's law has really been dead (it no longer applies) for quite some time; the curve is getting very flat. What really has not changed is the technology! Sure, you can now fit millions or billions of transistors per square millimetre of silicon, but the architecture has hardly changed at all. We are still using the same basic design for CPUs that we were using in the 1970s (my first CPUs were a 2650 and Z-80s); they have the same basic structure as your i7 or the latest thing from Intel. It's the standard Von Neumann architecture, with the same programming methodology. There are other non-Von-Neumann architectures (such as the Harvard architecture) that can provide advantages in computing power but require different thinking about how to program. Harvard separates data from program onto different busses, as opposed to how we generally do it now, where program and data are mixed. Harvard therefore has some advantages and can provide faster processing, but it is very difficult to program, or to port Von Neumann programs to, a different architecture. So Moore's law is just about the number of transistors and speed (and the power dissipation that results), not so much about actual system advancements.
I think method and system advances have been investigated less because of the performance gains that have come from transistor density and switching speed, but as Moore's law declines we are going to have to look more into different methods as well, such as parallel processing and different processing architectures. Edited February 8, 2020 by Mutex Quote
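The doubling behaviour Mutex describes can be sketched with a toy calculation. This is purely illustrative (the starting count of 2,300 transistors is roughly the scale of an early-1970s microprocessor, and the two-year doubling period is one commonly quoted version of the law), not actual product data:

```python
def projected_transistors(initial_count, years, doubling_period_years=2):
    """Project a transistor count forward assuming a fixed doubling period.

    This is the whole content of the 'law': exponential growth with a
    constant doubling time, not a statement about any particular chip.
    """
    return initial_count * 2 ** (years / doubling_period_years)

# Starting from a hypothetical 2,300-transistor chip, project 40 years
# of doubling every 2 years (i.e. 20 doublings):
count = projected_transistors(2_300, 40, doubling_period_years=2)
print(f"{count:,.0f}")  # about 2.4 billion
```

Note that changing the doubling period from 24 months to 18 months, as mentioned above, changes the exponent and hence the projection dramatically over decades, which is why the quoted period matters so much.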
Dubbelosix Posted February 14, 2020 Report Posted February 14, 2020 I would not say it is dead, but it is breaking down. Quantum computers will take over when it has reached its absolute limit, unless... quantum computers can teach us about technologies we have not understood properly. Quote
GAHD Posted February 15, 2020 Report Posted February 15, 2020 Definition: An observation by Intel co-founder Gordon Moore that the capacity of electronic devices roughly doubles annually. Question: Are they saying that the capacity of already-made devices doubles? Or are they saying that newer devices will have double the capacity of the older ones? Does this apply to silicon chips? As we know, in quantum worlds, anything can happen. :-) Thank you. It's referring to the ability to put roughly double the storage or double the active gates into the same surface area. It held true for magnetic storage like hard drives, tape, and disks for many years, and the same for silicon for logic or RAM/ROM. IBM is trying to keep the trend alive by moving into quantum computing, sidestepping the per-square-inch limit by including more than binary states: if a q-bit can reliably hold 4 states it's double the binary equivalent, and if they can expand that to 8, 16, and so on year by year, they'll maintain that general rule. Of course this comes with major changes to base architecture, since a standard serial UART can't accept more than binary input/output even as it bridges clock speeds. It's the standard Von Neumann architecture, with the same programming methodology. There are other non-Von-Neumann architectures (such as the Harvard architecture) that can provide advantages in computing power but require different thinking about how to program. Harvard separates data from program onto different busses, as opposed to how we generally do it now, where program and data are mixed. Harvard therefore has some advantages and can provide faster processing, but it is very difficult to program, or to port Von Neumann programs to, a different architecture. AFAIK you got it wrong there. We don't use VNA much anymore, and your own point about HA kinda points to this... 
AFAIK there are plenty of different architectures in use on different integrated circuits: not just differing instruction sets and registers, but entirely different operating methods across MCU/CPU/GPU types. A lot of that probably has to do with different doping levels in the silicon and efficiency-versus-speed bottlenecks in the MOSFET or resonator design on the die. E.g., the GPU architectures of NVIDIA and AMD are very different from each other, as well as different from CPU architecture. Quote
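GAHD's state-count arithmetic above can be checked directly: a digit that reliably distinguishes n states carries log2(n) bits of information, so a 4-state q-bit is worth two binary bits (double), an 8-state one three bits, and so on. A minimal sketch of that arithmetic:

```python
import math

def equivalent_bits(states):
    """Bits of information carried by one digit that can hold `states` values."""
    return math.log2(states)

# 2 states = 1 bit (ordinary binary), 4 states = 2 bits (double), etc.
for states in (2, 4, 8, 16):
    print(f"{states:>2} states -> {equivalent_bits(states):.0f} bits")
```

Note the growth is logarithmic in the state count: each doubling of the number of states adds only one bit, so going from 4 to 8 to 16 states year by year adds one bit's worth of capacity per step, not a doubling of capacity per step.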
hazelm Posted February 15, 2020 Author Report Posted February 15, 2020 (edited) Thank you, GAHD. hazel Edited February 15, 2020 by hazelm Quote