And the conference is in Italy. Check out the picture on the front page:
http://www.hpcc05.unina2.it/default.asp?r=501
SineSwiper wrote: Anybody know what clock frequency would be the physical limitation? It seems like it would be somewhat easy to calculate. Figured somebody would have written up an estimate on that.

It's not that simple. The depth of the instruction pipeline (which determines the distance; longer pipelines mean more information has to travel across greater distances) and the size of the transistors are also variables.
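For a rough sense of scale, here's a back-of-the-envelope sketch of how far a signal can travel in one clock cycle, assuming on-chip signals move at roughly half the speed of light; that fraction and the example frequencies are illustrative assumptions, not measured figures.

```python
# Back-of-the-envelope sketch: distance a signal can cover in one clock period.
# Assumes on-chip propagation at ~0.5c; that factor is an illustrative guess.

C = 3.0e8            # speed of light in a vacuum, m/s
PROPAGATION = 0.5    # assumed fraction of c for on-chip signals (illustrative)

def max_distance_per_cycle(freq_hz: float) -> float:
    """Upper bound on how far a signal can travel in one clock period, in meters."""
    return C * PROPAGATION / freq_hz

for ghz in (1, 4, 10, 100):
    mm = max_distance_per_cycle(ghz * 1e9) * 1000
    print(f"{ghz:>3} GHz -> at most ~{mm:.1f} mm per cycle")
```

The point is just that as the clock goes up, the distance a signal can cover per cycle shrinks, which is why pipeline depth (how far data has to move per stage) and feature size are part of the same question rather than something you can calculate from frequency alone.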
Mental wrote: Something I've wondered (and I'm mostly talking out of my ass here) is whether or not one could use a third dimension to add more processing power to a chip. Since all current processors I know of are printed on silicon dies and so necessarily are in two dimensions, if you could somehow construct a way to have a fully three-dimensional chip design (where the current can flow "up" and "down" instead of just in two dimensions), could you have a more complex or more powerful chip design?

I don't know. That's outside of my (basic) architecture knowledge. I have a computer scientist's understanding of a processor - you'd need to talk to an engineer.
Kupek wrote: I don't know. That's outside of my (basic) architecture knowledge. I have a computer scientist's understanding of a processor - you'd need to talk to an engineer.

And looky looky, we got one right here! (I actually started writing the following post before you posted yours, Kup - it's nice the way things work out sometimes, isn't it?)
Mental wrote: Oooo, I want to talk to you, too. I want to learn about security from a university perspective as well...

Nah, I'm literally the only research student in the faculty that is not doing pure computer/network security related stuff here. My research area is in eBusiness applications. All your servlet, .Net bunch of nonsense.
Ishamael wrote: Parallel processing is definitely the wave of the future. The only problem of course is, not all things are parallelizable.

Yup. The sort of work I did might still remain firmly in the domain of high performance computing. It's not necessarily worth the effort to multithread a typical desktop application, particularly with fine-grained multithreading. But all production OS kernels have been multithreaded for a while, so overall system throughput can still improve with the emerging hardware.
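To put a number on "not all things are parallelizable": the usual way to quantify it is Amdahl's law, where the serial fraction of a program caps the speedup no matter how many processors you throw at it. A minimal sketch, with made-up parallel fractions:

```python
# Amdahl's law: speedup on n processors when only part of the work can be split.
# The parallel fractions below are made-up examples, not measurements.

def amdahl_speedup(parallel_fraction: float, n_procs: int) -> float:
    """Ideal speedup when `parallel_fraction` of the runtime parallelizes perfectly."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

for p in (0.5, 0.9, 0.99):
    line = ", ".join(f"{n} cores -> {amdahl_speedup(p, n):.1f}x" for n in (2, 8, 64))
    print(f"parallel fraction {p:.2f}: {line}")
```

Even with 99% of the work parallelizable, 64 cores only buy you roughly a 39x speedup, and at 50% you never get past 2x. That's why multithreading a typical desktop app is often not worth the effort, while HPC codes, which are mostly parallel, keep scaling.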
Ishamael wrote: So I see limited gains in that area, with most of the advancements being pushed by game programming, where graphics-specialized hardware is commonplace, physics-specialized hardware is being developed, and AI-specialized hardware may become popular too.

Hrm. I think you're severely underestimating the contributions of scientific and high performance computing, which have much higher computing requirements and have been using parallel processing for decades.
Garford wrote: Oops, I mean "hidden in here" as in, "hidden in the shrine".

I hope you mean "doing very well" as in "selling a lot of copies". See my post on the board from yesterday about a complete backdoor in their color management module...
Anyway, Microsoft is surprisingly doing very well when it comes to cost/performance/security etc. at the mid-to-high range of eBusiness/enterprise solutions, but that's another debate.