The Other Worlds Shrine

Your place for discussion about RPGs, gaming, music, movies, anime, computers, sports, and any other stuff we care to talk about... 

  • Presentation by Epic Games CEO on graphics hardware future

  • Somehow, we still tolerate each other. Eventually this will be the only forum left.
 #139202  by Kupek
 Tue Aug 11, 2009 10:20 am
http://graphics.cs.williams.edu/archive ... PG2009.pdf

It's really technical. I wish I could see the actual presentation, but I have a good idea of what he said from the slides. In short, we probably won't have "graphics cards" in the future. General-purpose CPUs (like the processor in your computer) and GPUs (that loaf-of-bread-sized thing you installed in your computer to make games purdy) are converging in architectural design, and we'll likely only have one in the future. He then talks about the implications this has for how games are written and designed.

 #139210  by SineSwiper
 Tue Aug 11, 2009 7:03 pm
I don't see how it COULDN'T go this way. After all, are they just going to have you stuff a 2-foot PCI-XXXE card onto your motherboard 10 years from now?

However, these new chips are going to be HUGE.

 #139214  by Mental
 Tue Aug 11, 2009 8:02 pm
Sweeney is respected throughout the industry for his influence on engine design. I don't have time to check it out right now, but if he says something, you usually want to listen.

 #139217  by Kupek
 Tue Aug 11, 2009 9:58 pm
SineSwiper wrote:However, these new chips are going to be HUGE.
Probably not - Moore's Law!

Moore's observation was about the number of transistors that could fit on a chip, not the clock speed. Any exponential growth has to eventually flatten out, but it's still holding for now. That's why we're seeing these radically different processor architectures come to market, like Cell, Intel's quad cores, GPUs for general-purpose computing, and Intel's upcoming Larrabee. The number of transistors we can fit on a chip is still increasing, but we don't know what to do with them.

For a while, architects would use the extra transistors for more cache and to increase the pipeline length, while upping the clock speed. As the clock speed increased, the relative cost of going to RAM also increased, so more cache helped offset that. Deeper instruction pipelines allowed more instructions to be in flight at the same time, increasing the exploitation of instruction-level parallelism (ILP). The problem was that as those pipelines got deeper, it got harder to keep them coordinated at ever-higher clock speeds. Hence the fact that few CPUs come to market now above ~3.2 GHz.
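The dependency-chain idea behind ILP can be sketched in plain Python (a toy illustration of the data dependencies, not of actual pipeline hardware): a running sum is one long chain of dependent additions, while a pairwise tree reduction of the same numbers has a much shorter critical path, leaving independent additions that a superscalar pipeline could overlap.

```python
def serial_sum(xs):
    # Each addition depends on the previous result: a dependency
    # chain of length len(xs) - 1 that no amount of ILP can overlap.
    total = 0
    for x in xs:
        total = total + x
    return total

def tree_sum(xs):
    # Pairwise reduction: additions within one level are independent
    # of each other, so the critical path is only ~log2(n) levels deep.
    xs = list(xs)
    while len(xs) > 1:
        xs = [xs[i] + xs[i + 1] if i + 1 < len(xs) else xs[i]
              for i in range(0, len(xs), 2)]
    return xs[0]

data = list(range(16))
print(serial_sum(data), tree_sum(data))  # 120 120
# Same answer, but critical paths of 15 dependent adds vs 4 levels.
```

Hardware extracts this kind of independence automatically, which is exactly the well that Kupek says has run dry.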

That was the old way of improving performance: optimize the processor for sequential code. That no longer works, so we can't continue to do that. But, at the same time, we can still fit more and more transistors on a chip.

So, right now, processor architects are in an experimental phase. We know the chips of the future are going to be parallel; we just don't know exactly how.

 #139218  by SineSwiper
 Tue Aug 11, 2009 10:13 pm
Frankly, the OSs themselves seem to be a bottleneck in terms of parallelism. After all, why force all programs to be coded with new processes when the logic to divide tasks should be better left with the OS itself? This concept of ILP sounds pretty cool and a step in the right direction, but there should be some high-level division of work on the OS level.

For example, all programs nowadays are mostly high-level calls into OS libraries. Even Visual C++ stuff is just accessing DirectX, or Windows GUI libraries. Make THOSE multi-threaded. Really, it makes sense: it's the low-hanging fruit. Why re-code billions of programs when you can re-code the libraries themselves, which are used by those billions of programs?
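The idea Sine is floating could look something like this minimal sketch (the library routine `transform_all` and its callers are hypothetical): the parallelism lives entirely inside the library, so existing caller code doesn't change at all.

```python
from concurrent.futures import ThreadPoolExecutor

def transform_all(items, transform):
    # Hypothetical library routine. It fans the per-item work out
    # across a thread pool internally; callers keep calling it the
    # same way they always did, and results come back in order.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(transform, items))

# Existing "single-threaded" application code, unmodified:
print(transform_all([1, 2, 3, 4], lambda x: x * x))  # [1, 4, 9, 16]
```

The catch, per Amdahl's law, is that this only helps when time is actually spent inside library calls; all the application logic between the calls stays serial.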

 #139220  by Kupek
 Tue Aug 11, 2009 10:59 pm
ILP isn't a "step in the right direction." It's the status quo. It's how we've been able to squeeze more performance out of sequential code for years - and why we no longer can.

It's not the OS that will handle the parallelism; what I think will happen is that the virtual machines and runtimes that sit between the OS and "your code" will have to handle much of it. The problem is that the language you write your programs in will have to expose the parallel constructs. Getting all of that right - how to express the parallelism, how to implement it in a runtime or VM, what kind of parallelism it will even be - is hard, and that's what the industry and researchers are trying to figure out right now.
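One modern stand-in for the kind of construct Kupek describes is a future: the program only *declares* that some calls are independent, and the runtime decides when and where each one actually runs (sketched here with Python's `concurrent.futures`; any futures-style runtime would look similar).

```python
from concurrent.futures import ThreadPoolExecutor

def work(n):
    return n * n

# The code expresses only that these four calls are independent.
# Scheduling them onto threads (or, in other runtimes, cores or
# machines) is the runtime's job, not the programmer's.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(work, n) for n in range(4)]
    results = sorted(f.result() for f in futures)

print(results)  # [0, 1, 4, 9]
```

Note that correctness doesn't depend on completion order; the program states the parallelism, and the runtime implements it.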