The Other Worlds Shrine

Your place for discussion about RPGs, gaming, music, movies, anime, computers, sports, and any other stuff we care to talk about... 

  • GeForce GTX 260

  • Somehow, we still tolerate each other. Eventually this will be the only forum left.
 #136665  by Tessian
 Tue May 19, 2009 10:43 pm
So I bought a new graphics card this week... mainly because my old one kept rebooting my PC when it got the least bit hot. Figured I'd go all out and got the EVGA GTX 260... it is a BEAUT!

A good bit bigger than I expected... I knew the GTXs were bigger, but this thing is the size of a loaf of bread. It almost did not fit in my case... technically it doesn't, but I jammed it in there and was extremely lucky everything still lined up. Notice how my case bows out in the back.

http://twitpic.com/5j4cz
http://twitpic.com/5is90
http://twitpic.com/5j4c7

One thing I don't like is that it's kind of loud when I'm playing a game... while running CoD: World at War (came with the card) on full settings, the fan was going pretty fast; it was Xbox loud. Luckily, unlike the Xbox, it's very quiet when not playing a game, such as now.

Amazes me what they're doing these days with these things... when will I need a separate case for my video card??

 #136667  by SineSwiper
 Wed May 20, 2009 6:28 am
I notice the 750W power supply, too. Pretty soon, these PCs will require their own power plant. When are we going to get the eco-friendly solar or wind powered version?

 #136670  by Tessian
 Wed May 20, 2009 7:52 am
SineSwiper wrote:I notice the 750W power supply, too. Pretty soon, these PCs will require their own power plant. When are we going to get the eco-friendly solar or wind powered version?
Yeah, these monsters require 500W minimum with 2 PCI-E connectors... my old PSU was 460W so I had to buy a new one. VERY nice Corsair; I had no idea they made such quality PSUs until I saw this one top the list on Newegg for best reviews.

At this point there's a better chance of getting a PC with a nuclear reactor in it... solar/wind are far too weak for a power-hungry computer :P

 #136671  by Eric
 Wed May 20, 2009 11:35 am
*Hides 3x GTX set-up*

 #136672  by Imakeholesinu
 Wed May 20, 2009 1:25 pm
I don't want to start a debate, but I'd like to know at what point hardware really overtakes software developers' ability to fully utilize its potential. For example, I have an 8600GT card that I purchased last year. It runs every game I have flawlessly on High settings. I'm wondering: if I were to take my 8600GT, which runs Counter-Strike: Source at an average of about 60-70 FPS, and set it next to your 260 at the same settings, would we be able to tell a difference? What about in Fallout 3?

I'm glad hardware manufacturers can keep going and building great and wonderful cards, but I hate to see features get squandered in the process. Remember 3dfx and the Glide API? It was cutting edge back when Quake II was out; Glide stomped OpenGL to the curb. We never hear about API wars like that anymore, probably because it segregated the market between having a 3dfx card and having an nVidia/ATI/Rendition card at the time.

Also, I want to know why game developers are not adopting multi-threaded gaming for multi-core processor setups. Does anyone know of a game that fully supports a 64-bit operating system and memory space and can operate above the 2GB memory limit a 32-bit operating system imposes? If a game today, in the Core 2 era of processors, can only utilize one of those cores, then I think the developers are too far behind. Soon we will be hearing about multi-core GPUs, which is what nVidia has been working on for the last 3 years.

Another example: PhysX by Ageia. Not all games take advantage of this hardware, which means a developer needs to code for it so the software can use it. Name me 2 other games (other than Far Cry 2 and Crysis) that take full advantage of a PhysX card. I remember when everyone in the gaming world was so impressed with this, but now you rarely hear about it since nVidia took it over. Did they shelve it after they enabled support on all of their cards via a driver update?

My point is, having the biggest and best hardware is like having a huge penis, you have a huge penis, but it doesn't mean it will get you laid more if you don't know how to use it.

 #136673  by Tessian
 Wed May 20, 2009 1:53 pm
Anarky, I replaced an 8600GT with this... the card ran great and gave me the performance I wanted, it just kept rebooting, so I had to replace it, and at that point why waste money on the same thing?

No, of course older games will only run so well... Counter-Strike won't look any better on my PC than yours, BUT newer games will. Far Cry 2, for example, I had to back off on graphics settings for, but now I bet I can run it at the highest settings. I bought it more planning for the future than for now... right now it is slightly overkill. Hopefully I'll be able to keep this for a good 3 years at least before needing to replace it.

As for PhysX: it's actually native on GeForce cards now. Check your nVidia control panel if you have anywhere near recent drivers. I know my 8600 was able to do PhysX. It's not that popular yet, but at least they significantly increased the number of cards that support it now.

 #136674  by Kupek
 Wed May 20, 2009 1:59 pm
Imakeholesinu wrote:Also, I want to know why game developers are not adopting multi-threaded gaming for multi-core processor setups.
Two things. First, games have been taking advantage of parallel hardware for years - that's what a graphics card is. It's a specialized vector processor. It's very good at handling data parallelism: http://en.wikipedia.org/wiki/Data_parallelism Graphics requires lots of vector and matrix operations, which you can speed up through data parallelism.

But you were wondering about the multicore general purpose processor, and the short answer is "It's hard." The code in a game that's left over - physics, AI, etc. - is harder to parallelize. The tasks they are accomplishing are more inherently sequential. I'm sure that it can be done, and it probably will be done, but writing good code is hard, and good, parallel code is even harder.

Also keep in mind that, right now, the majority of the computational power is needed for the graphics. That's already taken care of by the graphics card, which is, at this point, a general-purpose hardware accelerator. (In my area of research, high performance computing and systems, using GPUs for scientific computations is a growing trend.) Parallelizing the less computationally intensive parts of a game won't have many benefits.

Addendum: GPUs have been "multicore" for years. What's coming is integrating a GPU onto the same chip as a general purpose multicore processor.
Last edited by Kupek on Wed May 20, 2009 2:23 pm, edited 1 time in total.

 #136678  by Mental
 Wed May 20, 2009 2:18 pm
Imakeholesinu wrote:If a game today in the Core 2 era of processors can only utilize one of those cores, then I think the developers are too far behind.
I agree with Kupek. You're asking a LOT of the developers.

Multithreaded, multicore parallel programming is some of the hardest code in the world to write. There probably aren't more than a few thousand people in the world who can do it successfully at any given time. The difficulty is an order of magnitude out of proportion to the speed gain it represents.

I understand and recognize that gamers are some of the most demanding, unforgiving consumers on the planet, but you're basically asking for developers who've been doing backflips (so to speak) to make the jump to doing triple backflips from one moving helicopter to another, while they're on fire. It's going to take some time for the development talent to work up to it, and it's no reflection on the talent itself. What you're asking is very, very, very hard.

 #136700  by Imakeholesinu
 Wed May 20, 2009 8:57 pm
http://msdn.microsoft.com/en-us/library/bb147385.aspx

Microsoft makes it sound like it is pretty easy to do. So why don't developers do this?

A couple quick Google searches show that there are plenty of resources out there for developers to begin coding and building engines in 64-bit. It appears that the Source engine, which was used for HL2: Lost Coast, has a 64-bit version of the game.

There is a huge performance benefit in enterprise applications that run native 64 bit, why are there not that many games that do? Lack of 64 bit OS?

 #136701  by Kupek
 Wed May 20, 2009 9:07 pm
64-bit compatibility is entirely different from parallelization.

First, there are two benefits to being in 64-bits: you can address more than 4 GB of memory, and you have greater precision in math. I doubt math precision is an issue for games, so the only issue that matters is being able to address more memory. Someone can correct me, but I don't think that's much of an issue, either. Right now, at least.

Also, if you read that and think it's easy, then I submit you don't understand the ramifications of each point they make. Writing an application using good C/C++ coding habits should yield something that can be used in both 32 and 64-bit environments. But once you mix in having to deal with different kinds of hardware, which may or may not have 64-bit drivers, having to deal with the system in both 32-bit and 64-bit ways... it gets messy.

This line alone could cause major headaches:
Inline assembly code is not supported on 64-bit platforms and needs to be replaced.
Edit, after yours: There's a huge performance benefit for "enterprise" applications that gobble up huge amounts of memory. That's the only real benefit. Being in 64-bits doesn't make everything magically go faster.

 #136703  by SineSwiper
 Wed May 20, 2009 9:45 pm
Kupek wrote:But you were wondering about the multicore general purpose processor, and the short answer is "It's hard." The code in a game that's left over - physics, AI, etc. - is harder to parallelize. The tasks they are accomplishing are more inherently sequential. I'm sure that it can be done, and it probably will be done, but writing good code is hard, and good, parallel code is even harder.
Replay wrote:Multithreaded, multicore parallel programming is some of the hardest code in the world to write. There probably aren't more than a few thousand people in the world who can do it successfully at any given time. It's an order of magnitude more difficult than the speed gain it represents.
Pffft...I fork into multiple processes all the time on Perl. How hard can it be? If there are only a few thousand people in the world who can do it successfully, then I must be a fucking genius!

And even if it is hard, so what? Everybody's PC is multi-core nowadays. Get used to it! Writing GAMES is hard, so why should they be bitching about making hard code?

"Oh god! I have to think about which tasks should be split off into separate processes. I might actually have to write a parent/child communication layer! OH NOES!"

 #136709  by Mental
 Wed May 20, 2009 10:36 pm
Sine, with all due respect, I think you don't understand the concepts we're talking about. I run multiple timers on Windows Forms in C# all the time, which is the equivalent of the Perl threads you're talking about. It doesn't matter that there are many of them, because they all take turns trading off timeslices in a programming architecture that's fundamentally single-threaded under the hood. No process is actually running literally simultaneously with any other process, so you never have to worry about a simultaneous memory access at the hardware level by two threads. You might have a lock/unlock issue with the hard drive, but not usually the memory.

When you're talking about multiple processors, you're not talking about that. You're talking about several processors that are trying to access the same data at the same time, and there aren't easy ways to know which processors are accessing a given bit of memory at any given instant. This is more akin to writing database logic and lock/unlock access routines, but for every bit of shared data in your app, which adds a layer of complexity. You can't have two processors access the same memory at the same time, so you have to make sure that what each thread is running at any given instant doesn't access the same data anyone else's routine is accessing.

There are some routines that can be fairly easily parallelized, and some that can't, and it's not always clear at the outset which is which. Yes, you could probably parallelize a one-person programming task fairly easily, but that's not what we're talking about - these are teams with dozens to hundreds of people and huge amounts of coding and they're all manipulating a lot of the same stretches of data. It helps to have clearly modular code and good data abstraction and good clean design, and that's great, but I'm not sure if you're aware (read as: I am very very sure that you are highly aware) that a lot of production code isn't always as modular as it should be, particularly highly commercial code on deadline.

I should qualify my statement: I think that there are plenty of programmers who can successfully write code for parallelized routines under the guidance and direction of an experienced parallel processing project manager, if given the requirements and a basic idea of the concept. But you also need that manager, and *that* is the girl or guy I'm talking about.

Perhaps I should have said "I don't think there are more than a few thousand people in the world who can successfully lead a large parallel processing project to completion at any one time"...I guess far more people can write the code, I just don't think the code will plug into anything if there's not someone who really knows what they're doing leading the project. The number of people with parallel processing programming experience is growing, but it's not high.

Anyway, Kup probably knows more than I do on this. I'd trust what he's saying.

 #136710  by Kupek
 Wed May 20, 2009 10:39 pm
Sine, the kind of parallelism you're going to exploit in a game is going to be finer grain than what is allowed in a fork-exec paradigm.

Replay's claim of only 1,000 is too small. But it is hard, and it sounds like you have no experience in the area. If your goal is to make the program faster through parallelism, that generally means you're going to use multiple threads with shared state. This is hard. It requires synchronizing correctly on your shared data structures, and in such a way that you actually benefit from the parallelism.

 #136711  by Mental
 Wed May 20, 2009 10:56 pm
Kupek wrote:64-bit compatibility is entirely different from parallelization.

First, there are two benefits to being in 64-bits: you can address more than 4 GB of memory, and you have greater precision in math. I doubt math precision is an issue for games, so the only issue that matters is being able to address more memory. Someone can correct me, but I don't think that's much of an issue, either. Right now, at least.
Everything is an issue sometimes, but no, generally a lot of us even end up trading floating-point for fixed-point math because we need speed, not precision. As long as you can fake the illusion successfully, a few one-trillionths of a decimal point one way or the other doesn't make much of a difference.

We always love memory, but it'll be a while before the target machine for most games has 4GB of RAM, so no, that doesn't matter much right now either.

What game programmers never get enough of is speed (and never will)...we can always use more processing cycles. So there are indeed some very heavy gains to be made from multiple cores and parallelization. But I think it's only just now that dual-core processors have really started to comprise a sizable chunk of the PCs in the marketplace (they've been sold for awhile, but that's not the same thing), and so it's only now that more programmers are really taking an interest in learning how to code for it.

It's a lot like where GPU programming was, say, probably eight years ago (and since the GPU is indeed a parallel processor, the comparison is valid). Everything finally had a GPU, but only a few killer teams really understood shader programming and had that expertise. Now it's eight years later, and more and more people at all levels of game programming are picking up casual HLSL experience as the language has gotten friendlier and more knowledge has become available. I'm writing pixel shaders that run on the 360 myself now. GPUs are doing things they couldn't have dreamed of eight years ago.

So will parallel cores, one day. I suspect the wait may be similar, and I think people will say it was worth it in the end - look at the spectacular heights shader programming is reaching.

Personally, I don't even understand the syntax of coding for a two-core system. How do you actually make sure to utilize the second core? Are there special development libraries available? How would I do it in MSVC++, or C#? Are there special function calls? I'd love it if you'd explain the basics, Kup. In particular I don't understand whether one core is the "main" core and the second is used to farm out additional threads, or whether it's more complicated. The GPU model of a graphics processor subordinate to the CPU makes sense to me, but I don't know the models for a dual-core or how threads get assigned to the various processors.

 #136712  by Mental
 Wed May 20, 2009 11:00 pm
Kupek wrote:Sine, the kind of parallelism you're going to exploit in a game is going to be finer grain than what is allowed in a fork-exec paradigm.

Replay's claim of only 1,000 is too small. But it is hard, and it sounds like you have no experience in the area. If your goal is to make the program faster through parallelism, that generally means you're going to use multiple threads with shared state. This is hard. It requires synchronizing correctly on your shared data structures, and in such a way that you actually benefit from the parallelism.
I never actually said one thousand, I said a few thousand, but you're right, it probably is too small. The last time I had serious conversations about parallel core development with someone was with my friend who does netcode at EA back in late 2006, and in his opinion there weren't more than a handful of guys in his department who understood it well enough to lead a project. I'd suspect it's grown by now.

Still, I would bet that even in this market, you'd have a tricky job hiring a team composed expressly of parallel-aware programmers, in contrast to hiring GPU-aware programmers (no problem with the second). I doubt you'd do better than getting a few really experienced people, with the rest having to learn on the go from them.

 #136718  by SineSwiper
 Wed May 20, 2009 11:18 pm
Replay wrote:Sine, with all due respect, I think you don't understand the concepts we're talking about. I run multiple timers on Windows Forms in C# all the time, which is the equivalent of the Perl threads you're talking about. It doesn't matter that there are many of them, because they all take turns trading off timeslices in a programming architecture that's fundamentally single-threaded under the hood. No process is actually running literally simultaneously with any other process, so you never have to worry about a simultaneous memory access at the hardware level by two threads. You might have a lock/unlock issue with the hard drive, but not usually the memory.

When you're talking about multiple processors, you're not talking about that. You're talking about several processors that are trying to access the same data at the same time, and there aren't easy ways to know which processors are accessing a given bit of memory at any given instant. This is more akin to writing database logic and lock/unlock access routines, but for every bit of shared data in your app, which adds a layer of complexity. You can't have two processors access the same memory at the same time, so you have to make sure that what each thread is running at any given instant doesn't access the same data anyone else's routine is accessing.
True, but if you give each task its OWN memory share and use a communication layer to share data to the master thread/process/whatever only when it's needed, you eliminate problems associated with a process polluting somebody else's memory heap.

I keep hearing this talk about "It's hard, it's hard, it's hard," but frankly, those are just excuses. Every new technology is harder to program for than the last, especially in its first stages. It's much, much harder to make Crysis than Pong, or that horrible E.T. game. Even factoring in advances in code development and engines, it was STILL easier to make an Atari 2600 game in the '80s than it is to make a modern FPS today.

However, the fact of life is that multi-processor systems are already commonplace, sooo....

<big><big>GET USED TO IT!</big></big>

Making excuses about how hard it is to code the technology doesn't change the fact that what was once using close to 100% of a single-core CPU is now using 25% of a quad-core CPU. Your games are now horribly inefficient, and your fans DO NOT appreciate the fact that you're making lame excuses as to why your game is not using the full power of the processor. Get the fuck used to it!

 #136722  by Tessian
 Wed May 20, 2009 11:28 pm
I don't know... these days it seems the only real important things to look for in a game's requirements are physical memory and graphics card. CPU is listed of course, but when was the last time that was an issue? My point is I don't see how forcing games to make full use of multiple cores on the CPU will do much for the games themselves... as Kup said it's mostly the GPU pulling the weight now.

When was the last time a game of yours ran poorly because your CPU was too slow? I can only ever recall upgrading memory and graphics in order to play better games.

 #136725  by Mental
 Wed May 20, 2009 11:59 pm
SineSwiper wrote: True, but if you give each task its OWN memory share and use a communication layer to share data to the master thread/process/whatever only when it's needed, you eliminate problems associated with a process polluting somebody else's memory heap.

I keep hearing this talk about "It's hard, it's hard, it's hard," but frankly, those are just excuses. Every new technology is harder to program for than the last, especially in its first stages. It's much, much harder to make Crysis than Pong, or that horrible E.T. game. Even factoring in advances in code development and engines, it was STILL easier to make an Atari 2600 game in the '80s than it is to make a modern FPS today.

However, the fact of life is that multi-processor systems are already commonplace, sooo....

<big><big>GET USED TO IT!</big></big>

Making excuses about how hard it is to code the technology doesn't change the fact that what was once using close to 100% of a single-core CPU is now using 25% of a quad-core CPU. Your games are now horribly inefficient, and your fans DO NOT appreciate the fact that you're making lame excuses as to why your game is not using the full power of the processor. Get the fuck used to it!
If by "you" you mean me specifically, your comment isn't relevant. I don't have any reason as an indie developer to waste time learning about dual-cores right now, other than to prepare myself for a career in the future. Dual-core technology will not improve the scale of the projects I work on. I'm working on the 360 with XNA, not AAA-level PC games. It's a small-team/indie development setup. They haven't even exposed the multi-core 360 library to us, and probably with good reason. My time is better spent learning about shaders on the GPU, because that's what I'm likely to see.

So I will assume you mean "you" in the sense of "you, the hypothetical developer at a major game company working on a marquee title that must use a dual-core to be on the cutting edge". Just about anyone else is likely to see way more improvement per dev-hour spent working on other facets of a game - optimization, shaders, whatever.

You're not hearing what I'm saying: the technological expertise isn't there. You can only teach people how to do things so fast. Dual-core programming isn't the kind of thing you can reasonably expect to learn in three months and then, bam, you're off and running coding whatever you want. Also, the technological *access* isn't there either. There aren't as many multi-core libraries. You can get at the GPU pretty easily through a variety of technologies now, but that's not true for dual cores.

If you're saying that the industry should have prepared better for dual-core processing - well, it's fine to say, but there are a lot of things that a game developer has to put resources into, and they're always tight. Some developers won't want to put dev time into reaping an unknown speed gain (and maybe lack of stability) from an unproven dual core technology when they know they could write shaders that improve graphic quality at an existing framerate reliably, or other such tradeoffs.

Giving each task its own memory share is a wonderful idea. I'm sure that it's a common practice. But it's not that simple, because you still need that communications layer, and the more data that is shared, the more complex it will be. Very few real-world tasks boil down to "here's this data here that we manipulate, and here's this other data over there, and they don't interact much". Games share a lot of data between all aspects of the program at the same time, and are notoriously difficult to compartmentalize.

I think my point is that developers aren't gods. It's easy to heckle the big guys from the cheap seats, but what they do is harder than you realize. If you have a little patience, I do believe it will get there. Alternately, try learning about it instead of just complaining that other people don't know how to do their jobs well enough, particularly when you can't do it either.

 #136727  by Mental
 Thu May 21, 2009 12:16 am
Sorry, multi-core, not dual-core. But that exposes another element of the problem, which is that it's not so easy to break tasks into chunks that will actually take up four processors. It's not "four times as fast" by any means. What if you only have two or three threads that can safely concurrently run without deadlock? What if you have three threads you can concurrently run, so to realize your full speed potential you have to make the game for single-core, dual-core, and quad-core systems differently? That shit starts to boggle the mind real fast.

Give it time.

 #136732  by Kupek
 Thu May 21, 2009 2:46 am
Sine, there's decades of work in parallel computing that you're not familiar with. There are many different techniques for writing parallel code. Shared-state multithreading, however, is generally going to give you the best performance.

GPUs are already doing most of the heavy lifting for games. Read up on Amdahl's Law (http://en.wikipedia.org/wiki/Amdahl%27s_law) for an explanation of why speeding up the non-graphics portion of PC games probably won't make a big difference in overall performance.

Replay, you've basically asked for a primer on the basics of parallel computing. Hopefully I can write up a few paragraphs later. But for now, look up POSIX threads. That's generally the lowest level one has to deal with when writing multithreaded code, and even if you're not on a system with Pthreads, the concepts still apply. On top of that, you can build better abstractions. (And people have.)

 #136733  by Mental
 Thu May 21, 2009 3:16 am
I will do that. Thanks man.

There's decades of work in the realm of academia, but not a lot of it has filtered into the workplace, in my experience. It's usually grad students or master's students, or bachelors of science with a specific focus on parallelism, who have experience with the theory.

I don't know. Maybe I'm not as familiar with current statistics as I thought, but I don't personally know a lot of programmers who have any experience with parallelism at all, and I've known quite a few. And it takes more than "any experience" (i.e., "I heard about that in college once") to write production code. My friend noted that when he was at EA, the senior-level programmers wouldn't let anyone junior touch parallel processing code without a very, very good reason.

 #136742  by SineSwiper
 Thu May 21, 2009 7:49 am
Granted, the GPU handles much of the heavy lifting, but the CPU is nothing to sneeze at. Improvements in CPU speed still make a good impact on the speed of the system.

Sure, I'm used to my 8-core server, so if I'm not running multi-process code, a program isn't going to use enough of the resources to spread the load. But core counts will continue to increase, and game programmers (as well as every other software developer) had better get a handle on the technology fast.

 #136748  by Lox
 Thu May 21, 2009 9:04 am
I've done a little bit of basic multithreaded programming and it's tough stuff. Computers have been designed to process instruction after instruction in a linear fashion. Implementing parallel computing without bugs is extremely complicated, and it's even more complicated to do it and actually gain something. Difficulty shouldn't be a hindrance, however, and it isn't, but there is a lot more involved than just "kick off threads and you've got parallel computing".

 #136757  by Mental
 Thu May 21, 2009 12:52 pm
Lox wrote:I've done a little bit of basic multithreaded programming and it's tough stuff. Computers have been designed to process instruction after instruction in a linear fashion. Implementing parallel computing without bugs is extremely complicated, and it's even more complicated to do it and actually gain something. Difficulty shouldn't be a hindrance, however, and it isn't, but there is a lot more involved than just "kick off threads and you've got parallel computing".
I think you just succinctly explained what was taking me paragraph after paragraph.

 #136758  by Lox
 Thu May 21, 2009 1:24 pm
Only because between you and Kupek, you've covered all of the nitty gritty details that I don't feel like writing. :) I had the luxury of summation because I'd just be repeating you guys.