SineSwiper wrote:
True, but if you give each task its OWN memory share and use a communication layer to share data with the master thread/process/whatever only when it's needed, you eliminate the problems that come from one process polluting somebody else's memory heap.
I keep hearing this talk about "It's hard, it's hard, it's hard", but frankly, those are just excuses. Every new technology is harder to program than the last, especially in the first stages of developing for it. It's much, much harder to make Crysis than it is to make Pong, or that horrible E.T. game. Even if you factor in the advances in code development and engines, it was STILL easier to make an Atari 2600 game in the '80s than it is to make a modern FPS today.
However, the fact of life is that multi-processor systems are already commonplace, sooo....
GET USED TO IT!
Making excuses about how hard the technology is to code for doesn't change the fact that what was once using close to 100% of a single-core CPU is now using 25% of a quad-core CPU. Your games are now horribly inefficient, and your fans DO NOT appreciate lame excuses for why your game isn't using the full power of the processor. Get the fuck used to it!
If by "you" you mean me personally, your comment isn't relevant. I don't have any reason as an indie developer to sink time into dual-core programming right now, other than to prepare myself for a career in the future. Dual-core technology will not improve the scale of the projects I work on. I'm working on the 360 with XNA, not AAA-level PC games. It's a small-team/indie development setup. They haven't even exposed the multi-core 360 library to us, and probably with good reason. My time is better spent learning about shaders on the GPU, because that's what I'm actually likely to see.
So I will assume you mean "you" in the sense of "you, the hypothetical developer at a major game company working on a marquee title that must use a dual-core to be on the cutting edge". Just about anyone else is likely to see way more improvement per dev-hour spent working on other facets of a game - optimization, shaders, whatever.
You're not hearing what I'm saying - the technological expertise isn't there. You can only teach people how to do things so fast. Dual-core programming isn't the kind of thing you can reasonably expect to learn in three months and then, bam, you're off and running, coding whatever you want. Also, the technological *access* isn't there either. There aren't many multi-core libraries to work with. You can get at the GPU pretty easily through a variety of technologies now, but that's not true for dual cores.
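To give a concrete taste of why it takes longer than three months, here's about the simplest multithreading trap there is. This is just an illustrative sketch, not code from any real engine or library, and I'm using modern C++ std::thread syntax for brevity: two threads bump one shared counter, and the total comes out wrong because the unsynchronized increments interleave.

```cpp
#include <iostream>
#include <thread>

int main() {
    long counter = 0;                      // shared, unprotected state

    auto bump = [&counter] {
        for (int i = 0; i < 1000000; ++i)
            ++counter;                     // read-modify-write race
    };

    std::thread a(bump), b(bump);
    a.join();
    b.join();

    // You'd expect 2000000. The actual number varies run to run,
    // because increments from the two threads get lost when they
    // read the same stale value at the same time.
    std::cout << counter << '\n';
}
```

And that's the trivial case. A real game has hundreds of pieces of state being touched like this every frame, and the bugs only show up sometimes, on some machines.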
If you're saying that the industry should have prepared better for dual-core processing - well, that's fine to say, but there are a lot of things a game developer has to put resources into, and those resources are always tight. Some developers won't want to put dev time into chasing an unknown speed gain (and maybe a stability hit) from unproven dual-core techniques when they know they could write shaders that reliably improve graphic quality at an existing framerate, or make other such tradeoffs.
Giving each task its own memory share is a wonderful idea. I'm sure it's common practice. But it's not that simple, because you still need that communication layer, and the more data that is shared, the more complex it gets. Very few real-world tasks boil down to "here's this data that we manipulate, and here's this other data over there, and they don't interact much". Games share a lot of data across every part of the program at the same time, which makes them notoriously difficult to compartmentalize.
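For what it's worth, here's roughly what that "own memory share plus communication layer" pattern looks like in the simplest possible case. Again, a hedged sketch, not anyone's actual engine code - every name in it (physicsWorker, WorkerResult, resultQueue) is made up, and I'm using modern C++ threading primitives for brevity. The worker owns its data outright, and only finished results cross a locked queue to the master thread:

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct WorkerResult { int frame; double value; };

std::queue<WorkerResult> resultQueue;   // the communication layer
std::mutex queueMutex;
std::condition_variable queueCv;

void physicsWorker() {
    // This vector is the task's private "memory share": no other
    // thread ever touches it, so no locks are needed while working.
    std::vector<double> privateState(1024, 1.0);

    for (int frame = 0; frame < 3; ++frame) {
        double sum = 0.0;
        for (double v : privateState) sum += v;   // stand-in for real work

        // Only the finished result crosses the boundary, under a lock.
        {
            std::lock_guard<std::mutex> lock(queueMutex);
            resultQueue.push({frame, sum});
        }
        queueCv.notify_one();
    }
}

int main() {
    std::thread worker(physicsWorker);

    // Master side: drain the three results as they arrive.
    for (int received = 0; received < 3; ++received) {
        std::unique_lock<std::mutex> lock(queueMutex);
        queueCv.wait(lock, [] { return !resultQueue.empty(); });
        WorkerResult r = resultQueue.front();
        resultQueue.pop();
        lock.unlock();
        std::cout << "frame " << r.frame << " -> " << r.value << '\n';
    }

    worker.join();
}
```

Even in this toy version, notice what happens the moment a second consumer shows up - say the renderer AND the AI both want the physics results. You either copy the data again or start sharing locks, and the complexity curve takes off. That's the part the "just give each task its own memory" argument glosses over.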
I think my point is that developers aren't gods. It's easy to heckle the big guys from the cheap seats, but what they do is harder than you realize. If you have a little patience, I do believe it will get there. Alternatively, try learning about it yourself instead of just complaining that other people don't know how to do their jobs well enough, particularly when you can't do it either.