The Other Worlds Shrine

Your place for discussion about RPGs, gaming, music, movies, anime, computers, sports, and any other stuff we care to talk about... 

  • The 360 as a medical breakthrough

  • Somehow, we still tolerate each other. Eventually this will be the only forum left.

 #140682  by Zeus
 Thu Sep 24, 2009 2:02 am
Didn't realize Rare was US-based. I always thought they were an England-based development house....

That is pretty cool. But I'm sure it's more the technique than the power of the system itself. The PS3 should easily have the horsepower to do it.

 #140684  by Mental
 Thu Sep 24, 2009 2:25 am
Sony's devkits and SDKs are, by industry consensus, not nearly as easy to use or quick to develop for as Microsoft's. That's why you're more likely to see stuff like this for the 360, barring a genuine corporate effort by Sony or another party to explicitly develop medical software for the Cell processor and the PS3.

 #140687  by Julius Seeker
 Thu Sep 24, 2009 7:18 am
Rare was located in Leicestershire, UK. What has existed for a while now is Rare in name only, so it may very well be located in the US now.

 #140688  by Tessian
 Thu Sep 24, 2009 7:51 am
How the hell can an Xbox 360 perform faster than a supercomputer? Is the programmer THAT good that they just streamlined the code so it can run on a 360 instead, or is there something else that made this possible?

From the article:
To create a heart model now, researchers must use supercomputers or a network of PCs to crunch millions of mathematical equations relating to the proteins, cells and tissues of the heart, a time-consuming and costly process. Scarle's Xbox system can deliver the same results five times faster and 10 times more cheaply, according to the study.

 #140690  by Kupek
 Thu Sep 24, 2009 8:20 am
http://research.microsoft.com/en-us/people/sscarle/

The research has little, fundamentally, to do with the 360. The comparison he made was about the benefit of using the GPU over the CPU. By "supercomputer," they mean a cluster of typical compute nodes containing x86 processors. But nothing prevents one from sticking a GPU into a typical compute node.

 #140693  by Mental
 Thu Sep 24, 2009 11:51 am
GPGPU (general-purpose computing on graphics processing units) techniques can yield a lot more than a fivefold speedup for some applications. It's a new technology that I've studied a bit, and I'm incredibly impressed with its implications. The basic idea is to use the GPU to analyze a texture or mesh with per-pixel or per-vertex operations, then pass that information back to the CPU.

For instance, let's say you have a texture you want to divide into individual areas and do something like a luminance analysis on (the classic looking-for-dark-areas in an x-ray). You can take a high-resolution texture, run it through a very specialized pixel shader, blit it to a smaller texture, and then read that texture data back.

As an example, say you have a 1500x1500 texture that you'd like to analyze in terms of smaller regions: blit the whole thing through a specialized scientific pixel shader onto, say, a 10x10 texture, and suddenly you've divided a 2.25-million-pixel image into 100 unique sections that are already partially analyzed. Read back a single pixel from the new texture and it gives you detailed information about one one-hundredth of the original, without having to iterate through 22,500 pixels on the CPU, which is very slow. A CPU is relatively slow at image processing or any kind of work that involves a texture or 3D mesh. GPUs are not, and therein lies the potential of these techniques.
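Here's a rough sketch of that reduction in CUDA rather than the 360's pixel-shader path (the kernel name, sizes, and setup are all mine, just to illustrate the idea; each output pixel ends up summarizing one 150x150 region):

Code:
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

#define IMG_DIM   1500                  /* source image is 1500x1500 */
#define GRID_DIM  10                    /* output grid is 10x10 */
#define REGION    (IMG_DIM / GRID_DIM)  /* 150x150 pixels per region */

/* One thread block per region; 256 threads cooperatively average the
   region's 22,500 pixels, like the pixel shader + blit described above. */
__global__ void regionLuminance(const float *img, float *out)
{
    __shared__ float partial[256];
    int rx = blockIdx.x * REGION;       /* top-left corner of this region */
    int ry = blockIdx.y * REGION;

    /* each thread sums a strided subset of the region's pixels */
    float sum = 0.0f;
    for (int i = threadIdx.x; i < REGION * REGION; i += blockDim.x) {
        int x = rx + i % REGION;
        int y = ry + i / REGION;
        sum += img[y * IMG_DIM + x];
    }
    partial[threadIdx.x] = sum;
    __syncthreads();

    /* standard tree reduction over the per-thread sums */
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            partial[threadIdx.x] += partial[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        out[blockIdx.y * GRID_DIM + blockIdx.x] =
            partial[0] / (REGION * REGION);
}

int main(void)
{
    size_t img_bytes = (size_t)IMG_DIM * IMG_DIM * sizeof(float);
    float *h_img = (float *)malloc(img_bytes);
    float h_out[GRID_DIM * GRID_DIM];
    for (int i = 0; i < IMG_DIM * IMG_DIM; i++)
        h_img[i] = 0.5f;                /* stand-in for real pixel data */

    float *d_img, *d_out;
    cudaMalloc(&d_img, img_bytes);
    cudaMalloc(&d_out, sizeof(h_out));
    cudaMemcpy(d_img, h_img, img_bytes, cudaMemcpyHostToDevice);

    regionLuminance<<<dim3(GRID_DIM, GRID_DIM), 256>>>(d_img, d_out);
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);

    printf("region (0,0) average: %f\n", h_out[0]);
    cudaFree(d_img); cudaFree(d_out); free(h_img);
    return 0;
}

Reading back the 10x10 output is the same trick as reading pixels from the blitted texture: 100 pre-digested answers instead of 2.25 million raw ones.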

My guess is that a staggering number of research developments will result from greater mastery of GPGPU theory and tech, which is very much in its infancy.

 #140695  by Kupek
 Thu Sep 24, 2009 12:05 pm
I'm writing a paper right now comparing the implementation of a data-parallel problem on a GPU, the Cell, and an Intel quad-core. For this problem, the GPU performs terribly, and it's all about the cost of transferring data from the host to the GPU. If you're interested, I can post a copy here.

My personal opinion is that using GPUs for general-purpose programming is a stop-gap solution. For problems which make N-squared (or worse) passes over memory, it's worthwhile to ship the data to the GPU; for problems that make a single linear (N) pass over the memory, it's not. Once GPUs are on-chip, or at least on-motherboard, everything changes. That's coming with things like Intel's Larrabee.
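A toy illustration of what I mean (my own example, not from the paper): time the host-to-device copy against a single linear pass over the same data. For an O(N) kernel, the transfer typically dominates:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* one linear pass over the data: the cheapest kind of kernel */
__global__ void scale(float *x, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        x[i] *= 2.0f;
}

int main(void)
{
    const int n = 1 << 22;              /* ~4M floats, 16 MB */
    size_t bytes = n * sizeof(float);

    float *h = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, bytes);

    cudaEvent_t t0, t1, t2;
    cudaEventCreate(&t0); cudaEventCreate(&t1); cudaEventCreate(&t2);

    cudaEventRecord(t0);
    cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);  /* ship data over */
    cudaEventRecord(t1);
    scale<<<(n + 255) / 256, 256>>>(d, n);            /* one O(N) pass */
    cudaEventRecord(t2);
    cudaEventSynchronize(t2);

    float copy_ms, kernel_ms;
    cudaEventElapsedTime(&copy_ms, t0, t1);
    cudaEventElapsedTime(&kernel_ms, t1, t2);
    printf("copy: %.2f ms, kernel: %.2f ms\n", copy_ms, kernel_ms);

    cudaFree(d); free(h);
    return 0;
}

Make the kernel do N-squared work over that same 16 MB and the copy becomes noise; keep it linear and you'd have been better off never leaving the CPU.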

 #140696  by Lox
 Thu Sep 24, 2009 2:11 pm
I'd be interested to read the paper.

 #140697  by Kupek
 Thu Sep 24, 2009 2:27 pm
I submit it on Monday; I'll put it up after that.

 #140701  by Mental
 Thu Sep 24, 2009 5:32 pm
The name is terrible. I didn't make it up. The idea is not, however, to rope the GPU into being a CPU; it is to exploit the more parallel (at least conceptually) nature of per-pixel and per-vertex operations on the GPU to simplify certain kinds of processing (specifically processing related to geometry and texture data).

Don't take my word for it. Here...GPGPU techniques are already used all over the place.

http://en.wikipedia.org/wiki/Gpgpu

 #140705  by Kupek
 Thu Sep 24, 2009 6:05 pm
Mental, I was trying to softly say that this falls into my area of research, and consequently, my area of expertise.

The architectures for CPUs and GPUs are converging. Whether or not they will converge to the point that even high end systems only have a CPU/GPU hybrid remains to be seen. But what I can say for certain is that having to send data off-motherboard to the GPU limits using GPUs as general purpose (that's the first GP part of GPGPU) computing devices.

 #140706  by Mental
 Thu Sep 24, 2009 6:19 pm
This is true, and it may not be an area where I'm qualified to claim "expertise". I'm not qualified to do supercomputing comparisons, so I'll take your word for it. What I can tell you is that, at an "everyday" level, GPGPU techniques work; I've used them. I can't do speed metrics on tasks like the one in the article, but I can unequivocally say GPGPU enables a lot of techniques that weren't previously available if you're not optimizing for a CPU-intensive supercomputer. And I find the article's claims fairly credible. There are things the 360 does well.

 #140707  by Mental
 Thu Sep 24, 2009 6:24 pm
To put it another way: GPGPU techniques are not just for simulating CPU-intensive tasks. They are eminently useful in situations where one might want a practical "quick fix" or approximation, or in tasks that relate natively to image manipulation. I'm also of the opinion that they are useful to smaller development teams who need a rapid solution to certain tasks, even if that solution is not completely optimized.

Certainly, from a syntax standpoint, as a developer working hands-on with the technologies in use on the 360, I can tell you I'd rather be writing GPU code than CPU code any day of the week for image processing.
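To give a flavor of the difference, here it is in CUDA terms rather than the 360's HLSL (a generic per-pixel brighten; all names are mine): on the CPU you write the loops yourself, on the GPU you write the body for one pixel and launch it across the whole image.

Code:
#include <cuda_runtime.h>

/* CPU version: you iterate over every pixel yourself. */
void brighten_cpu(float *img, int w, int h, float amount)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            img[y * w + x] += amount;
}

/* GPU version: you write the per-pixel body once; the hardware
   runs it across the whole image in parallel. */
__global__ void brighten_gpu(float *img, int w, int h, float amount)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < w && y < h)
        img[y * w + x] += amount;
}

int main(void)
{
    const int w = 640, h = 480;
    size_t bytes = (size_t)w * h * sizeof(float);
    float *d_img;
    cudaMalloc(&d_img, bytes);
    cudaMemset(d_img, 0, bytes);       /* stand-in for a real image */

    dim3 block(16, 16);                /* one thread per pixel */
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    brighten_gpu<<<grid, block>>>(d_img, w, h, 0.1f);
    cudaDeviceSynchronize();

    cudaFree(d_img);
    return 0;
}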

 #140708  by Kupek
 Thu Sep 24, 2009 6:27 pm
You can attain incredible performance with GPUs; I never said otherwise. My point is that GPGPU programming is well suited to a certain class of problems, but poorly suited to others.

As for the original point about the 360: I'm sure it is a great little machine, but nothing in their study is unique to it. You can attain a similar speedup by hooking up a CUDA-enabled GPU to the computer sitting in front of you.

 #140710  by SineSwiper
 Thu Sep 24, 2009 6:43 pm
Kupek wrote:Mental, I was trying to softly say that this falls into my area of research, and consequently, my area of expertise.

The architectures for CPUs and GPUs are converging. Whether or not they will converge to the point that even high end systems only have a CPU/GPU hybrid remains to be seen. But what I can say for certain is that having to send data off-motherboard to the GPU limits using GPUs as general purpose (that's the first GP part of GPGPU) computing devices.
It was only a matter of time. I think I mentioned this before in another thread, but they can't just keep making GPU cards that take up half the case.

 #140716  by Zeus
 Thu Sep 24, 2009 8:33 pm
Kupek wrote:Mental, I was trying to softly say that this falls into my area of research, and consequently, my area of expertise.
Softly? Didn't John Doe teach you anything? "You can't just tap people on the shoulder anymore, you have to hit them over the head with a sledgehammer". Oh so very true.