My Master's work will be published in a conference
PostPosted:Fri Jul 08, 2005 11:10 am
by Kupek
And the conference is in Italy. Check out the picture on the front page:
http://www.hpcc05.unina2.it/default.asp?r=501
PostPosted:Fri Jul 08, 2005 1:39 pm
by Zeus
Wow, congrats man. Good work
PostPosted:Fri Jul 08, 2005 3:06 pm
by Nev
Are you going? And if so will you bring me back some good seafood pasta?
Just kidding. Gratz man. By the way, what was your thesis on? I never asked...I assume it had something to do with this whole high-performance computing thing. I'd love to have a discussion with you about it sometime, I could probably learn a bunch.
PostPosted:Fri Jul 08, 2005 6:34 pm
by Eric
My brothers shall not have died in vain!!!!
PostPosted:Sun Jul 10, 2005 6:02 pm
by SineSwiper
Nice. Is this something you've been working on all year?
PostPosted:Sun Jul 10, 2005 6:13 pm
by Imakeholesinu
*boom boom boom* HEADSHOT HEADSHOT HEADSHOT!!!
PostPosted:Sun Jul 10, 2005 7:01 pm
by SineSwiper
Come on, dude. Wrong thread.
PostPosted:Mon Jul 11, 2005 1:14 am
by Ishamael
Congrats, dude! Do you have a link to the dissertation?
PostPosted:Mon Jul 11, 2005 9:46 am
by Kupek
This is basically what I've been working on for the past year. Here's a link to the thesis itself; the conference paper is a shorter version of it. The thesis format requires double-spacing, which I find annoying to read, so I'll probably generate a single-spaced version eventually that's easier on the eyes.
http://www.cs.wm.edu/~scotts/thesis.pdf
PostPosted:Mon Jul 11, 2005 12:53 pm
by Nev
I am going to try to read this over the next week or so - looks interesting.
To those who say you need a master's in computer science to understand a thesis written by a master's-level computer science student, I say, bah!
PostPosted:Mon Jul 11, 2005 1:56 pm
by Zeus
Man, I know on/off, that's about it. So most of this is likely to go over my head. But I'll give it a shot when I have a bit more time.
PostPosted:Mon Jul 11, 2005 8:37 pm
by Kupek
I think it's understandable by someone with a decent grounding in CS. Even without a CS background, you can probably get a feel for what we're trying to achieve.
About the writing itself - you might notice the tone is a bit different from my normal writing style. For the thesis, I extended the paper we submitted to the conference I mentioned above; that paper had a lot added by my advisor and the other PhD I mention in my acknowledgements.
Don't worry if you get lost in the introduction. The gist is that we're moving towards processors that can execute multiple threads at once, so we need programming tools, languages and libraries to exploit that parallelism. (The rest of the introduction supports that assertion.) Then I explain what we're trying to achieve.
Feel free to skip the related work. If you're interested, read it last. I say this because the intended audience (other academics) will already be familiar with that work; most people here, however, won't be familiar with those other languages and libraries.
And if you get lost in the design, you might want to go through the programming examples first to get a feel for how it all fits together.
PostPosted:Mon Jul 11, 2005 9:00 pm
by Nev
Well, I'll be busy with my project (I have a feeling it may stretch out to another two months actually at this point though I'm rather hoping it doesn't), but I'll try to make time to get to it. I'd like to stay on top of the whole parallel-processing thing, since I've heard processor designers have hit some sort of physical limit on how fast individual processor cores can go. As a brief aside, does anyone else who's heard of this know what it's about? I seem to remember hearing that it was either heat-related or speed-of-light related, but I could be talking out of my ass on this one...
PostPosted:Mon Jul 11, 2005 10:25 pm
by Kupek
Previously, processors were made more powerful mainly by finding ways to increase the clock frequency. At the same time, you need to exploit instruction-level parallelism (ILP) to keep the processor's instruction pipelines full.
ILP takes advantage of the fact that not all instructions depend on each other. If a stream of instructions contains a load, that load must wait until the memory hierarchy (L1 cache, L2 cache, main memory, disk) can satisfy it, and that takes time. During that time, however, instructions that don't depend on the load can start executing. For another example: if you have a division, a multiplication and then a bunch of additions, and they don't share any data, you can start them all at the same time, so the processor's other functional units aren't idle while the division and multiplication complete.
Without these pipelines, even if the clock frequency is fast, the processor will sit idle most of the time as it waits for operations to finish.
Executing instructions in overlapping stages like this is called pipelining, and the deeper the pipeline, the more instructions can be in flight at once. However, the longer the pipeline gets, the greater the physical distance signals have to travel across the chip to carry the information.
Each stage in the pipeline must complete within a single clock cycle. As the clock frequency gets faster, there is less time for each stage. And there is a fundamental limit to how fast information can be transmitted - the speed of light. When you're dealing with a clock frequency of 3GHz and distances measured in nanometers, the speed of light can indeed become a limiting factor.
So: we increase performance by increasing the clock frequency, and we fully utilize the chip by having deep instruction pipelines. The fundamental problem is that as we deepen the pipeline and increase the clock frequency, information has farther to travel in less time. Eventually we hit a point where information can't travel any faster.
Enter SMT (Intel's implementation is Hyper-Threading) and CMP (commonly called <i>dual core</i>). If this trend does indeed continue, then even more than before, clock frequency won't be the most important factor in the performance of a chip.
PostPosted:Mon Jul 11, 2005 11:42 pm
by SineSwiper
Anybody know what clock frequency would be the physical limitation? It seems like it would be somewhat easy to calculate. Figured somebody would have written up an estimate on that.
PostPosted:Tue Jul 12, 2005 12:32 am
by Kupek
SineSwiper wrote:Anybody know what clock frequency would be the physical limitation? It seems like it would be somewhat easy to calculate. Figured somebody would have written up an estimate on that.
It's not that simple. The depth of the instruction pipeline (which determines the distance; longer pipelines mean information has to travel across greater distances) and the size of the transistors are also variables.
You could also let micro-operations take longer than a clock cycle, but as I understand it, this makes chip designs much more complex.
PostPosted:Tue Jul 12, 2005 12:47 am
by Nev
Something I've wondered (and I'm mostly talking out of my ass here) is whether or not one could use a third dimension to add more processing power to a chip. Since all current processors I know of are printed on silicon dies and so necessarily are in two dimensions, if you could somehow construct a way to have a fully three-dimensional chip design (where the current can flow "up" and "down" instead of just in two dimensions), could you have a more complex or more powerful chip design?
PostPosted:Tue Jul 12, 2005 1:03 am
by Garford
Quite an interesting read. I'll read it more in detail when I have the time.
UoW's focus has been on security and more security, so reading a thesis on something other than security is good.
PostPosted:Tue Jul 12, 2005 1:06 am
by Nev
Oooo, I want to talk to you, too. I want to learn about security from a university perspective as well...
PostPosted:Tue Jul 12, 2005 9:42 am
by Kupek
Mental wrote:Something I've wondered (and I'm mostly talking out of my ass here) is whether or not one could use a third dimension to add more processing power to a chip. Since all current processors I know of are printed on silicon dies and so necessarily are in two dimensions, if you could somehow construct a way to have a fully three-dimensional chip design (where the current can flow "up" and "down" instead of just in two dimensions), could you have a more complex or more powerful chip design?
I don't know. That's outside of my (basic) architecture knowledge. I have a computer scientist's understanding of a processor - you'd need to talk to an engineer.
Although the first thing that comes to mind is the feasibility of mass-producing something like that.
PostPosted:Tue Jul 12, 2005 9:50 am
by Agent 57
Kupek wrote:I don't know. That's outside of my (basic) architecture knowledge. I have a computer scientist's understanding of a processor - you'd need to talk to an engineer.
And looky looky, we got one right here! (I actually started writing the following post before you posted yours, Kup - it's nice the way things work out sometimes, isn't it?)
The answer to your question, Mental, is "I don't think so", because those silicon dies you're talking about are already three-dimensional in a way. (It's been a long time since my VLSI design course in college, but I'll see if I remember it well enough to explain.)
See, computer chips are constructed by putting layers of silicon substrate on top of each other and adding/removing electrons from areas of the different layers to create positive and negative areas - and thus transistors. Then, the designer puts different combinations of transistors together to create basic gates (AND/OR/NOT, etc.), and then different combinations of gates together to create more complicated circuits (multipliers, etc.).
Thus, the designer ends up with a "mask" which tells the die creator which layers/areas of the silicon are positive, negative, ground, etc. - it ends up looking like a huge mess of colored lines crossing each other a zillion times over. (Oh, and by the way, the necessary width of those lines is what people are talking about when they mention the "nanometer process" of the chip - i.e. in a 90-nm chip, a charged line is a minimum of 90 nm wide. Thus the smaller the process, the more transistors/gates/circuits you can fit on a chip.)
Anyway, the point is that computer chip silicon is already three-dimensional. Putting another set of layers on top of an existing silicon die is most likely well-nigh impossible - I have no idea how you'd connect them - but I will be the first to admit that this is not my area of expertise, so I don't know what advances have been made. However, the fact that they haven't started doing this yet is a good clue that it is in fact impossible, as I suspect.
PostPosted:Tue Jul 12, 2005 11:53 am
by Garford
Mental wrote:Oooo, I want to talk to you, too. I want to learn about security from a university perspective as well...
Nah, I'm literally the only research student in the faculty who isn't doing pure computer/network security stuff here. My research area is eBusiness applications - all that servlet, .NET bunch of nonsense.
The libraries/electronic archives are full of security material, so I inevitably end up on some security- or hashing-related thesis while searching for info, since security is a major component of eBusiness.
God knows how much HAVAL, MD5, etc. information I have ended up reading...
Think we can start gathering the geek force hidden in here and take over the world or something. We have nerds in way too many different technological fields, ranging from ISPs to engineering
PostPosted:Tue Jul 12, 2005 12:46 pm
by Nev
With Microsoft's legendary reputation for security and professionalism in their internet-based applications, I doubt you guys could do it....
Bwaaaaah!!!! (Monkeys fly out of Mental's butt) Bwaaahahahah.
There are days when I doubt Microsoft is even really trying anymore when it comes to security. If that's what your university specializes in, I dunno if you could take over the world, but I bet you could take over a few computers.
PostPosted:Tue Jul 12, 2005 10:38 pm
by Garford
Oops, I mean "hidden in here" as in, "hidden in the shrine".
Anyway, Microsoft is surprisingly doing very well when it comes to cost/performance/security etc. at the mid-high range of eBusiness/enterprise solutions, but that's another debate.
PostPosted:Thu Jul 14, 2005 2:14 am
by Ishamael
Interesting looking thesis. I probably won't have time to read the whole thing until this weekend.
Parallel processing is definitely the wave of the future. The only problem, of course, is that not all things are parallelizable. So I see limited gains in that area, with most of the advancements being pushed by game programming, where graphics-specialized hardware is commonplace, physics-specialized hardware is being developed, and AI-specialized hardware may become popular too.
Looking forward to reading the paper.
PostPosted:Thu Jul 14, 2005 9:22 am
by Kupek
Ishamael wrote:
Parallel processing is definitely the wave of the future. The only problem of course is, not all things are parallelizable.
Yup. The sort of work I did might still remain firmly in the domain of high performance computing. It's not necessarily worth the effort to multithread a typical desktop application, particularly with fine-grained multithreading. But all production OS kernels have been multithreaded for a while, so overall system throughput can still improve with the emerging hardware.
Ishamael wrote:So I see limited gains in that area, with most of the advancements being pushed by game programming where graphics-specialized hardware is commonplace, physics-specialized hardware is being developed and AI-specialized hardware may become popular too.
Hrm. I think you're severely underestimating the contributions of scientific and high performance computing, which have much higher computing requirements and have been using parallel processing for decades.
PostPosted:Thu Jul 14, 2005 10:48 am
by Nev
Garford wrote:Oops, I mean "hidden in here" as in, "hidden in the shrine".
Anyway, Microsoft is surprising doing very well when it comes to cost/performence/security etc at the mid-high range scale of eBusiness/enterprise solutions, but that's another debate
I hope you mean "doing very well" as in "selling a lot of copies". See my post on the board yesterday about a complete backdoor in their color management module...
PostPosted:Fri Jul 15, 2005 1:30 am
by Ishamael
Kupek wrote:
Ishamael wrote:So I see limited gains in that area, with most of the advancements being pushed by game programming where graphics-specialized hardware is commonplace, physics-specialized hardware is being developed and AI-specialized hardware may become popular too.
Hrm. I think you're severely underestimating the contributions of scientific and high performance computing, which have much higher computing requirements and have been using parallel processing for decades.
I've worked on and with some scientific computing apps. Most high-end computing these days is driven by the biological sciences (the field I worked in), whereas back in the old days it was physics. There are lots of parallelizable things in those areas.
But I guess I was referring to home systems really putting some of this stuff into overdrive. Graphics used to be the same way: for decades, only universities and government organizations utilized those technologies for things like World War III simulations. But then companies like 3dfx took them from those settings and put them into the mainstream. I expect the same thing to happen with specialized chip hardware in private industry. It's already happening with some of the new multi-core chips coming out.