Mike Swaim
Posts: 13
Nickname: swami
Registered: Apr, 2004
Re: The Holistic Approach to Software Engineering as a Way to Handle Complexity
Posted: Dec 2, 2010 6:46 AM
> Well, I did correct/clarify/narrow the scope a few posts later. My point, which I've made in earlier threads here and other places (and well documented by Denning and Dennis), is that we're not in new territory with these machines, and that we've not had much luck leveraging them in the past when the attempt was to use them in the niche venues to which they are suited.
Sure. Parallel architectures have been around since the '60s. Parallel desktop computers have probably been around since the late '80s. As for leveraging parallelism, I know of a couple of success stories from the '80s and early '90s.
> Assuming that we can leverage them in the "most user" common desktop, in a way that the "most user" will notice favorably, through some magic new language is a bold assumption.
I agree that it's a bold assumption, for a couple of reasons: 1) Most users aren't taxing their hardware anyway. 2) Many problems don't lend themselves to easy parallelization.
> Some have described uses (often functions within applications), well known, none of which amount to a hill of beans in terms of justifying the millions of common desktop cpu's that Intel/AMD need to ship. The next Win/Office release tap dance no longer does it.
Doesn't matter. They're already shipping it as pretty much the "standard" hardware. You're going to pick it up on the next hardware refresh, if you haven't already.
> The "most user" class of user has seen, and will continue to see, only marginal benefit from this variant of Moore's Law. There needs to be a Killer App for these processors, relevant to the "most user" space. Ain't happenin'.
Once again, it doesn't matter. They're already shipping it. Even the i3 processor is dual core. I would also point out that it provides more of a benefit than the handful of new instructions that Intel/AMD add every couple of years, which require new compilers (or hand-written assembly) to use at all.
> Hoping/believing that some new language/compiler will be created which leverages existing procedural source (Office, for instance), or even new code, onto these chips, with an attendant burst in performance, is kind of like believing in the Tooth Fairy. That ain't happenin' either.

I don't think that anyone's making that claim. (At least I'm not.) What I have seen are approaches to make parallelizing code easier.
> For those who wish to embrace multi-core coding and parallel/concurrent languages, yes there are known venues where it is happenin'. And it's server/OS stuff and number crunching.
Here I disagree. Desktop apps can (but won't always) benefit from parallel design, even on single-core machines.
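To make that concrete, here's a minimal sketch (names and timings are illustrative only) of the classic desktop pattern: a slow task runs on a worker thread so the "UI" loop keeps servicing events. Even on one core this helps, because the worker spends most of its time blocked on I/O (simulated here with a sleep).

```python
import queue
import threading
import time

results = queue.Queue()

def slow_task(name, delay):
    time.sleep(delay)              # stands in for disk or network I/O
    results.put(f"{name} done")

worker = threading.Thread(target=slow_task, args=("load", 0.1))
worker.start()

# The "UI" keeps polling for events instead of freezing until the
# worker finishes; on a single core the OS interleaves the two threads.
handled = 0
while worker.is_alive() or not results.empty():
    try:
        print(results.get(timeout=0.01))
    except queue.Empty:
        handled += 1               # pretend we serviced a UI event

worker.join()
```

The win here is responsiveness, not throughput; that's the sense in which a single-core desktop app can still benefit from parallel design.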
> I wonder how many posters actually read the Denning/Dennis essay?
I had read it a while ago, and found the Burroughs design interesting. Unfortunately, I suspect that it's largely irrelevant, thanks to the current state of the industry.