Summary
Java and C# are almost identical programming languages: boring repetition that lacks innovation. Where is the revolutionary programming language that will stop the redundancy?
The industry is spinning in circles, inventing one "curly brace" language after another - C++, Java, C#, ... The popular object-oriented programming languages of today are fatally reminiscent of the heyday of procedural languages: in the 1960s, a cluster of very similar languages (Fortran, PL/I, COBOL, Algol) dominated the IT business, until the advent of C changed the world. Finally, a language invented by programmers for programmers! C was a revolution; Java and C# are just evolution. Where is the C language of our times?
Hardly anybody will claim that Java or C# is a revolutionary programming language that changed the way we write programs. Especially the new kid on the block - C# - is so extremely redundant that one wonders why both Java and C# exist. Do we really need yet another curly brace language? Why would anybody want to explore a "novel" language that offers the same old set of language features? Yes, there are some minor differences between C# and Java: C# does not have an equivalent to Java's non-static inner classes; Java does not treat events and delegates as first-class members of a type; C# has user-defined value types. Fine! But the list of commonalities is significantly longer than the list of differences. C# borrowed a lot from Java - and vice versa. Now that C# supports boxing and unboxing, we'll have a very similar feature in Java. Have we desperately been waiting for autoboxing in Java? Certainly not. Perhaps we've been eagerly awaiting Java generics, which - surprise, surprise - look pretty much like generics in C#. Alas, if only they were the same! But no, no language designer can resist doing things a little differently, in order to demonstrate that his language feature is more convenient, more efficient, more powerful ... well, just better ... than the corresponding one in the "other" language.
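For a taste of how close the two have become: here is what generics and autoboxing look like on the Java side (a minimal, self-contained example; apart from surface spelling, the C# version would read almost the same):

    import java.util.ArrayList;
    import java.util.List;

    public class BoxingDemo {
        public static void main(String[] args) {
            // A generic collection - the declaration reads almost exactly
            // like the corresponding C# generic List.
            List<Integer> numbers = new ArrayList<Integer>();

            numbers.add(42);            // autoboxing: int -> Integer
            int first = numbers.get(0); // auto-unboxing: Integer -> int

            System.out.println(first);  // prints 42
        }
    }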
Is the existence of redundant languages any good? Does it help programmers in any way? Do we profit from having a choice between Java and C#? In contrast to the respective language designers, we do not believe that one language is superior to the other. The fact that Java and C# are so similar is tiresome, and the fact that they nevertheless differ makes it even worse.
Why is it that such redundant languages are invented and promoted? Is it because Java and C# are corporate languages? Okay, both corporations will instantly deny that their languages are proprietary. After all, C# is an ECMA standard and Java is enhanced through its community process. Yet, without Sun and Microsoft, neither language would exist.
Corporate languages are often developed under the pressure of competition, at the risk of shipping a new feature while it is still immature. Corporate languages carry around a lot of baggage because they must be backward, forward, or otherwise compatible with some piece of legacy code. Deficiencies once introduced can never be taken back. Think of the miserable Cloneable interface in Java, which does not allow you to call a clone() method; it's the standing joke in every Java class and the canonical example of a highly debatable interface. Or take the inconsistency between comparison and equality: we can support several sorting orders via comparators, but we can have only one notion of equality via equals(). Why? This is painfully inconsistent, causes quite a bit of headache at times, and will never be repaired, because it is so for "historical reasons". Such a tribute to legacy is typical of corporate languages; they grow and grow and become more and more unwieldy. Is C# any different? Slightly, but not fundamentally.
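To make the standing joke concrete, here is a small sketch (the class name is invented for illustration) showing both warts side by side:

    import java.util.Arrays;
    import java.util.Comparator;

    public class LegacyWarts {
        public static void main(String[] args) {
            // Wart #1: Cloneable declares no methods at all, so a Cloneable
            // reference gives you no way to call clone().
            Cloneable c = new java.util.Date();
            // c.clone();  // does not compile: clone() is protected in Object,
            //             // and Cloneable adds nothing to change that.

            // Wart #2: we can define arbitrarily many sorting orders ...
            String[] words = { "bb", "a", "ccc" };
            Arrays.sort(words, new Comparator<String>() {
                public int compare(String s1, String s2) {
                    return s1.length() - s2.length(); // order by length
                }
            });
            // ... but only the one notion of equality baked into equals().
            System.out.println(Arrays.toString(words)); // [a, bb, ccc]
        }
    }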
In essence, Java and C# feel like PL/I and its kin: PL/I was an attempt to combine the best features of Fortran, COBOL, and ALGOL, and was developed by George Radin of IBM in 1964. Java was an attempt to combine the best features of C++ and Smalltalk, and was developed by James Gosling of Sun Microsystems in 1995. C# is an attempt to combine the best features of C++, Visual Basic, and Java, and was developed by Anders Hejlsberg of Microsoft in 2000. Will the next language be an attempt to combine ... ?
In the early 1970s the dominance of PL/I and its look-alikes ended with the advent of C. The C programming language was anything but a deliberately invented corporate language built from combinations of existing ideas. Quite the opposite: C was a by-product of the Unix development, created on a tiny machine as a tool to improve a meager programming environment. It covered the essential needs of many programmers at the time without trying to supply too much. It was successful to an extent far surpassing any expectations, and it changed the way programs were written.
Will there be a revolutionary language like C again - invented by programmers rather than corporations? A language that will be fun and exciting to use - free of baggage, convenient, consistent, comprehensible? What would it look like? Do we need such an innovation, or are we happy anyway and cannot even think of a language better than Java or C#?
Maybe you're already sick of hearing about it, but I'd warmly suggest you test-drive Python (or Ruby, or god forbid, even Perl ;-). It's not exactly revolutionary as far as programming language theory is concerned, but neither was C.
If you are going to argue that a programming language "revolution" is needed, I would like to hear more about what is wrong with the languages of today (other than they are "not revolutionary") and more ideas for what future languages could do that would make them truly superior. To make the next step we need a paradigm shift that will break us out of the "curly brace" confines and bring programming to the next higher level of abstraction. Are ideas such as dynamic typing or aspect-oriented programming revolutionary?
Richard Gabriel has some interesting thoughts (http://www.dreamsongs.com/Feyerabend/Feyerabend.html) but they seem to be in the realm of the distant future. On the other hand, Paul Graham has some good ideas (http://www.paulgraham.com/hundred.html) but they don't appear drastically different from the 20th-century paradigm of programming. What will allow the next revolutionary language to bring a truly higher level of productivity? I think one possibility is the integration of persistence into the core of the language, so that "memory" transcends the lifetime of the process and the program "runs" indefinitely.
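To give that idea some shape: today the effect can only be approximated by hand, for instance with Java serialization. A minimal sketch (class and file names are invented; a language with persistence in its core would make all of this plumbing implicit):

    import java.io.*;
    import java.util.HashMap;
    import java.util.Map;

    public class PersistentMemory {
        public static void main(String[] args) throws Exception {
            File store = new File("memory.ser");
            Map<String, Integer> memory = load(store);

            // The program's "memory" survives across runs: each invocation
            // picks up where the previous one left off.
            Integer runs = memory.get("runs");
            memory.put("runs", runs == null ? 1 : runs + 1);
            System.out.println("Run number " + memory.get("runs"));

            save(store, memory);
        }

        @SuppressWarnings("unchecked")
        static Map<String, Integer> load(File f) throws Exception {
            if (!f.exists()) return new HashMap<String, Integer>();
            ObjectInputStream in = new ObjectInputStream(new FileInputStream(f));
            try { return (Map<String, Integer>) in.readObject(); }
            finally { in.close(); }
        }

        static void save(File f, Map<String, Integer> m) throws Exception {
            ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f));
            try { out.writeObject(m); }
            finally { out.close(); }
        }
    }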
> I would like to hear more about
> what is wrong with the languages of today
Of all the activities imaginable, computer programming is the one in which computers _should_ have the greatest productivity impact. And yet compared to activities that have enjoyed huge productivity gains via computers in the past 20 years (say, image manipulation), the productivity gains in computer programming are trivial. Give a 500MHz P3 to one professional graphics designer and a 3GHz P4 to another and compare their productivity: you _will_ see a productivity difference, because this is a task / profession which has managed to leverage the computer itself. Give the same disparate hardware to two comparably talented programmers and what productivity difference will you see? None, or so little difference as to be immeasurable.
Similarly, give two designers the current feature set of a preferred professional tool (let's say, Photoshop) and the feature set of that tool 5 years ago, and you'll see a difference. In programming? Doubtful (with the notable exception of a refactoring IDE such as IDEA).
More concretely, "computer programs" are almost invariably viewed as a series of linear text streams that are converted in some way into machine instructions in some O(lines of source code) manner. 99+% of the world's code is written imperatively. Object-orientation, which is almost universally accepted as the preferred structuring mechanism for software systems, has turned out not to be universally superior for learning, comprehension, or reuse.
Persistence, business rules, interfaces: all are areas in which the way we specify, create, and maintain systems _invariably_ trades off productivity against maintainability. If you want to do things fast, you might have some chance to use a tool that presents the problem as something other than a text stream (i.e., you might be able to use a visual builder). But those tools _invariably_ create overly-coupled representations of the solution.
Pattern matching is absolutely fundamental to human problem-solving, but where's the computer-support for pattern matching in the task of software development? That is, why can't a programming language _leverage_ the fact that the vast majority of computer programs are built from examples?
Test-driven design and functional programming: The whole world of TDD is based on either minimizing side-effects or making them explicit. Well, if you program side-effect free, you should have programmatic support, i.e., use a functional language, which has all sorts of implications for behind-the-scenes implementation. And if you _rely_ on side-effects, the world of unit-testing tools is in conflict with language provisions for visibility (although in .NET, at least, you can get around visibility with sufficient security permissions).
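A tiny illustration of the point (names invented for the example): the side-effect-free function needs no fixtures and no visibility tricks to verify, while the side-effecting one immediately collides with them:

    public class PureVsImpure {
        // Side-effect free: the result depends only on the arguments,
        // so a test is one assertion - no setup, no teardown, no mocks.
        static int add(int a, int b) {
            return a + b;
        }

        // Relies on a side effect: verifying it means peeking at hidden
        // state, which is exactly where visibility rules fight the tests.
        private static int total = 0;
        static void accumulate(int amount) {
            total += amount;
        }

        public static void main(String[] args) { // run with: java -ea PureVsImpure
            assert add(2, 3) == 5;  // trivially checkable from outside
            accumulate(5);
            assert total == 5;      // only checkable because we're *inside* the class
        }
    }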
Typing, multithreading, resource management: In all of these areas, there's an enormous gap between standard and best practices. Just as managed memory and built-in exception mechanisms are for most programmers effective "solutions" to common problems, these things should be part of the development / deployment infrastructure.
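To put the resource-management gap in concrete terms: the best practice below is well known, yet routinely skipped in production code - and it is precisely the kind of ceremony that, like memory management, arguably belongs in the infrastructure rather than in every programmer's hands (a sketch; the file name comes from the command line):

    import java.io.FileInputStream;
    import java.io.IOException;

    public class ResourceIdiom {
        public static void main(String[] args) throws IOException {
            // Best practice: acquire the resource, then guarantee its
            // release in a finally block. Standard practice, sadly, is
            // to omit the finally and leak the handle on any exception.
            FileInputStream in = new FileInputStream(args[0]);
            try {
                int b;
                while ((b = in.read()) != -1) {
                    System.out.write(b);
                }
                System.out.flush();
            } finally {
                in.close();  // runs whether or not read() threw
            }
        }
    }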
Sheesh, I haven't even started on the issue of multiple representations and semi-structured data....
Two of his most compelling suggestions, to me at least, are a language that handles XML as a first-class datatype, and a (to-be-determined) language for distributed systems. Both are hot topics right now, and neither is handled well by existing solutions. XSL and DOM are pretty unwieldy, and Java isn't that big an improvement over solutions from C/C++, apart from the ability to download proxy code.
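To see just how unwieldy, consider what Java's DOM API demands merely to read the text of one element (a sketch assuming a file book.xml whose root element contains a <title> child):

    import java.io.File;
    import javax.xml.parsers.DocumentBuilder;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class DomChore {
        public static void main(String[] args) throws Exception {
            // Three factory/parse steps before we can even see the document ...
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            DocumentBuilder builder = factory.newDocumentBuilder();
            Document doc = builder.parse(new File("book.xml"));

            // ... and a node-level excavation to extract a single string.
            Element root = doc.getDocumentElement();
            Element title = (Element) root.getElementsByTagName("title").item(0);
            System.out.println(title.getFirstChild().getNodeValue());
        }
    }

A language that treated XML as a first-class datatype could plausibly collapse all of that into a one-line expression - which is roughly what the "first-class" argument is about.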
Selecting two languages that are noteworthy primarily for their popularity, and then complaining that there is no innovation, is like looking at the NY Times paperback bestseller list and complaining there is no great literature.
When you ask, "what next?", are we to suppose that Java and C# represent the highest rung in a steady, linear evolution of languages? Users of Lisp, Smalltalk, Haskell, Ruby, Eiffel, Dylan, among others, might say otherwise.
Might be better to view Java and C# as dead ends on the evolutionary tree, take a step back, and see what else is already out there.
> compared to activities that have enjoyed huge
> productivity gains via computers in the past 20 years
> (say, image manipulation), the productivity gains in
> computer programming are trivial.
This is due to a fundamental difference in the nature of programming vs. hardware. Fred Brooks argues that there can be no foreseeable order-of-magnitude improvement in productivity, because of the sheer complexity of the programming task: capturing all the requirements that satisfy human needs, not simple laws of nature. His paper is from 1986 but still quite pertinent.
However, I do agree that significant, if not revolutionary, progress can be made. What I would like is to hear more ideas about not only what specifically is wrong today, but also how exactly new languages could overcome these deficiencies in the foreseeable future.
> Typing, multithreading, resource management: In all of
> these areas, there's an enormous gap between standard and
> best practices.
Can you provide any information as to what these best practices are?
Maybe Eiffel is worth considering. Although it's maintained by a company, its inventor, Bertrand Meyer, is a real programmer. And it has many excellent ideas, such as Design by Contract.
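For readers who haven't met it: Design by Contract lets you state preconditions, postconditions, and class invariants in the code itself, and Eiffel checks them at runtime. In Java the closest approximation is the assert statement - a pale shadow, but enough to show the idea (a sketch with an invented Account class; run with java -ea):

    public class Account {
        private int balance = 0;

        // Eiffel would state this contract declaratively, e.g.
        //   require amount > 0
        //   ensure  balance = old balance + amount
        // In Java we can only emulate it with assertions.
        public void deposit(int amount) {
            assert amount > 0 : "precondition: amount must be positive";
            int oldBalance = balance;
            balance += amount;
            assert balance == oldBalance + amount : "postcondition violated";
            assert invariant() : "class invariant violated";
        }

        private boolean invariant() {
            return balance >= 0;  // the account may never go negative
        }

        public static void main(String[] args) {
            Account a = new Account();
            a.deposit(100);      // satisfies the contract
            // a.deposit(-5);    // would trip the precondition assertion
            System.out.println("contract holds");
        }
    }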
> Maybe Eiffel is worth considering.
> [..] it has many excellent ideas,
> such as Design by Contract.
Yes, but Eiffel has been around almost as long as C++. As a previous post noted, there have been many languages with productivity-enhancing features over the years: Lisp/Scheme, Smalltalk, Objective-C, ML/SML, Haskell, Tcl, Ocaml, Dylan, Eiffel, Python, Ruby, Lua ... to name just a small sample. Except for Python and at one time Tcl, few have even approached the success of C, C++, Java, C# or Perl.
Why, I'm not entirely sure. Maybe they were too far from what was considered a "normal" language: "funny syntax" (do ... end instead of {...}), "unusual" paradigms (functional programming or multimethods), or capabilities too similar to other languages' (Obj-C vs. C++, Ruby vs. Python vs. Perl). Or maybe languages have to piggyback on another success: UNIX came with C, C++ was the better C, Java was the better C++ for the Web, C# is the better Java for .NET, and so on. (Perl, of course, combined several UNIX tools, with applications for sysadmins and web developers.)
In any case, the "next C" probably won't win on cool features or technical superiority. I doubt it will even win due to some marketing blitz from a big company. Like C or Perl, it will be something that sneaks in under the door, looks familiar and unthreatening, and solves some near-universal need that nobody even recognized until it came along.
> Except for Python and at one time Tcl, few have
> even approached the success of C, C++, Java, C# or
> Perl.
BTW, I'm intentionally leaving out Lisp and Smalltalk, as well as COBOL or Fortran. We're considering "the next C", and while all those languages had huge followings at one time, the number of C programmers eventually overwhelmed them.
> Will there be a revolutionary language like C again -
> invented by programmers rather than corporations? A
> language that will be fun and exciting to use - free of
> baggage, convenient, consistent, comprehensible? What would
> it look like?

Such a language does exist: Ruby! It really brings back the fun to programming. Give it a try (http://www.ruby-lang.org).
> Do we need such an innovation, or are we
> happy anyway and cannot even think of a language better
> than Java or C#?

The pragmatic programmers say: "Learn a new programming language every year", but I think the problem is that many people are not willing to do this. It's the old "I've done this successfully for years. It cannot be easier or better!" attitude. Additionally, companies like Sun and Microsoft promote their own old-fashioned languages the way other companies sell cars or cold beverages. It's hard for the new kids on the block to compete ...
> Of all the activities imaginable, computer programming is
> the one in which computers _should_ have the greatest
> productivity impact. And yet compared to activities that
> have enjoyed huge productivity gains via computers in the
> past 20 years (say, image manipulation), the productivity
> gains in computer programming are trivial. Give a 500MHz
> P3 to one professional graphics designer and a 3GHz P4 to
> another and compare their productivity: you _will_ see a
> productivity difference, because this is a task /
> profession which has managed to leverage the computer
> itself. Give the same disparate hardware to two comparably
> talented programmers and what productivity difference will
> you see? None, or so little difference as to be
> immeasurable.
Yup. That's because graphics is in many ways a production environment. What drives increases in programmer productivity is conceptual advances. Few people would deny that C gives a large increase in productivity over assembly; similarly, few would deny that Python and other scripting languages give productivity improvements over C. Of course, the Lispers will proudly point out that Lisp has the same advantages over C despite being almost two decades older -- the problem with Lisp is that it requires conforming your brain to Lisp, rather than the other way around.
Consider this: what improvement does a fiction writer get from switching from 500MHz to 3GHz?
> Maybe you're already sick of hearing about it, but I'd
> warmly suggest you test-drive Python (or Ruby, or god
> forbid, even Perl ;-). It's not exactly revolutionary as
> far as programming language theory is concerned, but
> neither was C.
I have to agree with Jarno. Python is older than Java and C# but still better. "Better" here means: easier to learn, more expressive, both procedural and object-oriented. I program almost exclusively in Java now, but I find it much too verbose compared to Python. The real difference is that Java and C# are better supported by industry, so they have more IDEs and tools.
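A mundane illustration of that verbosity gap (file name invented): printing the lines of a text file takes a dozen lines of ceremony in Java, where Python needs two:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class PrintLines {
        // The Python equivalent is essentially:
        //     for line in open("data.txt"): print line
        public static void main(String[] args) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader("data.txt"));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    System.out.println(line);
                }
            } finally {
                reader.close();
            }
        }
    }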
> This is due to a fundamental difference in the nature of
> programming vs. hardware. Fred Brooks argues that there
> can be no foreseeable order-of-magnitude improvement in
> productivity, because of the sheer complexity of the
> programming task: capturing all the requirements that
> satisfy human needs, not simple laws of nature. ...
>
> http://www-inst.eecs.berkeley.edu/~maratb/readings/NoSilverBullet.html
A true classic, but I don't agree with your interpretation. I think Brooks makes an irrefutable point that we will never see single-cause order-of-magnitude leaps in software productivity or doubling-every-year productivity gains over the course of any extended period. But I don't read it as saying that there are fundamental limits to software productivity. Surely the productivity represented by "<HTML><HEAD><TITLE>A Web page</TITLE></HEAD><BODY>Hello, Anyone!</BODY></HTML>" (and, absolutely, the infrastructure supporting what it represents!) is several orders of magnitude beyond 1968 productivity rates.