Vorlath is down on GC:
Nowadays, there is a lot of talk about memory management and garbage collectors. I personally think that garbage collectors are a bad thing. If you can't keep track of your data to delete it then how are you going to track it to do something useful? It amazes me that people would say that programmer time would be better spent on more useful tasks. What can be more useful than proper management of your data? This is laziness plain and simple. It's like anything else in programming like checking error codes, trapping exceptions, using proper syntax, learning new APIs, etc. No one wants to do it. But just because you don't want to do it doesn't mean it shouldn't be done.
Lazy? Hardly. GC is simply better for the vast majority of applications. Why is that? As you build a larger application, you'll have a tremendous number of objects flying around in and between modules. The management burden keeps growing, because proper encapsulation gets in the way of the global knowledge you'd need to manage all that memory by hand.
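To make that concrete, here's a rough C++ sketch of the problem (every name in it is made up purely for illustration): one module hands out an object, another caches it, and nothing in either interface tells you who is supposed to delete it.

// Hypothetical example - these types and functions aren't from any real
// project; they only illustrate the ownership ambiguity.
#include <string>
#include <vector>

struct Report {
    std::string title;
};

// Module A: hands out a Report. Nothing in the signature says who owns it.
Report* buildReport(const std::string& title) {
    return new Report{title};
}

// Module B: caches reports it has seen. Does it own them now? The header
// can't tell you; only "global knowledge" of both modules can.
class ReportCache {
public:
    void remember(Report* r) { seen_.push_back(r); }
    ~ReportCache() {
        // Only safe if no other module also deletes these pointers.
        for (Report* r : seen_) delete r;
    }
private:
    std::vector<Report*> seen_;
};

int main() {
    ReportCache cache;
    Report* r = buildReport("Q3 numbers");
    cache.remember(r);
    // If this function also deletes r, that's a double free; if nobody
    // does, it's a leak. The information you need is hidden by design.
}

Under GC, the cache just holds references and the question never comes up.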
So what ends up happening? Developers in large C/C++ projects end up building their own half-baked GC. I say "half-baked" because it almost certainly isn't going to be as efficient as the systems in Smalltalk, Lisp, C#, or Java - those were all built by people with deep knowledge of the field. It's not that application developers are stupid (heck, I'm one!) - it's just that GC is not inside the problem domain they work with. Why Vorlath wants to make it one is a mystery. Here's what he says:
Whenever I hear someone say that garbage collection saves time, I know these are beginners. It's not an insult. You just have to keep practicing. If deleting your data is slowing you down, chances are you need more experience. Now, there's a difference between someone who absolutely needs a garbage collector and someone who organises his data using the stack or automatic pointers. These are two different areas completely. I personally don't even think about deleting code. It's a normal part of programming that doesn't take any more or less time than anything else. And you know why I don't even think about it? Because it's all part of managing your data. If your data is organised, it's not even an issue.
Maybe if you have something close to a photographic memory, you can keep track of whose responsibility it should be (and when) to kill off a given object. Not being able to track such things isn't "laziness" - it's a matter of getting overwhelmed by complexity. I'll make a simple-minded analogy - chess is a relatively simple game, but - at any given moment - there are tons of possible moves, and the possibilities expand as you also consider the possible moves of your opponent. There are only a handful of masters who can play with (and sometimes beat) software specifically built to track all of those possible moves (and give valuations to them). Are the rest of us lazy because we simply can't track as well as a master, or the custom software?
In Vorlath's world, I guess so. Where does this sort of thing take him? Well, he gets to that:
There's something else I want to discuss and it's having programs continue to run after their state has been corrupted. I, for one, would rather it come crashing down instantly. That way, I can fix it right away. Having software keep executing with a corrupt state where you can't easily trace the problem is not advancing the way we write software. This is a step backwards. If you absolutely need your software to keep running, have redundant systems.
If you free a pointer at the wrong time (or free it twice), you can end up with corruption and have no idea where that corruption came from - so an immediate crash really is the best result you can hope for. If you use GC, you can have memory leaks (because you hold strong references to objects you shouldn't), but the sort of corruption possible with raw pointers won't come up - unless we're talking about GC bolted onto a language with pointers. In a fully managed language, you won't get that.
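Here's a small, deliberately broken C++ sketch of what I mean (again, the names are invented and the behavior is undefined - which is exactly the problem): the bad free happens in one place, and the damage shows up somewhere else entirely.

// Hypothetical example of a stale pointer corrupting unrelated memory.
// This is intentionally undefined behavior; results will vary by run.
#include <cstring>
#include <iostream>
#include <string>

struct Session {
    char user[16];
};

int main() {
    Session* s = new Session;
    std::strcpy(s->user, "alice");

    delete s;   // somebody decides the session is finished here...

    // ...the allocator may hand those same bytes to an unrelated object...
    std::string* note = new std::string("important note");

    // ...and the stale pointer quietly scribbles over it. No crash at the
    // point of the bug - just a corrupted heap and a mystery failure later.
    std::strcpy(s->user, "mallory");

    std::cout << *note << "\n";   // might print garbage, crash, or "work"
    delete note;
}

The GC equivalent of this mistake is a container that keeps a strong reference longer than it should - annoying, and a real leak, but it doesn't silently rewrite someone else's memory.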
What Vorlath really wants is a small priesthood of experts - but even there, he's unrealistic:
There is a frightening trend going on where people graduate college or university and don't know how a computer works, yet they have their CS or Computer Engineering degree. A clear understanding of how memory works, paging and protection are critical. Also critical is a good understanding of the stack and different calling mechanisms. This will explain why certain languages are the way they are, especially C.
The problem with C (the one he's pointing to, anyway) comes from the hardware knowledge and assumptions of its original designers. It was built to be a high-level assembler, shaped by what they had learned from the hardware they were familiar with. If I'm writing an RSS aggregator, I don't need to know or care how memory works at the hardware level. I merely need to let people know what kind of minimums the application expects. The stack and different calling mechanisms? Those can differ across hardware and languages - how specific is this priesthood going to be? This guy's vision of the field is pretty darn blinkered.