Summary
In 25 years or so, we'll look at the current morass as only a small step above assembly-language programming. Here's what I think programming will be like then.
Further in the future than what I'm describing here, our IDEs might just hack into our brainstems to produce a Matrix-like world within which we manipulate programming ideas. There are a number of breakthroughs required for that to happen, and breakthroughs are difficult to predict, and their fallout is even more difficult to know.
Here's how far forward I can see now. Most of these points really just address problems that we have now, so I'm not being particularly clairvoyant. But for some reason humans seem to accept their problems and limitations. Perhaps it's just because of the effort necessary to change, and the fact that the toys we have now are still somewhat new, so we're transfixed by how shiny they are, not noticing the problems they don't solve and, worse, how they limit us. I encountered this at the beginning of my career, programming hardware in assembly language while yearning to move to so-called "higher-level languages" like C. There was a lot of suspicion and resistance, and that may be the throttle, the limiting factor, when trying to move the technology of programming forward, rather than the technology itself. (You could also argue that the suspicion and resistance are learned, from too many companies trying to make a quick buck by promising magic.)
There is a significant and growing class of programming problems that static languages can't solve. Programming is going to become more and more dynamic. This doesn't mean that code analysis tools are not valuable; far from it. But we will discover (are discovering) that dynamic analysis is more powerful than static analysis, so things will continue to move in that direction.
Objects will manage their own processes; right now we call these "active objects" but in the future this idea will be incorporated into the basic idea of an object (you can't really say that an object models reality unless it has its own process).
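To make the "active object" idea concrete, here's a minimal sketch of what it might look like in today's Python. The ActiveObject base class and the Logger subclass are my own hypothetical names, not anything from this article: each instance owns a private thread and a mailbox, and callers simply send it messages.

    import queue
    import threading
    import time

    class ActiveObject:
        """Each instance owns its own thread and processes messages one at a time."""
        def __init__(self):
            self._mailbox = queue.Queue()
            self._thread = threading.Thread(target=self._run, daemon=True)
            self._thread.start()

        def send(self, message):
            # Non-blocking: just drop the message in the object's mailbox.
            self._mailbox.put(message)

        def _run(self):
            while True:
                self.receive(self._mailbox.get())

        def receive(self, message):
            raise NotImplementedError

    class Logger(ActiveObject):
        def receive(self, message):
            print("logged:", message)

    log = Logger()
    log.send("hello")   # returns immediately; the object's own thread does the work
    time.sleep(0.1)     # give the worker thread a moment before this demo exits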
Even if Moore's law gets a reboot from new technology, we've started down the path of multiple cores and farms of machines, and that won't stop. There are lots of problems for which, as helpful as a faster single processor might be, a bunch of fast processors will be better. And I would also argue that tons of processors is a better model of the world.
The problem is that currently, parallelism is virtually impossible to get right (I won't re-argue this here; I've done it elsewhere). While it's theoretically possible that a handful of experts exist who can deal with some level of concurrent complexity, there is always a limit to what those people can manage. And they are rare, while parallelism is becoming common.
What I mean by "stupidly parallel" is that a brand-new student programmer can create objects that run in parallel without bugs, while knowing little or nothing about parallel programming. It will just be part of the atmosphere of programming that parallelism happens and you don't have to tie your brain in knots to get it right.
Eventually the idea that we have to "store something to disk" will go away; the difference between memory and disk will be seen as an arbitrary artifact of the past. You'll just have data, and the data will be accessible for as long as you need it. It will live on your machine and in the cloud, as necessary, and you won't have to think about whether it's being garbage collected or swapped.
You'll just make objects and use them, and the objects will contain data as necessary.
This is really just a continuation of the previous point. Programmers won't have to think about storage, period. Right now there's a lot of time spent on issues that should be automated, and this is one of them.
Tests are a form of program analysis. We know that test coverage produces better programs, but we're still on the leading edge of how to create tests. JUnit has traditionally required too much boilerplate and rote work, and that has slowed the test creation process (I like Python's Nose, which uses reflection to minimize coding).
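For comparison, here's roughly what that low-ceremony style looks like today. With Nose (or the similar pytest), any plain function whose name matches the test pattern is discovered by reflection and run: no test class, no registration, and a bare assert instead of assertion boilerplate. The add function below is just a made-up example.

    # test_math.py -- discovered and run automatically by "nosetests" (or "pytest")
    def add(a, b):
        return a + b

    def test_add():
        assert add(2, 3) == 5

    def test_add_negative():
        assert add(-1, 1) == 0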
Eventually we'll be able to create tools that recognize patterns in the code you write, and automatically apply tests to those patterns. The only tests you'll have to write are the ones that aren't in the database, and when you write them they'll go into the database so no one has to write them again.
The reason I call it "swarm" is that I imagine it like a swarm of bees. You create an object (which I think will still be the basic granularity of component in the future; it will be much more than today's objects but we'll still think in terms of objects), and the tests will swarm over that object looking for places to apply themselves (the same way that chemicals look for the appropriate receptors in your brain). You'll only need to add tests for the empty spaces. Testing will become faster, more thorough, and far more automated.
I initially wrote about this idea in a science fiction story. A robot receives ideas visually, by signals passing through the eyes (no physical contact). The ideas -- new code, basically -- pass into a kind of limbo where they are analyzed for suspicious content. I think not only would swarm testing come into play, but also logical testing: checking whether an object does what it says it will do. Only after it is thoroughly tested will it be incorporated into the program.
Note that this not only helps in system integration for your own code, but allows much greater use of off-the-shelf components as well.
Data stores, too, will just become another kind of object, integrated into the objects where they are used. In confluence with the aforementioned transparent storage, data stores are just objects where you can store data and later ask for it. You won't have to re-code every time you need a new one; you'll just slap it in without a lot of effort. In addition, you won't have to think ahead of time about scaling issues for your data store. There will be no rewrite when going from a small data store to a large one. Data stores will become a non-issue.
The point of data storage is that sometime later we want to query it. But we get lost in the details of how it should be structured rather than keeping this fundamental idea in mind.
Progress has been made in this area. The transition from hierarchical data to SQL was a big step. Programming systems like LINQ and SQLAlchemy have provided better abstractions to separate the programmer from the underlying structure of the data. You're able to make a query that the engine dynamically optimizes. Who knows, in the future querying your database efficiently might end up being as easy as doing a Google search.
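To make the current state of the art concrete, here's a hedged sketch of that separation using SQLAlchemy's declarative ORM (assuming SQLAlchemy 1.4 or later; the User model and the in-memory SQLite database are placeholders of mine): the query is written against Python objects, and the engine decides what SQL to emit.

    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import declarative_base, sessionmaker

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"                  # table layout declared once, in Python
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite:///:memory:")  # swap the backend without touching queries
    Base.metadata.create_all(engine)

    Session = sessionmaker(bind=engine)
    session = Session()
    session.add(User(name="Ada"))
    session.commit()

    # The query is expressed against objects; the engine generates and optimizes the SQL.
    adas = session.query(User).filter(User.name == "Ada").all()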
The future object incorporates the concept of "component" and will be the basic unit of code. It manages its own processes and comes with its own swarm of tests. The interface between components and other components (thus, components and systems) becomes universal. Adding a new component to a system becomes as easy as adding ingredients to a recipe. Component discovery will have its own search engine.
I think you'll be able to throw components into a soup, and they'll wire themselves together, negotiating or asking for advice when two components have overlapping functionality. The result will just be a larger object/component, that you'll either use in a bigger system or as a standalone application. Naturally, swarm testing applies at all levels.
UIs are stored and categorized in a way that makes it easy to search for them. UIs capture and display data, so the intermediate connection to the main program is a data store, which that program queries and updates. This is basically a more sophisticated form of MVC, but without miring you in all the redundant low-level boilerplate every time you create a UI -- there's no reason we should have to repeat ourselves constantly this way, and no reason that programmers shouldn't benefit from great UIs without having to be UI experts.
Most of the time you'll be able to select UIs from a list and paste them into your system. If you need to make a new one, or modify an existing one, the result becomes a new entry in the UI store.
This follows from the other features that I've described above, but it's worth emphasizing. When you write a program, it will work in the small and in the large without modification. We currently spend way too much effort on scaling issues, which should be transparent, and eventually it will be.
The collection of features described above solves problems of program complexity, especially those that appear when a program gets large and sophisticated. Programming in the future will eliminate the concept of scale, so that adding features to an existing program will be unaffected by how big or complex that program is.
Actually I think programming 25 years from now will not be very different from what we see now. Argument: not much has changed in the last 25 years, and there is even less reason to suspect a change in the future as the field matures.
Thanks for a thought-provoking article! I have some comments and questions:
- I really like the idea of stupidly parallel objects. If objects are "active" by default, should the compiler or the interpreter have to figure out the implicit synchronization points in the program (e.g. when the return value of a method call is needed) and insert them on our behalf? Is that always possible, or will we need to provide the developer with some explicit, hopefully high-level, constructs to do this? (See the sketch after this list.)
- There's no reason indeed (other than performance, but then again we are looking into the future here) why a stupidly parallel object shouldn't also be stupidly persistent. Serialize the object (locally or even on the cloud) as soon as it is created, and transparently (and concurrently, of course) keep the states in sync, a la Google Gears. Every application will end up with a default ability to save and restore its session, i.e. all of its objects. Imagine this applied to an entire operating system...
- The "swarm" idea seems also pretty feasible to me. Each "bee" is basically a stupidly parallel aspect, looking to apply itself, again concurrently, on a given pointcut. You could use it, as you mentioned, for testing or security purposes, but I guess also for the transparent serialization process in the bullet above.
- The seamless data store stuff looks like a natural consequence of the transparent serialization. The notion of public or private modifiers could be extended to mean whether you are willing to share your (serialized) object with other applications. I foresee "Windows registry hell" if we don't do it right, though! ;)
- Query-based data is already here, with frameworks like Rails or Django, and there's no reason why the seamless data store couldn't be queried the same way.
- What I'm more skeptical about is the effortless system integration part. We've been researching this stuff since forever (blackboards and tuple-spaces, multi-agent systems, semantic web services, etc.) and we're nowhere close to an environment where pieces find each other and "just fit" syntactically and semantically. What I predict is that in 25 years this will be as hot a research topic as it has been in the past 25 years, and will keep plenty of researchers busy. ;)
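On the synchronization-point question in the first bullet above, here's a minimal sketch of how this already shapes up with today's futures (the Counter class and the single-worker executor are my own hypothetical example): method calls return immediately with a Future, and the only synchronization point is the moment you actually ask for the result.

    from concurrent.futures import ThreadPoolExecutor

    class Counter:
        """An ordinary object whose methods run on its own private worker thread."""
        def __init__(self):
            self._value = 0
            # One worker thread means calls are serialized: no locks needed.
            self._worker = ThreadPoolExecutor(max_workers=1)

        def increment(self):
            # Returns a Future immediately; the caller is not blocked.
            return self._worker.submit(self._increment)

        def _increment(self):
            self._value += 1
            return self._value

    c = Counter()
    futures = [c.increment() for _ in range(100)]
    # The implicit synchronization point: we only block when we need the value.
    print(futures[-1].result())   # 100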
I just wonder who will develop and implement these things when, as you basically imply, everything will be easier, and younger programmers will therefore know a lot less about what's needed. And it's already happening.
Great topic! I think visualizing the code of a large project is a major challenge that will be tackled in the future. The aim of the visualization should be to let a new programmer rapidly become familiar with the project.
Dynamic vs. Static: Future languages will certainly be dynamically typed. But in other ways they will be less dynamic. We will notice that name dynamicity (i.e. dynamically adding a method to a class by the string name of the method, or setting a variable by string name) was a bad idea. A lot of metaprogramming in dynamic languages (e.g. Ruby) is currently based on this dynamic name handling. We will move away from this and instead use names only as a help for programmers, not something that exists at runtime. Metaprogramming will happen at compile time, not at run time.
To the dynamic type system we add a code analysis system that statically checks assertions & pre- & postconditions by using abstract interpretation + theorem provers (much like what Microsoft is doing with Spec#). This will give much stronger and more useful guarantees than current type systems, but it will be optional (that is, you can choose not to write assertions and preconditions).
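As a rough illustration of the kind of pre/postcondition annotations this has in mind, here's a runtime-checked sketch in Python (the contract decorator and integer_sqrt function are hypothetical names of mine; a Spec#-style system would try to discharge these checks statically rather than at run time):

    import functools

    def contract(pre=None, post=None):
        """Attach an optional precondition/postcondition check to a function."""
        def decorate(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                if pre is not None:
                    assert pre(*args, **kwargs), "precondition failed"
                result = fn(*args, **kwargs)
                if post is not None:
                    assert post(result), "postcondition failed"
                return result
            return wrapper
        return decorate

    @contract(pre=lambda x: x >= 0, post=lambda r: r >= 0)
    def integer_sqrt(x):
        return int(x ** 0.5)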
Parallelism: A few years from now we will wonder what the fuss was about. Parallelism turns out to be much easier than we thought. Most programs will be fast enough without parallelism (e.g. most GUI apps). The ones that need parallelism will turn out to be easily parallelisable. Nearly all parallel programs will be coded with either futures (or with closely related constructs as found in Cilk) or high level parallel constructs (parallel map, parallel reduce).
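For a sense of what those high-level parallel constructs already look like, here's a toy sketch using Python's standard concurrent.futures (the square function and the sum-by-reduce are just an illustration): the parallel map hands the work to a process pool, and the reduce runs sequentially over the results.

    from concurrent.futures import ProcessPoolExecutor
    from functools import reduce

    def square(n):
        return n * n

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            squares = list(pool.map(square, range(10)))   # parallel map
        total = reduce(lambda a, b: a + b, squares)       # sequential reduce
        print(total)                                      # 285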
Persistence and Transparency: I think you're right here.
Testing: General purpose tests are a fantasy. How is the computer supposed to know what you want your program to do? The most it can do is throw inputs at your code and check if it throws an exception. Instead I think testing will be based on randomised property checking like Quickcheck. In addition to that, we will have automated theorem provers that try to prove that the property holds for all inputs (related to static assertion checking like in Spec#).
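Here's what QuickCheck-style randomized property checking looks like in Python with the Hypothesis library (the run-length encode/decode pair is a standard toy example of mine, not anything from the article): you state a property, and the framework generates the inputs.

    from hypothesis import given, strategies as st

    def encode(xs):
        """Run-length encode a list: [1, 1, 2] -> [[1, 2], [2, 1]]."""
        out = []
        for x in xs:
            if out and out[-1][0] == x:
                out[-1][1] += 1
            else:
                out.append([x, 1])
        return out

    def decode(pairs):
        return [x for x, n in pairs for _ in range(n)]

    @given(st.lists(st.integers()))
    def test_decode_inverts_encode(xs):
        # The property: decoding an encoding gives back the original list.
        assert decode(encode(xs)) == xs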
I'll add one of my own here: string based code is going away, but names (variable names, function names) are not (contrary to what visual languages try to achieve with programming by stringing together pictures).
This might happen within modules, but an independent module still needs a well-defined interface (or rather, a module is defined by its semantic+programmatic interface). This becomes particularly important with capability- or type-safety- based security as it relies on shenanigans on one side of an interface being unable to propagate to the other side.
But, what makes a good module / unit size? Good practice seems to favor small, loosely coupled units, which are really not so different from small, independent modules. On the very small scale, say within a function, what benefits does dynamic programming give that functional programming and type inference don't? If dynamic programming is squeezed on both the small and large side like this, how much room for it is there really?
> Stupidly parallel objects
Any given degree of parallelism that can be safely implemented by a clueless n00b can also be implemented by a reasonably smart compiler. The only available advance here is in languages that provide a way to express "I don't care what order these things happen in", with compilers that statically check for shared objects and require them to be annotated as thread-safe (for the second part, there are some ideas at http://bartoszmilewski.wordpress.com/ as suggestions for things to do with the D language).
Building any sort of parallelism where actual ongoing communication is required will always remain "difficult", in that you'll need maybe a couple hundred hours of study to be able to get it right. But, sufficient reusability may make this less common.
> Persistent diskless environment
We're even seeing some hardware progress in this direction, with MRAM and phase-change RAM.
> Transparency between local and cloud
This is limited by needing completely reliable always-on internet connections of sufficient speed everywhere.
Programmers *will* have to deal with either explicit storage management (*this* data is local, *this* data is on the 'net) or automatically reconciling multiple versions of the same object (which could all have changed in arbitrary ways). Neither is transparent to the user, and the latter still requires a layer of code that does the former.
> Swarm testing
This seems completely bogus. The point of testing is to make sure that the code does what it's supposed to do; if the computer can determine what the code is supposed to do, it can also implement the "dwim" instruction and remove the need to write the code in the first place.
> Security via suspicious systems
A full "does what it says it will do" check again requires a "dwim" instruction. Toning it down a little, it sounds like current virus scanners.
> Effortless data stores
We already have "objects where you can store data and later ask for it". The difficulty lies in *how* you ask for it, and what it *means* to store and ask for it.
ACID compliance provides certain meanings, and comes with certain costs. Eventual consistency provides a different set of meanings, and comes with lower costs. Asking for objects by name has certain (low) costs when you store the data, being able to ask for objects by their attributes has different (higher) costs when you store the data. You can ignore these differences for very small data sets, but as you scale up you *need* to care. Possibly the storage features dependent on "how you ask for it" can be determined automatically by analysis of your api usage, but "what it means" needs to be specified by the programmer.
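To make the name-versus-attribute cost trade-off concrete, here's a toy sketch (the TinyStore class is entirely hypothetical): lookup by id stays cheap no matter what, while every secondary index you declare makes attribute queries cheap at the price of extra work on every write.

    class TinyStore:
        """Objects stored by id, plus optional secondary indexes on attributes."""
        def __init__(self, indexed_attrs=()):
            self._by_id = {}
            self._indexes = {attr: {} for attr in indexed_attrs}

        def put(self, obj_id, obj):
            self._by_id[obj_id] = obj                    # cheap either way
            for attr, index in self._indexes.items():    # extra cost per declared index
                index.setdefault(getattr(obj, attr), set()).add(obj_id)

        def get(self, obj_id):
            return self._by_id[obj_id]                   # ask by name: O(1)

        def find(self, attr, value):
            # Ask by attribute: only cheap because we paid for the index at write time.
            return [self._by_id[i] for i in self._indexes[attr].get(value, ())]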
> Query-based data
This ties in to the previous point; it's one particular combination of "how you ask for it" and "what it means to ask for it".
> Reusability of a vast scale
The basic unit will be the "module", or something like a .NET assembly. Some may contain a single class, but others will be a collection of associated classes that conceptually go together.
> Effortless System Integration
This requires fully taxonomizing all implemented functionality. I don't think it would play well with small customizations.
> Reusable UIs
I think maybe our large widget toolkits might be about as close to this as is feasible. Again this comes down to the question of meaning, and some business rules.
> Effortlessly Scalable
O(n) and O(2^n) will never scale the same. When the first has a significantly higher constant, you *must* know the size of your data in order to choose well.
> Built-in Evolvability
We have this already to a decent extent, well-written modular systems are easy to extend.
Extremely dynamic: it is not true that dynamic languages can solve more problems than static languages; static languages are Turing complete and can solve all the problems solvable by dynamic languages. Besides that, I agree that languages will be dynamic, but only on the outside: internally, they will be translated on the fly to their static equivalent.
Stupidly parallel objects: it's not possible to create parallel programs without bugs, because of the halting problem. Furthermore, parallel programming is not that hard with active objects. Besides that, objects will truly be parallel, each one being its own thread. I am already on that track in my own applications, having used the active objects paradigm in a simulator program with great success.
Persistent diskless environment: agreed, but the need to define transactions (all succeed or all fails) and the need to clean out garbage will not go away.
Transparency between local and cloud: agreed again, but the cloud will never be as fast as the local, so there should always be the option to use the local instead of the cloud. I imagine it's all a matter of caching and synchronization.
Swarm testing: it's never gonna happen. Testing has largely to do with math; proving the correctness of a piece of code is not about applying a pattern, i.e. pattern matching doesn't work in this case.
Security via suspicious systems: not gonna happen unless a solution to the general AI problem is found.
Effortless data stores: agreed. We are late for this, it's desperately needed.
Query-based data: the solution to this, which I've proposed in another thread, is to not store the data in any structured format; data should only be structured when queried.
Reusability on a vast scale: not gonna happen, unless we all agree on a specific ABI.
Effortless System Integration: not gonna happen, because the above is not gonna happen.
Reusable UIs: coincidentally, I have tried to do this in the project I mentioned above (the one that uses active objects). I succeeded, in the context of the application: writing dialogs became as easy as invoking specific functions that bind data objects to widgets. All this with C++ and Qt 3.
Effortlessly Scalable: it will happen as soon as we can add circuits to a computer system and automatically get more processing power for ourselves...as it is done on Star Trek TNG and later shows. For this to happen, we should move from the model "data travels on a bus from/to memory/CPU" to the model "code travels on a bus from/to memory+CPU hybrids". This will also solve the parallelism issue, since each memory chip will also be a CPU.
Built-in Evolvability: not gonna happen, unless we discover that P = NP.
Good article, if only to remind us of our limitations, both social and technical.
> Testing: General purpose tests are a fantasy. How is the computer supposed to know what you want your program to do? The most it can do is throw inputs at your code and check if it throws an exception. Instead I think testing will be based on randomised property checking like Quickcheck. In addition to that, we will have automated theorem provers that try to prove that the property holds for all inputs (related to static assertion checking like in Spec#).
The way I envision this is as attaching regression tests: you write code that does something, then some form of static analysis / DB search / whatever finds unit tests for what are likely to be the crucial parts of your implementation, and those stay with it.
> I'll add one of my own here: string based code is going away, but names (variable names, function names) are not (contrary to what visual languages try to achieve with programming by stringing together pictures).
People have said that we'd lose string based code for three decades; I don't believe them. I think we might lose *file* based code, but I'm not even sure of that - we're still using something that is quite page-like for writing text even though we've been doing it for a thousand years.
Bruce, I continue to be puzzled by your endless love for dynamically typed languages ("DTL"), but this statement really takes the cake:
> There is a significant and growing class of programming problems that static languages can't solve
First of all, we're talking about Turing complete languages, so this statement makes no sense at all: all the problems solvable by one class of languages are by definition solvable by the other.
Maybe you mean that certain problems can be solved more elegantly with DTL than statically typed languages, but even such a statement is extremely subjective and I find no evidence to support it in the current trends, where the most active and popular languages (Java, C# and C++) are all statically typed and DTL continue to occupy a niche, despite being in existence for several decades...
> Actually I think programming 25 years from now will not be very different from what we see now. Argument: not much has changed in the last 25 years and there is even less reason to suspect a change in the future, as the field matures.
Exactly.