Summary:
In this interview with Artima, Andrius Strazdauskas, Gary Duncanson, and Daniel Brookshier of No Magic discuss the goals of Model Driven Architecture, or MDA, and explain why they think it can improve programmer productivity and software quality.
Most recent reply: March 8, 2008 4:21 AM by Ivan
|
In this interview with Artima, Andrius Strazdauskas, Gary Duncanson, and Daniel Brookshier of No Magic discuss the goals of Model Driven Architecture, or MDA, and explain why they think it can improve programmer productivity and software quality. Please check out this JavaOne 07 interview: http://www.artima.com/lejava/articles/javaone_2007_no_magic.html
What is your opinion of Model-Driven Architecture and the technique of code generation in general? In what ways have you used MDA or code generation in the past that have, or have not, worked out well?
|
|
|
My general understanding is that MDA can easily be used to automate code generation for the 'technical concerns'. Some technical concerns are harder than others. For example, generating thread-safe code from a model would be much harder than generating code that does one well-defined task really well, e.g., object/process liveness-monitoring code. Once you decide on a general technique for monitoring the liveness of a process, the same code can be generated again and again, taking configuration parameters from the model. Note that this is not a library, in which case you would include headers, link against it, and so on. When MDA is applied in this style, you don't think about the process-monitoring concern while coding your business logic. A recently published research paper discusses monitoring-code generation in the context of component-oriented systems, where the idea of a 'container' fits very well with the idea of abstracting away technical concerns: http://www.dre.vanderbilt.edu/~sutambe/documents/pubs/isas07.pdf
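The "generate the same code again and again from model parameters" idea can be sketched very simply. This is a hedged, minimal illustration (the template, parameter names, and monitor logic are all invented for this example, not taken from any MDA tool):

```python
# Hypothetical sketch: generating process-liveness monitoring code from
# model configuration parameters, rather than linking against a library.
# Template and parameter names are illustrative only.

MONITOR_TEMPLATE = """\
def check_liveness(ping, max_misses={max_misses}):
    # Generated monitor: declare the process dead after max_misses
    # consecutive failed pings (a real monitor would also sleep
    # {interval} seconds between pings).
    misses = 0
    while misses < max_misses:
        if ping():
            return True
        misses += 1
    return False
"""

def generate_monitor(model_params):
    """Emit monitoring code for one model element's configuration."""
    return MONITOR_TEMPLATE.format(**model_params)

# The same template, instantiated from this element's model parameters.
source = generate_monitor({"interval": 5, "max_misses": 3})
namespace = {}
exec(source, namespace)  # compile the generated monitor for use
```

The business logic never mentions monitoring; only the generator and the model know about it.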
|
|
|
Generating thread-safe code isn't hard at all: just don't use global mutable state, i.e., use a sane concurrency model.
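A minimal sketch of that point: workers that share nothing mutable and communicate only through queues are trivially thread-safe, whatever generated them. (The worker and its task are invented for illustration.)

```python
# Share-nothing concurrency: the worker owns no global state and talks
# to the outside world only through queues.
import threading
import queue

def square_worker(inbox, outbox):
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut down
            break
        outbox.put(item * item)   # no shared mutable state touched

inbox, outbox = queue.Queue(), queue.Queue()
worker = threading.Thread(target=square_worker, args=(inbox, outbox))
worker.start()
for n in range(5):
    inbox.put(n)
inbox.put(None)                   # ask the worker to stop
worker.join()
results = sorted(outbox.get() for _ in range(5))
```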
|
|
|
> What is your opinion of Model-Driven Architecture and the
> technique of code generation in general? In what ways have
> you used MDA or code generation in the past that has, or
> has not, worked out well?

The biggest problem I've seen in the past was that round-trip engineering was non-existent or very weak/unstable. The code that was generated was unreadable. I've seen so many classes that have Rose IDs in their comments, but Rose is no longer used. The short of it was that using the tool was more trouble than it was worth.

It's encouraging to hear that MDA is no longer trying to eliminate code, but I still have a couple of concerns.

1. My experience is that sequence diagrams are a monumental waste of developer time. It's the slowest way I can think of to write code.

2. Raising the abstraction does not imply graphical representation. This seems to be the biggest flaw in the MDA theology. I have a related comment here: http://www.artima.com/lejava/articles/javaone_2007_david_intersimone.html

I know there is a standard file structure for UML (more than one, probably), but is it something people go in and edit? A GUI is not always better than text. I use a graphical file view for most normal file-management tasks, but if I have to do one thing to 10,000 files, I'm not going to do it in a GUI. Text-based languages are, in general, much more powerful than GUI-based representations. I don't know if this has to be the case, but whenever I've used a tool that had limited or no support for text-based editing, I ended up clicking, clicking, clicking all day long to accomplish a task that could have been done with a short script or command.

A long time ago, I spent an entire day with a whiteout container taped to the O on my keyboard. I'm not saying this will happen with these MDA tools, but I want a text-based representation anyway.
|
|
|
> I think the key thing about MDA, Model Driven Architecture,
> is the model represents the code, and the code is produced.
> It's generated.

So far I agree. But as not all code is created equal, the developer's task now is to define the rules that generate code from models. This is an area where support is almost nonexistent. You could write your own XSLT to transform XMI to code, but the XMI spec is huge and concrete XMI varies from tool vendor to tool vendor (it's great to have standards, isn't it? ;-), so this is hard work. Or you could wait until some tools support QVT (which has the OMG's blessing). But the QVT spec is also huge (see http://www.omg.org/docs/ptc/05-11-01.pdf) and not finalized as far as I know, so I doubt that this will be much easier.

My solution so far: use my own small (textual) modelling language and write my own code generators (I use Lisp/Scheme). I know this approach does not scale to large projects, but for small to medium-sized projects I have found it very productive.

B. Neppert
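The "small textual modelling language plus your own generator" approach can fit in a few lines. The poster works in Lisp/Scheme; this Python version is only an illustrative sketch, with an invented one-line-per-entity syntax:

```python
# A toy textual modelling language: each line declares an entity and
# its fields. The syntax and entity names are made up for illustration.

MODEL = """\
entity Customer: name, email
entity Order: id, total
"""

def generate_classes(model_text):
    """Generate one plain Python class per 'entity' line in the model."""
    out = []
    for line in model_text.strip().splitlines():
        head, fields = line.split(":")
        name = head.split()[1]
        names = [f.strip() for f in fields.split(",")]
        args = ", ".join(names)
        body = "\n".join(f"        self.{n} = {n}" for n in names)
        out.append(f"class {name}:\n    def __init__(self, {args}):\n{body}")
    return "\n\n".join(out)

code = generate_classes(MODEL)
ns = {}
exec(code, ns)  # the generated classes are now usable
```

Small, readable, diffable, and under version control alongside the generator itself.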
|
|
|
1. Graphical interfaces are not necessarily better than textual ones. I work with an ETL tool that is graphical: IBM WebSphere DataStage. The nice thing about it is that understanding programs written in it is easy. A screenshot of a program gives you an instantaneous high-level view of what it is doing. The bad thing is that the tools to manipulate the code are worse than Windows Notepad for code editing. Many of these GUI tools look fancy on the outside but lack good copy and paste, search, replace, refactoring, formatting, etc. At a more fundamental level, they usually don't support much abstraction either. You can't just say: this thing is the same as that thing over there, so just use that. You have to recreate everything again.
2. Model-Driven Architecture. I am a believer in Model-Driven Development and Architecture in the sense that you define your terms first and create a good relational database from them. The rest is usually just CRUD with varying degrees of presentation fanciness.
BTW, does anybody know of a good tool that can automatically generate good CRUD code given a database schema? I am talking about a tool that understands the whole relationship graph of a database and allows creating, editing, and displaying data from multiple tables at the same time in a sensible way. Most business systems I have seen start out as basic CRUD and evolve from there. It would be nice to have good tools specialized in generating maintainable code, and in reverse engineering, for just that.
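The first step of such a tool — deriving CRUD statements from a live schema — can be sketched quickly. This is a hedged illustration using SQLite introspection; a real tool would also walk foreign keys to build the multi-table edit screens the poster asks about (the table and column names here are invented):

```python
# Schema-driven CRUD generation sketch: introspect a SQLite table and
# emit the four basic SQL statements for it.
import sqlite3

def crud_statements(conn, table):
    # PRAGMA table_info yields (cid, name, type, notnull, dflt, pk) rows.
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    collist = ", ".join(cols)
    params = ", ".join("?" for _ in cols)
    return {
        "create": f"INSERT INTO {table} ({collist}) VALUES ({params})",
        "read":   f"SELECT {collist} FROM {table}",
        # Assume the first column is the key (a simplification).
        "update": f"UPDATE {table} SET "
                  + ", ".join(f"{c} = ?" for c in cols[1:])
                  + f" WHERE {cols[0]} = ?",
        "delete": f"DELETE FROM {table} WHERE {cols[0]} = ?",
    }

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
stmts = crud_statements(conn, "customer")
```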
|
|
|
> 1. Graphical interfaces are not necessarily better than > textual ones. I work with an ETL tool that is graphical: > IBM WebSphere DataStage. The nice thing about it is that > understanding programs written in it is easy. A screenshot > of a program gives you an instantaneous high-level view of > what it is doing. The bad thing is that the tools to > manipulate the code are worse than windows notepad for > code editing. Many of these GUI tools look fancy on the > outside but lack good copy and paste, search, replace, > refactoring, formatting, etc. At a more fundamental level > they usually don’t support much abstraction either. You > can’t just say: this thing is the same as over there just > use that. You have to recreate everything again.
Exactly. The sometimes stated assumption that GUI tools inherently improve productivity is patently false. Often I find GUIs to cause very significant productivity decreases. Usually development in GUIs ends up requiring a much larger amount of mindless drudgery.
We have a GUI-based tool that we use right now, and not only does it slow me down a lot, it's really hard to figure out what things do because everything is buried in stupid little text boxes and dialogs. Instead of just scanning a developer's work for errors, you must click on a billion little buttons to see what they've done. I ended up writing a style sheet to convert their inscrutable XML export into something that we could work with. Actually, once exported to a usable format, I was able to automate a lot of what is done in reviews.
GUIs have their place but it's clear to me that they should play a secondary role to text representations. The idea that GUI is always better is an outdated idea that's as stale as the slogan "the power of GUI at the speed of DOS" on an old software box near my desk.
|
|
|
> idea that GUI is always better is an outdated idea that's
If GUIs were so superior to text, we would be writing in Egyptian hieroglyphic logograms instead of the Latin alphabet.
|
|
|
> The code that was generated was unreadable.
The Model is your 'source' code while the code is generated. The goal of MDA is that you may generate your business rules independently by the platform/application committed. A class diagram will be transformed into a POJO object, EJB, copy cobol, xml schema etc. you may redefine you framework without rewrite a hundred of business rules by hand, you have only to change the transformer. Through uml paradigms static/dynamic diagrams, you trace the behavior of entire system. You give to the business unit the power of define rules themselves. The MDA approach is good both in small than in large application domain with different features of course (you have to justify the work of MDA development), but first model you application, model the data, keep MDA in mind. Use simple xml or your own model files definition or XMI or a model framework (EMF), but the concept is the same.
Yes, the generated code will be unreadable, but by definition it is generated and therefore untouchable (though you may use JMerge to merge generated and hand-modified code).
|
|
|
> > The code that was generated was unreadable. > > The Model is your 'source' code while the code is > generated. > The goal of MDA is that you may generate your business > rules independently by the platform/application > committed. > A class diagram will be transformed into a POJO object, > EJB, copy cobol, xml schema etc. you may redefine you > framework without rewrite a hundred of business rules by > hand, you have only to change the transformer. > Through uml paradigms static/dynamic diagrams, you trace > the behavior of entire system. > You give to the business unit the power of define rules > themselves. > The MDA approach is good both in small than in large > application domain with different features of course (you > have to justify the work of MDA development), but first > model you application, model the data, keep MDA in mind. > Use simple xml or your own model files definition or XMI > or a model framework (EMF), but the concept is the same. > > Yes, the generated code will be unreadable but as the > definition it is generated, untouchable (despite you may > use jmerge for merging generated and hand-modified code).
Yeah, I get the theory. The problem is when you need to debug the system or realize that your MDA tool is crap, as we did.
|
|
|
James, thanks for always keeping it real. I wish I were rich so I could put you in charge. ;-)
|
|
|
The main goal of MDA is to generate code from a model designed in a CASE framework.
There are several frameworks that can do this job, but I really want to know which functionalities an MDA framework must have to be considered a good one.
|
|
|
First, I think MDA is highly overrated. I believe it is useful in situations where the model, and what the generated code is supposed to do, is fairly straightforward. The often-quoted example of Hibernate, Spring DAO, and domain objects falls into this category. But for more complex things, MDA is completely useless. If the model gets complex it is hardly understandable, especially with graphical models. MDA works for things that are fairly declarative in nature. As soon as imperative stuff sneaks in, the model becomes overloaded.
A huge disadvantage of all the MDA tools I've seen is diff support. With code I can make a diff between two versions and immediately understand what has changed. With a graphical model (that is probably stored in XML on disk) this is much harder, sometimes not even possible.
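One workaround for the diff problem is to flatten the model's XML into sorted, line-oriented text so that ordinary diff tools work on it. A minimal sketch (the element and attribute names below are invented, not from any real tool's XMI):

```python
# Canonicalize an XML model into sorted "path@attr=value" lines so two
# versions can be compared with plain set/diff operations.
import xml.etree.ElementTree as ET

def canonical_lines(xml_text):
    lines = []
    def walk(elem, path):
        here = f"{path}/{elem.tag}"
        for key in sorted(elem.attrib):   # attribute order never matters
            lines.append(f"{here}@{key}={elem.attrib[key]}")
        for child in elem:
            walk(child, here)
    walk(ET.fromstring(xml_text), "")
    return sorted(lines)

old = '<model><class name="Order" abstract="false"/></model>'
new = '<model><class abstract="true" name="Order"/></model>'
# Attribute reordering produces no noise; only the real change shows up.
changed = set(canonical_lines(new)) - set(canonical_lines(old))
```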
Additionally, in some cases code generation is superfluous. I worked at a company that has a graphical designer for (business) processes. Instead of generating code from the model, the model was executed (interpreted) directly.
One advantage of the model-driven approach (whether using code generation or not) is that the documentation stays up to date with the code. In the example I've mentioned this was a huge advantage, because before, the documentation was always hopelessly outdated. I do not blame anybody for that. Updating the same thing twice (at least semantically) is just too much to demand of developers.
And I don't believe the marketing talk that the developer has to write almost no code and that their life gets easier. In many cases life gets harder, because they have to learn yet another programming environment (the model editor).
|
|
|
Well, yes, that is the theory, and debugging support in many tools is horrible (I have my doubts about companies making it difficult on purpose so that they can charge for consulting services instead of the client being able to do it all himself, but that's a whole new discussion). But I see a few possible solutions that could improve the ease of debugging. If you could target the source node/object in the model (Ecore, or some other), you would know where to look for: 1. logical/semantic errors in the model/metamodel; 2. errors in the code-generator templates (bad code being generated).

So on system exceptions, or when you see something wrong (or have doubts about the state of the running application), you would need a link to the state/location in the model that the generated code is executing at (this is the target of the source node in the model that I mentioned above).

To do that, you would embed IDs of model objects' attributes/classes/instances in the code and then get at them either through log mining (building up a Lucene index, for example, and then harvesting it), through message sending (for real-time analysis), or, if you want to be fancy, through some code injection when compiling the code (in the case of Java and JVM bytecode).

Text/code-wise, this can easily be embedded in every generated class/method via the templates when the code is generated; code injection I haven't messed with in practice, but I'm sure there are frameworks/tools that are simple to utilize.

Once you get this match, you just query the model for it and then highlight the nodes in the model.

I'm guessing a variation of this method is what is used in MetaEdit+, which has a cool feature where you can run the application and the model diagram will highlight the nodes/elements that are currently being executed (when the relation with the real code is used).

Please, if you find false assumptions in my thinking above, let me know, as I haven't gotten to this part of the architecture for my own toolset, but it's coming up; I'm working on other core parts and a few metamodels first, to automate some of my development.
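The embedding idea above can be sketched concretely: stamp each generated line with the ID of the model node it came from, then map a runtime location back to that node. Everything here (node IDs, the generated function, the comment convention) is hypothetical:

```python
# Traceability sketch: generated code carries "model-id:" comments, and
# a lookup walks upward from a source line to the nearest annotation.

# Hypothetical model registry: node ID -> model element name.
MODEL_NODES = {"node-42": "CheckInventory", "node-43": "ShipOrder"}

# What a template-stamped piece of generated code might look like.
GENERATED = '''
def ship_order(qty):
    # model-id: node-43
    if qty < 0:
        raise ValueError("negative quantity")  # model-id: node-43
    return qty
'''

def model_id_for_line(source, lineno):
    """Return the model-id annotated on or nearest above a 1-based line."""
    lines = source.splitlines()
    for i in range(lineno - 1, -1, -1):
        if "model-id:" in lines[i]:
            return lines[i].split("model-id:")[1].strip()
    return None

# e.g. a stack trace points at line 5 of the generated source:
failing_node = model_id_for_line(GENERATED, 5)
element_name = MODEL_NODES.get(failing_node)  # the node to highlight
```

The same lookup works whether the line number comes from an exception traceback, a log entry, or a real-time message from the running application.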
|
|
|
ATL - the Atlas Transformation Language (I think that's what it stands for) - is pretty elegant, and it might not have been approved by the OMG, but then again, who cares: it is being accepted as the model-to-model transformation language in the Eclipse modeling project.

There are a lot of transformations already created for various metamodels, so there are examples to learn from, and it also means it has been debugged more than other frameworks.

It's not just a language, however, but a model-transformation virtual machine with its own model-oriented instruction set, so it not only runs ATL but can also run other transformation languages that the OMG ends up accepting for its QVT request for proposals. Very flexible, and a more long-term outlook, which is great.

The language is cool to work with, and not too complicated. A sufficient Eclipse plug-in is available as well, for debugging and coding. There is talk of a modeling environment being put together too, so you can model other model transformations (talk about abstraction ;)).

XMI is not something that I personally take very seriously. It's overrated, and something that I think missed its goal. Just the fact that there are a bunch of slight variations between most tool vendors kills the point. Ecore is safe and productive, and evolves on need, as opposed to a bureaucracy moving like a grandmother that's not in shape.
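For readers who haven't seen a model-to-model transformation, here is a toy rule in the spirit of an ATL rule (not ATL syntax, and the metamodel element names are invented): every Class in the source model becomes a Table in the target model, and each attribute becomes a column.

```python
# Toy model-to-model transformation, ATL-rule style: Class2Table.
# Source/target models are plain dicts standing in for real metamodels.

def class2table(source_model):
    """Rule Class2Table: one target Table per source Class element."""
    target = []
    for elem in source_model:
        if elem["kind"] != "Class":
            continue  # the rule matches only Class elements
        target.append({
            "kind": "Table",
            "name": elem["name"].lower(),
            "columns": [a["name"] for a in elem.get("attrs", [])],
        })
    return target

src = [{"kind": "Class", "name": "Order",
        "attrs": [{"name": "id"}, {"name": "total"}]},
       {"kind": "Enum", "name": "Status"}]  # not matched by the rule
tables = class2table(src)
```

Real ATL adds declarative matching, traceability links between source and target elements, and lazy rules, but the shape — match source elements, emit target elements — is the same.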
|
|