Summary
Writing tests before writing code is a key tenet of extreme programming. I find myself violating this XP rule very often. A bit of psychology suggests that test-first development may actually stifle creative flow.
A key XP tenet is to write unit tests first, and then the simplest code that passes
those tests. As the Extreme Rules
state,
"When you create your tests first, before the code, you will find it much
easier and faster to create your code... Creating a unit test helps a developer to
really consider what needs to be done. Requirements are nailed down firmly by tests."
I find myself violating this tenet of XP very often. I do test all my code, but often
only after I've already written that code. Why does this one XP rule seem so out of
place to me?
I may have come close to the answer in the work of the famed psychologist Mihaly Csikszentmihalyi,
professor and former chair of the psychology department at the University of Chicago.
Csikszentmihalyi (pronounced ME-high CHICK-sent-me-high-ee) is best known for his
decades-long research into the key causes of happiness, and for his seminal book on
Creativity. For that work, he interviewed several hundred scientists,
engineers, writers, artists, and philosophers who shared the distinction of having
achieved remarkable feats of originality in their work, advancing their respective
fields. Csikszentmihalyi wanted to find out what these people had in common that
contributed to their exceptional creativity.
One shared thread: Their experience of "creative flow" during their peak mental work.
Being in a state of flow means
"being completely involved in an activity for its own
sake. The ego falls away. Time flies. Every action, movement, and thought follows
inevitably from the previous one, like playing jazz. Your whole being is involved,
and you're using your skills to the utmost."
Csikszentmihalyi identified four ingredients that must be present to experience flow.
The activity must present just the right amount of challenge: If the activity
is too hard, we become frustrated; if too easy, boredom ensues.
There must be clear and unambiguous goals in mind.
There needs to be clear-cut and immediate feedback about the activity's success.
Finally, our focus during the activity must center on the present - the activity itself.
Writing code in a certain way can help us get in a state of flow. For flow to occur,
we must work on a chunk of the problem at the right complexity level - a problem that's
neither trivial nor overly complex. We must have a clear objective for what we're
working on - for instance, a specific, well-defined use case. To receive immediate
feedback, we must unit test the code as we write it. Finally, it should be possible
for these small pieces of functionality to work on their own, without having to go
through major integration steps.
Most XP practices facilitate these conditions - with the exception of test-first
coding. In my experience, writing tests first often interrupts flow. I can think of a
couple of reasons:
The granularity of what's easily testable often doesn't match the granularity of a
sufficiently challenging chunk of the problem. In general, I find that testing works
best for small pieces of functionality. But working on problems at a higher level,
say at the level of 5 or 6 test cases, challenges us more.
The focus shifts from solving the use case to passing the tests. So we may have a
situation where all tests pass, but the use case still doesn't work, or is not
complete.
For instance, you may be working on a Web app controller. That controller serves a
specific use case, say, adding a new user to the system. The controller must perform
a set of validation steps first. If all checks out, it must save the new user, and
redirect to a confirmation page. If errors occur, it must communicate those errors
back to the user. What I often like to do is just write all that logic in one sitting
- focusing on the problem as a whole. Only when the code is done do I like to write
tests for that code.
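To make that concrete, here is a minimal sketch of such a controller in Java. It assumes nothing about any particular Web framework; UserService, UserForm, and the view names are hypothetical stand-ins invented for this illustration:

import java.util.List;

// Hypothetical collaborators, stubbed out so the sketch stands alone.
interface UserService { void save(User user); }

class User {
    final String name;
    final String email;
    User(String name, String email) { this.name = name; this.email = email; }
}

class UserForm {
    final String name;
    final String email;
    UserForm(String name, String email) { this.name = name; this.email = email; }
    User toUser() { return new User(name, email); }
}

public class AddUserController {
    private final UserService userService;

    public AddUserController(UserService userService) {
        this.userService = userService;
    }

    // Validate the form; on success, save the user and redirect to a
    // confirmation page. On failure, redisplay the form with the errors.
    public String addUser(UserForm form, List<String> errors) {
        validate(form, errors);
        if (!errors.isEmpty()) {
            return "userForm";                // communicate errors back
        }
        userService.save(form.toUser());
        return "redirect:confirmation";       // success: confirmation page
    }

    private void validate(UserForm form, List<String> errors) {
        if (form.name == null || form.name.trim().length() == 0) {
            errors.add("Name is required.");
        }
        if (form.email == null || form.email.indexOf('@') < 0) {
            errors.add("A valid email address is required.");
        }
    }
}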
In creating the tests, I often find the need to refactor the initial code into
smaller methods that are easier to test. But those small methods serve mainly the
purpose of easier testing, and perhaps better comprehension of the code by other
developers - they are a secondary artifact, created after the initial deed of writing
the code to perform that focused, well-defined task. Refactoring and testing are less
creative than writing the initial code and, instead, are more "mechanical"
activities.
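Continuing the hypothetical controller above, the after-the-fact tests might look something like this (JUnit 3 style; the RecordingUserService stub is invented for the example). Note how the extracted validate() method makes the failure cases easy to pin down:

import java.util.ArrayList;
import java.util.List;
import junit.framework.TestCase;

// After-the-fact tests for the controller sketched earlier.
public class AddUserControllerTest extends TestCase {

    public void testMissingNameIsRejected() {
        RecordingUserService service = new RecordingUserService();
        AddUserController controller = new AddUserController(service);
        List<String> errors = new ArrayList<String>();

        String view = controller.addUser(new UserForm(null, "jane@example.com"), errors);

        assertFalse(errors.isEmpty());    // validation caught the missing name
        assertFalse(service.saved);       // nothing was saved
        assertEquals("userForm", view);   // form is redisplayed with errors
    }

    public void testValidUserIsSavedAndRedirected() {
        RecordingUserService service = new RecordingUserService();
        AddUserController controller = new AddUserController(service);
        List<String> errors = new ArrayList<String>();

        String view = controller.addUser(new UserForm("Jane", "jane@example.com"), errors);

        assertTrue(errors.isEmpty());
        assertTrue(service.saved);
        assertEquals("redirect:confirmation", view);
    }

    // Records whether save() was called, instead of hitting a real database.
    static class RecordingUserService implements UserService {
        boolean saved = false;
        public void save(User user) { saved = true; }
    }
}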
I like to think of refactoring and testing as editing. When writing an article, most
writers first create an outline, then write the content, typically in a few sittings,
and without regard to mistakes, typos, sentence structure, etc. Only then does an
author spend most of his time on editing, fine-tuning, and polishing the article.
"Writer's block" often occurs when a writer intermingles editing with the initial
"brainstorming" and creation of content. Most writers would agree that editing is a
very different kind of activity from coming up with the initial content.
Looking back at some of the more complex systems I worked on, I always developed the
trickiest part of the code that way - in one sitting, just getting it to work from
beginning to end. I can often get a lot of work done that way, and produce a lot of
functionality in a short period of time. In other words, that is a very creative
time period. Then I spend some time on testing and refactoring.
I am curious if my experience is shared by others: Do you really write your
tests first?
You say, "Looking back at some of the more complex systems I worked on, I always developed the trickiest part of the code that way - in one sitting, just getting it to work from beginning to end. I can often get a lot of work done that way, and produce a lot of functionality in a short period of time."
How did you know you had it done and working if you hadn't written tests?
I suppose what XP would really suggest you do with your 'draft' is to throw it away, and then write the tests and enough code to pass them. That gives you the benefit of working through the problem, seeing the pitfalls, and being able to write the tests for them. Your design can then be the simplest that satisfies the task, and the tests and your code will reflect that. Your 'draft' could be thought of as a spike solution.
Unless you have a pretty "small" system, having humans check all the existing use cases is very time consuming. Automated testing is not the only way, but it is the best way to cut down the testing time, and to enable some of the testing phase to be concurrent with development.
Now, to the main point of the article: TDD and Pair Programming are the two XP practices that cost the most to adapt to, and some people just can't work that way.
During TDD, the "flow interruption" happens when you just don't know what you want next, or how to test what you want (and yes, fellow XPers, that actually happens). In that case, I recommend code, code, and more code, experimenting and exploring the problem space. After you think the code works, you can either throw the code away and restart TDD now that you know what to test, or retrofit it with tests and refactor (I prefer the former).
Since the flow usually "ends" when we end the activity, we're not losing it by doing that.
Maintaining flow during TDD is easier when you're pair programming. Your partner can help you keep from getting off-target, and can provide ideas when you're stumped.
> Unless you have a pretty "small" system, having humans check all the existing use cases is very time consuming. Automated testing is not the only way, but it is the best way to cut down the testing time, and to enable some of the testing phase to be concurrent with development.

I wasn't suggesting that humans should check all the existing use cases. Steven Newton had asked, "How did you know you had it done and working if you hadn't written tests?" I'm pointing out that automated tests aren't the only way to know you're done and the code is working. I was imagining Frank coding some functionality and then testing it by hand as he flows through his creative zone, i.e., testing what he just wrote, not all the existing use cases.
> How did you know you had it done and working if you hadn't written tests?

I didn't say I didn't test the code - just that I tested the code after I had written the crucial pieces of functionality. I completely agree that unit testing is critical, and that without unit testing one cannot ascertain the correctness of the code.

My only issue is with the idea of writing tests *first*.
I have had similar experiences where writing tests first seemed to interrupt my flow. It depends on what kind of system I am working on. More often than not, though, I work on systems where the tests are more important than the code, and writing the tests first helps me come up with a design where those tests can live. It is often more difficult to add tests after the fact.
> I suppose what XP would really suggest you do with your 'draft' is to throw it away, and then write the tests and enough code to pass them.

If the spike solution works, why not keep that code and refactor it to make it more testable? If something works, why not use it? Why throw it away?
> If the spike solution works, why not keep that code and refactor it to make it more testable? If something works, why not use it? Why throw it away?

Inherently, your spike solution is a hack, because you've just sat down and asked the questions in code, proven that you're on the right track, and presumably spotted some of the pitfalls. Refactoring your spike solution and adding tests will allow you to improve the design and give you some confidence, but as you refactor, opportunities for bugs to creep in arise. By throwing the spike solution away, and using what you have learnt from it to write the tests, you can have much greater confidence in the final code.
I really do write tests first, but sometimes I wonder if this is always good for system architecture. Well, in most cases it is. The basic flow goes like this:

1. Write some code (for example, a controller), and realize that the existing API is insufficient.
2. Think a bit about a possible extension and make a decision.
3. Write a test for this new desired method.
4. Write the code that passes this test.
5. Review the code and refactor on a "lower level" (rename, extract method, ...).
6. After a while, review the API and refactor on a higher level (new interfaces, classes).
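To illustrate steps 3 and 4, here is a sketch where the test for a new, desired method is written first, and then the simplest code that passes it. UserRepository and findByEmail are names invented for this example, and User is the same simple value class sketched earlier in the article:

import java.util.ArrayList;
import java.util.List;
import junit.framework.TestCase;

// Step 3: a test for the desired method, written before the method exists.
public class UserRepositoryTest extends TestCase {
    public void testFindByEmailReturnsMatchingUser() {
        UserRepository repository = new UserRepository();
        repository.add(new User("Jane", "jane@example.com"));

        User found = repository.findByEmail("jane@example.com");

        assertEquals("Jane", found.name);
    }
}

// Step 4: the simplest code that makes the test pass; steps 5 and 6
// would then refactor this as the design evolves.
class UserRepository {
    private final List<User> users = new ArrayList<User>();

    void add(User user) { users.add(user); }

    User findByEmail(String email) {
        for (User u : users) {
            if (u.email.equals(email)) return u;
        }
        return null;
    }
}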
I don't think that this process really suppresses my creativity. Refactoring is key.
Steve Tooke wrote:

> Refactoring your spike solution and adding tests will allow you to improve the design and give you some confidence, but as you refactor, opportunities for bugs to creep in arise. By throwing the spike solution away, and using what you have learnt from it to write the tests, you can have much greater confidence in the final code.
So let me get this straight: refactoring can introduce bugs. OK, that's fair enough; any code change can introduce bugs. [But you *do* have the tests you developed with your hacked solution to help keep you on track....] So, to avoid introducing bugs in this way, you throw away the hacked solution and the associated tests, and re-write the lot!
<spock>Illogical, Captain.</spock>
How can you write new code and tests with less risk of introducing bugs than by refactoring an existing solution with tests already in place? ;-)