> Unless I'm missing something (I just flipped through the FitNesse website, so it's entirely possible), FitNesse seems like a watered-down version of good automated testing practices, with the twist that the customer is supposed to be involved. I think this type of testing is critical to building any non-trivial system, and have used it on many projects w/o the help of a framework.
>
> But it's still neither "requirements" nor "acceptance tests."
>
> Why?
>
> It's too low level. It tests whether the system behaves as specified (very important!), not that the specified behavior supports/achieves the objectives associated with building the system.
>
> IMHO, the purpose of acceptance tests is more to validate the requirements (against objectives) than to validate that the system meets the requirements.
>
> As for the tests being requirements, I think they are again too low level to be sufficient. They specify the "what" so precisely that the "how" becomes implicit in their definition.
Yes, those are working at very different levels.
There's a difference between an acceptance test and a business test. A business test is something that's measurable by the business to validate the outcome of the use of the system. E.g., "revenue increases 20% year over year". That's very different from an acceptance test for the functionality of the system.
One of the problems discussing something like ASD is that there isn't a single authoritative definition of it.
Having said that, I do think that in some cases ASD has been sold to developers (at least implicitly) as a methodology that requires less formal documentation. This is an important selling point because it's developers who most dislike creating documentation.
Since there is no way to know, in general, whether organizations using pre-ASD methods are producing more documentation than they really need, you can't claim that ASD will save time or money on documentation. Yet ASD is often pitched as if it will.
> I need to sit down and write this up at some point, but the idea has been rattling around my head for a while. George Lakoff explains the divide between liberals and conservatives through the family model frame: liberals correspond to a nurturant parent model and conservatives to a strict father model. I believe the same metaphor can do a lot to explain the split between agilistas and CMM'ers. And I think the fact that I haven't said who is which, but you know it anyway, suggests that it's a very effective metaphor.
Or CMM is leftist, authoritarian like Stalin, Castro, Kim Jong Il, Hitler, Mugabe....
I think the discussion here has lost track a bit. The question is whether it's possible to ensure security in an agile process.
The answer for me is: sure! When you look at the OWASP Top 10 or the whole guide, you see that it's just a set of requirements and best practices. Why not model those as tests? It's much easier that way, since you can be sure the requirements are still met even as the software changes.
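As a rough sketch of what "modeling a security requirement as a test" could look like, here is a plain JUnit test that pins down an input-validation rule in the spirit of the OWASP guidance. The UsernameValidator class is made up for illustration; a real project would test its own validation code.

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Sketch: an OWASP-style input-validation requirement written as an
// executable test. UsernameValidator is a made-up class standing in
// for whatever input-sanitising code the real project has.
public class InputValidationRequirementTest {

    // Minimal stand-in implementation so the sketch is self-contained:
    // only short, whitelisted usernames are accepted.
    static class UsernameValidator {
        boolean isValid(String username) {
            return username != null && username.matches("[A-Za-z0-9_]{1,32}");
        }
    }

    private final UsernameValidator validator = new UsernameValidator();

    @Test
    public void rejectsSqlInjectionAttempt() {
        assertFalse(validator.isValid("admin'; DROP TABLE users; --"));
    }

    @Test
    public void rejectsOverlongInput() {
        StringBuilder longName = new StringBuilder();
        for (int i = 0; i < 10000; i++) {
            longName.append('a');
        }
        assertFalse(validator.isValid(longName.toString()));
    }

    @Test
    public void acceptsWellFormedUsername() {
        assertTrue(validator.isValid("alice_smith"));
    }
}
```

The point is that the requirement stays enforced on every build, instead of living only in a document.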
With design documents alone you can never be sure what people actually did during development. I don't know of any other process that requires as much discipline.
I think the problem is that security experts usually don't know much about agile software development, as is true of lots of software developers and managers. Most of them think it means "do whatever you feel like" and "chaos is OK!", when in reality the opposite is the case.
I think we can turn the question around and ask "How does agile development improve security?".
TDD helps make sure that your classes/methods act the way you expect them to. This can help reduce the number of bugs in your software, which in turn improves security. Of course, TDD does not necessarily remove design flaws (no silver bullet).
Automatic regression testing helps make sure that the classes continue to act the way that was originally specified. If a test fails, you know that either a requirement of the class/method has changed or some of the altered code is erroneous.
If you want TDD to help improve security, you need to put security into your tests. This means writing tests that specify the behaviour of a class/method in the case of extreme parameter values.
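For example, a few JUnit tests along these lines could nail down how a method has to react to hostile or extreme inputs. The Account class here is invented purely to keep the sketch self-contained.

```java
import org.junit.Test;
import static org.junit.Assert.*;

// Sketch: security-minded TDD tests that pin down how a method must
// behave for extreme or hostile parameter values. Account is a made-up
// class used only for illustration.
public class AccountExtremeValuesTest {

    static class Account {
        private int balance;

        Account(int balance) { this.balance = balance; }

        void withdraw(int amount) {
            if (amount <= 0 || amount > balance) {
                throw new IllegalArgumentException("invalid amount: " + amount);
            }
            balance -= amount;
        }

        int getBalance() { return balance; }
    }

    private final Account account = new Account(100);

    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeAmount() {
        account.withdraw(-1);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsAbsurdlyLargeAmount() {
        account.withdraw(Integer.MAX_VALUE);
    }

    @Test
    public void balanceIsUnchangedAfterRejectedWithdrawal() {
        try {
            account.withdraw(-1);
        } catch (IllegalArgumentException expected) {
            // the rejection itself is covered by the tests above
        }
        assertEquals(100, account.getBalance());
    }
}
```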
As Mr. Kübeck suggested, I think using the OWASP Top 10 as a set of requirements/tests is a good idea. Input validation is easy to write as tests for your business objects. And I also think web testing tools like FitNesse can be used to test for XSS faults. The character set that triggers XSS is fairly small, and writing tests that inject these characters into the different parameters of a web page and check whether the result is escaped should be fairly easy.
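One possible shape for such a test, without assuming any particular web testing framework, is a plain JUnit test that sends a classic XSS payload to a page and asserts that it never comes back unescaped. The URL and parameter name are made up; a real suite would parameterise this over the application's actual pages and parameters.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import org.junit.Test;
import static org.junit.Assert.*;

// Sketch: inject an XSS payload into a request parameter and assert
// that the raw payload never appears unescaped in the response.
// http://localhost:8080/search and the "q" parameter are hypothetical.
public class XssEscapingTest {

    @Test
    public void searchParameterIsHtmlEscaped() throws Exception {
        String payload = "<script>alert('xss')</script>";
        URL url = new URL("http://localhost:8080/search?q="
                + URLEncoder.encode(payload, "UTF-8"));

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"));
        StringBuilder body = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            body.append(line);
        }
        reader.close();

        // If the raw payload shows up verbatim, the page did not escape it.
        assertFalse(body.toString().contains(payload));
    }
}
```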
What do security and agile have to do with one another? Nothing. When security capabilities are important deliverables in an eXtreme Programming project, the security specs get added in as user stories and acceptance tests. The same goes for formal documentation: it's just another story that takes time and resources to create.
And what does security have to do with formal documentation anyway? Again, nothing. How, specifically, would formal documentation verify that the developers are producing secure software? It can't. The only way to validate that the software is secure is to put it through comprehensive penetration testing. With a complete suite of automated penetration tests (a.k.a. security acceptance tests that the security team is responsible for writing), you could determine how secure the software is against the organization's security specifications. It's important that the penetration testing is automated, both to make the tests repeatable and to disclose the full extent of what was tested. All too often I have seen audit-passable formal security documentation that was neither verifiable nor repeatable.
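To make "automated, repeatable penetration check" concrete, here is one possible sketch: a JUnit test that performs forced browsing against a made-up admin URL and fails if an anonymous request is served. A real penetration suite would of course go much further, driven by the organization's own security specifications.

```java
import java.net.HttpURLConnection;
import java.net.URL;
import org.junit.Test;
import static org.junit.Assert.*;

// Sketch of a repeatable, automated penetration check: an
// unauthenticated request to an admin-only URL must be refused.
// http://localhost:8080/admin/users is hypothetical.
public class ForcedBrowsingTest {

    @Test
    public void adminPageRequiresAuthentication() throws Exception {
        URL url = new URL("http://localhost:8080/admin/users");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setInstanceFollowRedirects(false);

        int status = connection.getResponseCode();

        // Expect 401 (unauthorized), 403 (forbidden) or a redirect to a
        // login page, never a 200 that serves the page to an anonymous caller.
        assertTrue("Admin page served without authentication: " + status,
                status == 401 || status == 403 || status == 302);
    }
}
```

Because the check is code, it runs on every build, and the test suite itself documents exactly what was probed.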