Summary
Security professionals have long regarded agile development processes with suspicion, in spite of their reputation for improving software quality. I report on a panel discussion at JavaPolis confronting agile processes with security engineering.
I promised to write this report some time ago, but I have been preoccupied, outside my day job, with setting up a secure application development course.
Here is, at last, a report on the JavaPolis panel discussion on secure agility/agile security held on December 16th. The panel consisted of Wouter Joosen, professor at K.U. Leuven and co-founder of Ubizen; Konstantin Beznosov, assistant professor at the University of British Columbia and author of some of the very few published papers on the topic; and Dirk Dussart, previously a security auditor at PwC and currently senior architect at the Belgian Post Group. The moderator was yours truly.
Unfortunately, the confrontation was muted, as one of the panel members, Pascal Van Cauwenbergh, could not be present due to an emergency. Pascal is a leading agile luminary in the Low Countries, and his radical, uncompromising insistence on agile orthodoxy would surely have made the debate more heated.
In the event, the panel seemed to agree on all matters of substance, even though it sometimes took a while to clear up misunderstandings and align terminology. So I believe I am reporting the consensus reached, although I have not cleared this text with the panelists.
Received wisdom has it that security should be assured by good controls that rely on documentation and audits. Agile processes therefore do not seem applicable, as short iterations and frequent releases would incur an unacceptable overhead. In practice, however, development organisations rarely put such assurance procedures in place. On the other hand, a great many security flaws can be traced to poor-quality software. Since agile methods have been shown to improve quality, their adoption is likely to improve application security significantly.
Next, the question of security requirements gathering was broached. In agile processes, requirements are traditionally expressed as user stories. These specify how the user interacts with the system in order to create value; they are akin to use cases. In the security community, abuse cases, tales of how systems fail to uphold security guarantees, have arisen. At the very least, the user story concept would need to be extended to encompass abuse cases. Even so, this would not comprehensively cover security requirements, since these are often pervasive system qualities, not limited to specific scenarios.
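As a sketch of what an abuse case might look like next to the user story it threatens: the pair below expresses both as executable checks. The `AbuseCaseSketch` class, its naive authenticator and the injection payload are all hypothetical, invented purely to illustrate the shape of such a pairing.

```java
// Hypothetical sketch: a user story ("a registered user can log in")
// paired with an abuse case ("an attacker cannot log in via SQL injection").
// The Authenticator logic is invented for illustration only.
import java.util.Map;

public class AbuseCaseSketch {

    // Naive in-memory credential store standing in for the real system.
    static final Map<String, String> USERS = Map.of("alice", "s3cret");

    static boolean authenticate(String user, String password) {
        // Reject inputs containing SQL metacharacters up front, so a
        // classic injection payload never reaches a query.
        if (user.contains("'") || user.contains(";")) {
            return false;
        }
        return password.equals(USERS.get(user));
    }

    public static void main(String[] args) {
        // User story: the legitimate interaction creates value.
        if (!authenticate("alice", "s3cret"))
            throw new AssertionError("user story failed");

        // Abuse case: the expected result is *failure* to get in.
        if (authenticate("alice' OR '1'='1", "anything"))
            throw new AssertionError("abuse case failed");

        System.out.println("user story and abuse case both hold");
    }
}
```

Note how the abuse case passes precisely when the interaction fails, which is what makes it awkward to express in the value-creating vocabulary of ordinary user stories.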
Agile methods emphasize collective responsibility and discourage specialisation within a team. So the panel considered the role of the security expert in an agile process. Are security experts external to the team needed at all? While application security would benefit most from raising the security awareness and skills of team members, the panel felt that external experts continue to have a role. On the one hand, they can train, coach or mentor the team. On the other hand, auditing remains an essential part of security assurance practice, and an audit must, by definition, be performed by an outsider.
Agile planning has two drivers: business value and effort estimates. The planning game seeks to maximize business value within a given effort budget. The unit of planning granularity tends to be the user story. Since user stories cannot adequately capture all security requirements, the traditional agile planning game would need tweaking to reliably plan the implementation of security features. But since security qualities add business value, the principle of prioritizing and planning features according to how much value they add still stands.
Good post. I've got a couple (well, three) comments. First, Agile is not equal to XP. Most of the comments seem directed at XP, while there are a number of named methodologies in the Agile space, including Crystal Clear, Scrum, DSDM, FDD and others.
Dealing with constraint requirements isn't well documented in the XP literature, so I'm not surprised that the panel members didn't know how it was done. You write a story and mark it "Constraint". Then it has to be evaluated as part of every other story. It may spawn substories of its own, of course. It may also spawn acceptance tests in each story. See "User Stories Applied" by Mike Cohn for a discussion of that and many other issues.
This is the first I've heard of abuse cases, but the concept certainly makes sense. I'd handle them by writing an executable acceptance test (using FIT/FitNesse) where the expected result is failure to access the system, or whatever the safe result is supposed to be.
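A sketch of what the commenter describes, in plain Java rather than as a real FIT/FitNesse fixture (a fixture would express the same expectation as a table). The `canRead` access check and the role and document names are invented for illustration; the point is that the acceptance test passes precisely because access is denied.

```java
// Hypothetical sketch of an abuse-case acceptance test: the expected
// result is failure to access the system. The access-control rule and
// all names below are invented for illustration.
import java.util.Set;

public class AccessDeniedAcceptanceTest {

    // Minimal stand-in for the system under test: only the "admin"
    // role may read the payroll document.
    static final Set<String> ALLOWED_ROLES = Set.of("admin");

    static boolean canRead(String role, String document) {
        if ("payroll".equals(document)) {
            return ALLOWED_ROLES.contains(role);
        }
        return true; // other documents are unrestricted in this sketch
    }

    public static void main(String[] args) {
        // Ordinary story: an authorized role gets through.
        if (!canRead("admin", "payroll"))
            throw new AssertionError("admin should have access");

        // Abuse case: denial is the safe, expected outcome.
        if (canRead("visitor", "payroll"))
            throw new AssertionError("visitor must be denied");

        System.out.println("access is denied exactly where the abuse case requires");
    }
}
```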
"On the other hand, a great many security flaws can be traced to poor-quality software. Since agile methods have been shown to improve quality, their adoption is likely to improve application security significantly."
I was hoping to read some debate and arguments, and to see conclusions evolve. Frankly, it was rather disappointing to just read a conclusion like this, which is too generic to be useful.
It is mentioned here that agile methods have been shown to improve quality, but there is no mention of metrics of quality. Application security is one of the metrics that make up quality, not something that follows from quality. Hence, if a method or approach improves application security, we can say that it also improves quality, but not vice versa.
You can indeed argue that if security improves, quality improves, and hence quality follows from improved security, not the other way round. What was implicit in the JavaPolis panel observation, however, is, I believe, the following: software quality is inversely correlated with the number of bugs in the application. Bugs are not first and foremost failings in the security guarantees a program offers, but simply areas where the program behaves differently from expectations. As such, they can become security vulnerabilities: attackers may find ways to turn such unexpected behavior to their advantage.
This seems like a strange kind of logic: an empty program is secure from a security perspective, but what is its quality? Security is only one aspect of quality! Making an application more secure does not guarantee that its overall quality improves -- not if, by improving security, you break some user requirements. The opposite also fails: you can improve the overall quality of a program while decreasing its security!