Every time I start a new Rails project I put off writing tasks to analyze the code's quality, because it takes time and time is, you know, finite. So I've decided to extract that code into a Rails plugin, which I call metric_fu.
It's a bunch of rake tasks that produce reports on code coverage (using Rcov), cyclomatic complexity (using Saikuro), flog scores (using Flog), and Rails stats (using 'rake stats'). It knows when it's being run inside a CruiseControl.rb build and puts its output in the Custom Build Artifacts folder, so when you view a build you see this:
The coverage report is your standard rcov report:
Flog output is thrown into an HTML file:
At the end, metric_fu calculates the average flog score per method. You might want to check out my previous posts on what to do with a Flog report: The Method Hit List and When You Should Ignore Metrics.
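If you're wondering what "average flog score per method" means, it's just the sum of the per-method flog scores divided by the number of methods. A quick illustration (the scores below are made-up data, not real flog output):

# Hypothetical per-method flog scores; the real numbers come out of flog's report.
scores = { 'User#authenticate' => 22.3, 'User#full_name' => 3.1, 'Cart#total' => 9.8 }
average = scores.values.inject(0.0) { |sum, s| sum + s } / scores.size
puts format("Average flog score per method: %.1f", average)  # => 11.7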
Saikuro's output is the same as always (I changed the warning and error levels for this pic; more on how I did that later):
And 'rake stats' is always useful:
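Incidentally, if you're curious how metric_fu knows it's inside a CruiseControl.rb build: CC.rb sets the CC_BUILD_ARTIFACTS environment variable while a build runs, so the detection can be as simple as checking for it. Here's a sketch of the idea (not the plugin's actual source, and the local fallback directory is my assumption):

# Sketch: choose an output directory for reports. CC_BUILD_ARTIFACTS is set
# by CruiseControl.rb during a build; the fallback path here is made up.
def report_directory
  ENV['CC_BUILD_ARTIFACTS'] || 'tmp/metric_fu'
end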
So how do you get all these reports?

1. Install Flog:
sudo gem install flog

2. Install Rcov:
sudo gem install rcov

3. Install metric_fu (from the base of your Rails app):
ruby script/plugin install http://metric-fu.rubyforge.org/svn/tags/REL_0_5_1/metric_fu/

4. Run all the metrics:
rake metrics:all
That should work fine if you use standard Rails testing and you like my defaults. But what if you use a combination of RSpec and stock Rails testing? Then you can insert this into your Rakefile:
namespace :metrics do
  TEST_PATHS_FOR_RCOV = ['spec/**/*_spec.rb', 'test/**/*_test.rb']
end
The namespace isn't strictly necessary, but I like it because it makes the intent clear. Multiple paths are useful if, like on my last project, you need to be specific about which tests to run because some tests go after external services (and the people who manage them get cranky if you hammer 'em a lot).
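For example, on a project like that you might swap the catch-all globs for narrower ones so the coverage run never touches the external-service tests (these particular paths are hypothetical; use whatever fits your app):

namespace :metrics do
  # Only fast, self-contained tests; specs that hit external services
  # are deliberately left out of the coverage run.
  TEST_PATHS_FOR_RCOV = ['spec/models/**/*_spec.rb',
                         'spec/controllers/**/*_spec.rb',
                         'test/unit/**/*_test.rb']
end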
If you also want Rcov to sort by lines of code (loc) and want more aggressive cyclomatic complexity settings, then do this:
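(A sketch from memory follows: treat the constant names RCOV_OPTIONS and SAIKURO_OPTIONS as assumptions and check the plugin's rake tasks for the exact spelling. The underlying flags themselves are real; rcov accepts --sort loc, and Saikuro takes --warn_cyclo and --error_cyclo.)

namespace :metrics do
  TEST_PATHS_FOR_RCOV = ['spec/**/*_spec.rb', 'test/**/*_test.rb']

  # Assumed constant names -- verify against metric_fu's rake tasks.
  RCOV_OPTIONS    = '--sort loc'                 # rcov: sort the report by lines of code
  SAIKURO_OPTIONS = { '--warn_cyclo'  => '3',    # Saikuro: warn on complexity over 3
                      '--error_cyclo' => '4' }   # and error on complexity over 4
end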