A suite of tests in which each test represents one scenario of a feature. FeatureSpec is intended for writing tests that are "higher level" than unit tests, for example, integration tests, functional tests, and acceptance tests. You can use FeatureSpec for unit testing if you prefer, however.

Here's an example:
import org.scalatest.FeatureSpec
import org.scalatest.GivenWhenThen
import scala.collection.mutable.Stack

class StackFeatureSpec extends FeatureSpec with GivenWhenThen {

  feature("The user can pop an element off the top of the stack") {

    info("As a programmer")
    info("I want to be able to pop items off the stack")
    info("So that I can get them in last-in-first-out order")

    scenario("pop is invoked on a non-empty stack") {

      given("a non-empty stack")
      val stack = new Stack[Int]
      stack.push(1)
      stack.push(2)
      val oldSize = stack.size

      when("pop is invoked on the stack")
      val result = stack.pop()

      then("the most recently pushed element should be returned")
      assert(result === 2)

      and("the stack should have one less item than before")
      assert(stack.size === oldSize - 1)
    }

    scenario("pop is invoked on an empty stack") {

      given("an empty stack")
      val emptyStack = new Stack[String]

      when("pop is invoked on the stack")
      then("NoSuchElementException should be thrown")
      intercept[NoSuchElementException] {
        emptyStack.pop()
      }

      and("the stack should still be empty")
      assert(emptyStack.isEmpty)
    }
  }
}
A FeatureSpec contains feature clauses and scenarios. You define a feature clause with feature, and a scenario with scenario. Both feature and scenario are methods, defined in FeatureSpec, which will be invoked by the primary constructor of StackFeatureSpec.
A feature clause describes a feature of the subject (class or other entity) you are specifying and testing. In the previous example, the subject under specification and test is a stack. The feature being specified and tested is the ability for a user (a programmer in this case) to pop an element off the top of the stack. With each scenario you provide a string (the spec text) that specifies the behavior of the subject for one scenario in which the feature may be used, and a block of code that tests that behavior. You place the spec text between the parentheses, followed by the test code between curly braces. The test code will be wrapped up as a function passed as a by-name parameter to scenario, which will register the test for later execution.
A FeatureSpec's lifecycle has two phases: the registration phase and the ready phase. It starts in the registration phase and enters the ready phase the first time run is called on it. It then remains in the ready phase for the remainder of its lifetime. Scenarios can only be registered with the scenario method while the FeatureSpec is in its registration phase. Any attempt to register a scenario after the FeatureSpec has entered its ready phase, i.e., after run has been invoked on the FeatureSpec, will be met with a thrown TestRegistrationClosedException. The recommended style of using FeatureSpec is to register tests during object construction, as is done in all the examples shown here. If you keep to the recommended style, you should never see a TestRegistrationClosedException.
Each scenario represents one test. The name of the test is the spec text passed to the scenario method. The feature name does not appear as part of the test name. In a FeatureSpec, therefore, you must take care to ensure that each test has a unique name (in other words, that each scenario has unique spec text).
When you run a FeatureSpec, it will send Formatters in the events it sends to the Reporter. ScalaTest's built-in reporters will report these events in such a way that the output is easy to read as an informal specification of the subject being tested. For example, if you ran StackFeatureSpec from within the Scala interpreter:
scala> (new StackFeatureSpec).execute()
You would see:
Feature: The user can pop an element off the top of the stack
  As a programmer
  I want to be able to pop items off the stack
  So that I can get them in last-in-first-out order
  Scenario: pop is invoked on a non-empty stack
    Given a non-empty stack
    When pop is invoked on the stack
    Then the most recently pushed element should be returned
    And the stack should have one less item than before
  Scenario: pop is invoked on an empty stack
    Given an empty stack
    When pop is invoked on the stack
    Then NoSuchElementException should be thrown
    And the stack should still be empty
A test fixture is composed of the objects and other artifacts (such as files, sockets, database connections, etc.) used by tests to do their work. You can use fixtures in FeatureSpecs with the same approaches suggested for Suite in its documentation. The same text that appears in the test fixture section of Suite's documentation is repeated here, with examples changed from Suite to FeatureSpec.
If a fixture is used by only one test, then the definitions of the fixture objects can be local to the test function, such as the objects assigned to stack and emptyStack in the previous StackFeatureSpec examples. If multiple tests need to share a fixture, the best approach is to assign them to instance variables. Here's a (very contrived) example, in which the object assigned to shared is used by multiple test functions:
import org.scalatest.FeatureSpec

class ArithmeticFeatureSpec extends FeatureSpec {

  // Sharing immutable fixture objects via instance variables
  val shared = 5

  feature("Integer arithmetic") {

    scenario("addition") {
      val sum = 2 + 3
      assert(sum === shared)
    }

    scenario("subtraction") {
      val diff = 7 - 2
      assert(diff === shared)
    }
  }
}
In some cases, however, shared mutable fixture objects may be changed by tests such that they need to be recreated or reinitialized before each test. Shared resources such as files or database connections may also need to be created and initialized before, and cleaned up after, each test. JUnit offers methods setUp and tearDown for this purpose. In ScalaTest, you can use the BeforeAndAfterEach trait, which will be described later, to implement an approach similar to JUnit's setUp and tearDown; however, this approach often involves reassigning vars between tests. Before going that route, you should consider some approaches that avoid vars. One approach is to write one or more create-fixture methods that return a new instance of a needed object (or a tuple or case class holding new instances of multiple objects) each time they are called. You can then call a create-fixture method at the beginning of each test that needs the fixture, storing the fixture object or objects in local variables. Here's an example:
import org.scalatest.FeatureSpec
import scala.collection.mutable.ListBuffer

class MyFeatureSpec extends FeatureSpec {

  // create objects needed by tests and return as a tuple
  def createFixture = (
    new StringBuilder("ScalaTest is "),
    new ListBuffer[String]
  )

  feature("The create-fixture approach") {

    scenario("shared fixture objects are mutated by a test") {
      val (builder, lbuf) = createFixture
      builder.append("easy!")
      assert(builder.toString === "ScalaTest is easy!")
      assert(lbuf.isEmpty)
      lbuf += "sweet"
    }

    scenario("test gets a fresh copy of the shared fixture") {
      val (builder, lbuf) = createFixture
      builder.append("fun!")
      assert(builder.toString === "ScalaTest is fun!")
      assert(lbuf.isEmpty)
    }
  }
}
If different tests in the same FeatureSpec require different fixtures, you can create multiple create-fixture methods and call the method (or methods) needed by each test at the beginning of the test. If every test requires the same set of mutable fixture objects, one other approach you can take is to simply make them vals and mix in trait OneInstancePerTest. If you mix in OneInstancePerTest, each test will be run in its own instance of the FeatureSpec, similar to the way JUnit tests are executed.
Although the create-fixture and OneInstancePerTest approaches take care of setting up a fixture before each test, they don't address the problem of cleaning up a fixture after the test completes. In this situation, one option is to mix in the BeforeAndAfterEach trait. BeforeAndAfterEach's beforeEach method will be run before, and its afterEach method after, each test (like JUnit's setUp and tearDown methods, respectively). For example, you could create a temporary file before each test, and delete it afterwards, like this:
import org.scalatest.FeatureSpec
import org.scalatest.BeforeAndAfterEach
import java.io.FileReader
import java.io.FileWriter
import java.io.File

class FileIoFeatureSpec extends FeatureSpec with BeforeAndAfterEach {

  private val FileName = "TempFile.txt"
  private var reader: FileReader = _

  // Set up the temp file needed by the test
  override def beforeEach() {
    val writer = new FileWriter(FileName)
    try {
      writer.write("Hello, test!")
    }
    finally {
      writer.close()
    }

    // Create the reader needed by the test
    reader = new FileReader(FileName)
  }

  // Close and delete the temp file
  override def afterEach() {
    reader.close()
    val file = new File(FileName)
    file.delete()
  }

  feature("Reading and writing files") {

    scenario("reading from a temp file") {
      var builder = new StringBuilder
      var c = reader.read()
      while (c != -1) {
        builder.append(c.toChar)
        c = reader.read()
      }
      assert(builder.toString === "Hello, test!")
    }

    scenario("reading first char of a temp file") {
      assert(reader.read() === 'H')
    }

    scenario("no fixture is passed") {
      assert(1 + 1 === 2)
    }
  }
}
In this example, the instance variable reader is a var, so it can be reinitialized between tests by the beforeEach method. Although the BeforeAndAfterEach approach should be familiar to users of most other test frameworks, ScalaTest provides another alternative that also allows you to perform cleanup after each test: overriding withFixture(NoArgTest).
To execute each test, Suite's implementation of the runTest method wraps an invocation of the appropriate test method in a no-arg function. runTest passes that test function to the withFixture(NoArgTest) method, which is responsible for actually running the test by invoking the function. Suite's implementation of withFixture(NoArgTest) simply invokes the function, like this:
// Default implementation
protected def withFixture(test: NoArgTest) {
  test()
}
The withFixture(NoArgTest) method exists so that you can override it and set a fixture up before, and clean it up after, each test. Thus, the previous temp file example could also be implemented without mixing in BeforeAndAfterEach, like this:
import org.scalatest.FeatureSpec
import java.io.FileReader
import java.io.FileWriter
import java.io.File

class FileIoFeatureSpec extends FeatureSpec {

  private var reader: FileReader = _

  override def withFixture(test: NoArgTest) {

    val FileName = "TempFile.txt"

    // Set up the temp file needed by the test
    val writer = new FileWriter(FileName)
    try {
      writer.write("Hello, test!")
    }
    finally {
      writer.close()
    }

    // Create the reader needed by the test
    reader = new FileReader(FileName)

    try {
      test() // Invoke the test function
    }
    finally {
      // Close and delete the temp file
      reader.close()
      val file = new File(FileName)
      file.delete()
    }
  }

  feature("Reading and writing files") {

    scenario("reading from a temp file") {
      var builder = new StringBuilder
      var c = reader.read()
      while (c != -1) {
        builder.append(c.toChar)
        c = reader.read()
      }
      assert(builder.toString === "Hello, test!")
    }

    scenario("reading first char of a temp file") {
      assert(reader.read() === 'H')
    }

    scenario("no fixture is passed") {
      assert(1 + 1 === 2)
    }
  }
}
If you prefer to keep your test classes immutable, one final variation is to use the FixtureFeatureSpec trait from the org.scalatest.fixture package. Tests in an org.scalatest.fixture.FixtureFeatureSpec can have a fixture object passed in as a parameter. You must indicate the type of the fixture object by defining the FixtureParam type member and define a withFixture method that takes a one-arg test function. (A FixtureFeatureSpec has two overloaded withFixture methods, therefore: one that takes a OneArgTest and the other, inherited from Suite, that takes a NoArgTest.)
Inside the withFixture(OneArgTest) method, you create the fixture, pass it into the test function, then perform any necessary cleanup after the test function returns. Instead of invoking each test directly, a FixtureFeatureSpec will pass a function that invokes the code of a test to withFixture(OneArgTest). Your withFixture(OneArgTest) method, therefore, is responsible for actually running the code of the test by invoking the test function. For example, you could pass the temp file reader fixture to each test that needs it by overriding the withFixture(OneArgTest) method of a FixtureFeatureSpec, like this:
import org.scalatest.fixture.FixtureFeatureSpec
import java.io.FileReader
import java.io.FileWriter
import java.io.File

class MySuite extends FixtureFeatureSpec {

  type FixtureParam = FileReader

  def withFixture(test: OneArgTest) {

    val FileName = "TempFile.txt"

    // Set up the temp file needed by the test
    val writer = new FileWriter(FileName)
    try {
      writer.write("Hello, test!")
    }
    finally {
      writer.close()
    }

    // Create the reader needed by the test
    val reader = new FileReader(FileName)

    try {
      // Run the test using the temp file
      test(reader)
    }
    finally {
      // Close and delete the temp file
      reader.close()
      val file = new File(FileName)
      file.delete()
    }
  }

  feature("Reading and writing files") {

    scenario("reading from a temp file") { reader =>
      var builder = new StringBuilder
      var c = reader.read()
      while (c != -1) {
        builder.append(c.toChar)
        c = reader.read()
      }
      assert(builder.toString === "Hello, test!")
    }

    scenario("reading first char of a temp file") { reader =>
      assert(reader.read() === 'H')
    }

    scenario("no fixture is passed") { () =>
      assert(1 + 1 === 2)
    }
  }
}
It is worth noting that the only difference in the test code between the mutable BeforeAndAfterEach approach shown here and the immutable FixtureFeatureSpec approach shown previously is that two of the FixtureFeatureSpec's test functions take a FileReader as a parameter via the "reader =>" at the beginning of the function. Otherwise the test code is identical. One benefit of the explicit parameter is that, as demonstrated by the "no fixture is passed" scenario, a FixtureFeatureSpec test need not take the fixture. So you can have some tests that take a fixture, and others that don't. In this case, the FixtureFeatureSpec provides documentation indicating which tests use the fixture and which don't, whereas the BeforeAndAfterEach approach does not. (If you want to combine tests that take different fixture types in the same FeatureSpec, you can use MultipleFixtureFeatureSpec.)
If you want to execute code before and after all tests (and nested suites) in a suite, such as you could do with @BeforeClass and @AfterClass annotations in JUnit 4, you can use the beforeAll and afterAll methods of BeforeAndAfterAll. See the documentation for BeforeAndAfterAll for an example.
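As a minimal sketch of that approach (the class name and println placeholders here are invented for illustration; in real code you would open and close an actual shared resource), beforeAll and afterAll bracket the entire suite rather than each test:

```scala
import org.scalatest.FeatureSpec
import org.scalatest.BeforeAndAfterAll

// Hypothetical example: beforeAll runs once before the first test in
// the suite, and afterAll runs once after the last, so an expensive
// shared resource is set up and torn down exactly once.
class SharedResourceFeatureSpec extends FeatureSpec with BeforeAndAfterAll {

  override def beforeAll() {
    println("starting shared resource") // e.g., open a database connection pool
  }

  override def afterAll() {
    println("shutting down shared resource") // clean it up exactly once
  }

  feature("Using a suite-wide resource") {

    scenario("a test uses the shared resource") {
      assert(1 + 1 === 2)
    }
  }
}
```

Because the resource outlives individual tests, the tests themselves must not mutate it in ways that would make them order-dependent.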
Sometimes you may want to run the same test code on different fixture objects. In other words, you may want to write tests that are "shared" by different fixture objects. To accomplish this in a FeatureSpec, you first place shared tests (i.e., shared scenarios) in behavior functions. These behavior functions will be invoked during the construction phase of any FeatureSpec that uses them, so that the scenarios they contain will be registered as scenarios in that FeatureSpec.

For example, given this stack class:
import scala.collection.mutable.ListBuffer

class Stack[T] {

  val MAX = 10
  private var buf = new ListBuffer[T]

  def push(o: T) {
    if (!full)
      o +=: buf
    else
      throw new IllegalStateException("can't push onto a full stack")
  }

  def pop(): T = {
    if (!empty)
      buf.remove(0)
    else
      throw new IllegalStateException("can't pop an empty stack")
  }

  def peek: T = {
    if (!empty)
      buf(0)
    else
      throw new IllegalStateException("can't peek an empty stack")
  }

  def full: Boolean = buf.size == MAX
  def empty: Boolean = buf.size == 0
  def size = buf.size

  override def toString = buf.mkString("Stack(", ", ", ")")
}
You may want to test the Stack class in different states: empty, full, with one item, with one item less than capacity, etc. You may find you have several scenarios that make sense any time the stack is non-empty. Thus you'd ideally want to run those same scenarios for three stack fixture objects: a full stack, a stack with one item, and a stack with one item less than capacity. With shared tests, you can factor these scenarios out into a behavior function, into which you pass the stack fixture to use when running the tests. So in your FeatureSpec for stack, you'd invoke the behavior function three times, passing in each of the three stack fixtures so that the shared scenarios are run for all three fixtures. You can define a behavior function that encapsulates these shared scenarios inside the FeatureSpec that uses them. If they are shared between different FeatureSpecs, however, you could also define them in a separate trait that is mixed into each FeatureSpec that uses them.
For example, here the nonEmptyStack behavior function (in this case, a behavior method) is defined in a trait along with another method containing shared scenarios for non-full stacks:
import org.scalatest.FeatureSpec
import org.scalatest.GivenWhenThen
import org.scalatestexamples.helpers.Stack

trait FeatureSpecStackBehaviors { this: FeatureSpec with GivenWhenThen =>

  def nonEmptyStack(createNonEmptyStack: => Stack[Int], lastItemAdded: Int) {

    scenario("empty is invoked on this non-empty stack: " + createNonEmptyStack.toString) {
      given("a non-empty stack")
      val stack = createNonEmptyStack
      when("empty is invoked on the stack")
      then("empty returns false")
      assert(!stack.empty)
    }

    scenario("peek is invoked on this non-empty stack: " + createNonEmptyStack.toString) {
      given("a non-empty stack")
      val stack = createNonEmptyStack
      val size = stack.size
      when("peek is invoked on the stack")
      then("peek returns the last item added")
      assert(stack.peek === lastItemAdded)
      and("the size of the stack is the same as before")
      assert(stack.size === size)
    }

    scenario("pop is invoked on this non-empty stack: " + createNonEmptyStack.toString) {
      given("a non-empty stack")
      val stack = createNonEmptyStack
      val size = stack.size
      when("pop is invoked on the stack")
      then("pop returns the last item added")
      assert(stack.pop === lastItemAdded)
      and("the size of the stack is one less than before")
      assert(stack.size === size - 1)
    }
  }

  def nonFullStack(createNonFullStack: => Stack[Int]) {

    scenario("full is invoked on this non-full stack: " + createNonFullStack.toString) {
      given("a non-full stack")
      val stack = createNonFullStack
      when("full is invoked on the stack")
      then("full returns false")
      assert(!stack.full)
    }

    scenario("push is invoked on this non-full stack: " + createNonFullStack.toString) {
      given("a non-full stack")
      val stack = createNonFullStack
      val size = stack.size
      when("push is invoked on the stack")
      stack.push(7)
      then("the size of the stack is one greater than before")
      assert(stack.size === size + 1)
      and("the top of the stack contains the pushed value")
      assert(stack.peek === 7)
    }
  }
}
Given these behavior functions, you could invoke them directly, but FeatureSpec offers a DSL for the purpose, which looks like this:

scenariosFor(nonEmptyStack(stackWithOneItem, lastValuePushed))
scenariosFor(nonFullStack(stackWithOneItem))
If you prefer to use an imperative style to change fixtures, for example by mixing in BeforeAndAfterEach and reassigning a stack var in beforeEach, you could write your behavior functions in the context of that var, which means you wouldn't need to pass in the stack fixture because it would already be in scope inside the behavior function. In that case, your code would look like this:

scenariosFor(nonEmptyStack) // assuming lastValuePushed is also in scope inside nonEmptyStack
scenariosFor(nonFullStack)
The recommended style, however, is the functional, pass-all-the-needed-values-in style. Here's an example:
import org.scalatest.FeatureSpec
import org.scalatest.GivenWhenThen
import org.scalatestexamples.helpers.Stack

class StackFeatureSpec extends FeatureSpec with GivenWhenThen with FeatureSpecStackBehaviors {

  // Stack fixture creation methods
  def emptyStack = new Stack[Int]

  def fullStack = {
    val stack = new Stack[Int]
    for (i <- 0 until stack.MAX)
      stack.push(i)
    stack
  }

  def stackWithOneItem = {
    val stack = new Stack[Int]
    stack.push(9)
    stack
  }

  def stackWithOneItemLessThanCapacity = {
    val stack = new Stack[Int]
    for (i <- 1 to 9)
      stack.push(i)
    stack
  }

  val lastValuePushed = 9

  feature("A Stack is pushed and popped") {

    scenario("empty is invoked on an empty stack") {
      given("an empty stack")
      val stack = emptyStack
      when("empty is invoked on the stack")
      then("empty returns true")
      assert(stack.empty)
    }

    scenario("peek is invoked on an empty stack") {
      given("an empty stack")
      val stack = emptyStack
      when("peek is invoked on the stack")
      then("peek throws IllegalStateException")
      intercept[IllegalStateException] {
        stack.peek
      }
    }

    scenario("pop is invoked on an empty stack") {
      given("an empty stack")
      val stack = emptyStack
      when("pop is invoked on the stack")
      then("pop throws IllegalStateException")
      intercept[IllegalStateException] {
        stack.pop
      }
    }

    scenariosFor(nonEmptyStack(stackWithOneItem, lastValuePushed))
    scenariosFor(nonFullStack(stackWithOneItem))

    scenariosFor(nonEmptyStack(stackWithOneItemLessThanCapacity, lastValuePushed))
    scenariosFor(nonFullStack(stackWithOneItemLessThanCapacity))

    scenario("full is invoked on a full stack") {
      given("a full stack")
      val stack = fullStack
      when("full is invoked on the stack")
      then("full returns true")
      assert(stack.full)
    }

    scenariosFor(nonEmptyStack(fullStack, lastValuePushed))

    scenario("push is invoked on a full stack") {
      given("a full stack")
      val stack = fullStack
      when("push is invoked on the stack")
      then("push throws IllegalStateException")
      intercept[IllegalStateException] {
        stack.push(10)
      }
    }
  }
}
If you load these classes into the Scala interpreter (with ScalaTest's JAR file on the class path) and execute them, you'll see:
scala> (new StackFeatureSpec).execute()
Feature: A Stack is pushed and popped
  Scenario: empty is invoked on an empty stack
    Given an empty stack
    When empty is invoked on the stack
    Then empty returns true
  Scenario: peek is invoked on an empty stack
    Given an empty stack
    When peek is invoked on the stack
    Then peek throws IllegalStateException
  Scenario: pop is invoked on an empty stack
    Given an empty stack
    When pop is invoked on the stack
    Then pop throws IllegalStateException
  Scenario: empty is invoked on this non-empty stack: Stack(9)
    Given a non-empty stack
    When empty is invoked on the stack
    Then empty returns false
  Scenario: peek is invoked on this non-empty stack: Stack(9)
    Given a non-empty stack
    When peek is invoked on the stack
    Then peek returns the last item added
    And the size of the stack is the same as before
  Scenario: pop is invoked on this non-empty stack: Stack(9)
    Given a non-empty stack
    When pop is invoked on the stack
    Then pop returns the last item added
    And the size of the stack is one less than before
  Scenario: full is invoked on this non-full stack: Stack(9)
    Given a non-full stack
    When full is invoked on the stack
    Then full returns false
  Scenario: push is invoked on this non-full stack: Stack(9)
    Given a non-full stack
    When push is invoked on the stack
    Then the size of the stack is one greater than before
    And the top of the stack contains the pushed value
  Scenario: empty is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
    Given a non-empty stack
    When empty is invoked on the stack
    Then empty returns false
  Scenario: peek is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
    Given a non-empty stack
    When peek is invoked on the stack
    Then peek returns the last item added
    And the size of the stack is the same as before
  Scenario: pop is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
    Given a non-empty stack
    When pop is invoked on the stack
    Then pop returns the last item added
    And the size of the stack is one less than before
  Scenario: full is invoked on this non-full stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
    Given a non-full stack
    When full is invoked on the stack
    Then full returns false
  Scenario: push is invoked on this non-full stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
    Given a non-full stack
    When push is invoked on the stack
    Then the size of the stack is one greater than before
    And the top of the stack contains the pushed value
  Scenario: full is invoked on a full stack
    Given a full stack
    When full is invoked on the stack
    Then full returns true
  Scenario: empty is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
    Given a non-empty stack
    When empty is invoked on the stack
    Then empty returns false
  Scenario: peek is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
    Given a non-empty stack
    When peek is invoked on the stack
    Then peek returns the last item added
    And the size of the stack is the same as before
  Scenario: pop is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
    Given a non-empty stack
    When pop is invoked on the stack
    Then pop returns the last item added
    And the size of the stack is one less than before
  Scenario: push is invoked on a full stack
    Given a full stack
    When push is invoked on the stack
    Then push throws IllegalStateException
One thing to keep in mind when using shared tests is that in ScalaTest, each test in a suite must have a unique name. If you register the same tests repeatedly in the same suite, one problem you may encounter is an exception at runtime complaining that multiple tests are being registered with the same test name. In a FeatureSpec there is no nesting construct analogous to Spec's describe clause. Therefore, you need to do a bit of extra work to ensure that the test names are unique. If a duplicate test name problem shows up in a FeatureSpec, you'll need to pass in a prefix or suffix string to add to each test name. You can pass this string the same way you pass any other data needed by the shared tests, or just call toString on the shared fixture object. This is the approach taken by the previous FeatureSpecStackBehaviors example.
Given this FeatureSpecStackBehaviors trait, calling it with the stackWithOneItem fixture, like this:
scenariosFor(nonEmptyStack(stackWithOneItem, lastValuePushed))
yields test names:
empty is invoked on this non-empty stack: Stack(9)
peek is invoked on this non-empty stack: Stack(9)
pop is invoked on this non-empty stack: Stack(9)
Whereas calling it with the stackWithOneItemLessThanCapacity fixture, like this:
scenariosFor(nonEmptyStack(stackWithOneItemLessThanCapacity, lastValuePushed))
yields different test names:
empty is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
peek is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
pop is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
A FeatureSpec's tests may be classified into groups by tagging them with string names. As with any suite, when executing a FeatureSpec, groups of tests can optionally be included and/or excluded. To tag a FeatureSpec's tests, you pass objects that extend abstract class org.scalatest.Tag to the methods that register tests, scenario and ignore. Class Tag takes one parameter, a string name. If you have created Java annotation interfaces for use as group names in direct subclasses of org.scalatest.Suite, then you will probably want to use group names on your FeatureSpecs that match. To do so, simply pass the fully qualified names of the Java interfaces to the Tag constructor. For example, if you've defined Java annotation interfaces with fully qualified names com.mycompany.groups.SlowTest and com.mycompany.groups.DbTest, then you could create matching groups for FeatureSpecs like this:
import org.scalatest.Tag

object SlowTest extends Tag("com.mycompany.groups.SlowTest")
object DbTest extends Tag("com.mycompany.groups.DbTest")
Given these definitions, you could place FeatureSpec tests into groups like this:
import org.scalatest.FeatureSpec

class ArithmeticFeatureSpec extends FeatureSpec {

  // Sharing fixture objects via instance variables
  val shared = 5

  feature("Integer arithmetic") {

    scenario("addition", SlowTest) {
      val sum = 2 + 3
      assert(sum === shared)
    }

    scenario("subtraction", SlowTest, DbTest) {
      val diff = 7 - 2
      assert(diff === shared)
    }
  }
}
This code marks both tests, "addition" and "subtraction," with the com.mycompany.groups.SlowTest tag, and test "subtraction" with the com.mycompany.groups.DbTest tag.
The primary run method takes a Filter, whose constructor takes an optional Set[String] called tagsToInclude and a Set[String] called tagsToExclude. If tagsToInclude is None, all tests will be run except those belonging to tags listed in the tagsToExclude Set. If tagsToInclude is defined, only tests belonging to tags mentioned in the tagsToInclude set, and not mentioned in tagsToExclude, will be run.
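As an illustrative sketch of constructing such a Filter (hedged: the exact Filter factory signature has varied across ScalaTest versions, so treat the parameter names here as assumptions), you might include only tests tagged as slow while excluding those that also touch the database:

```scala
import org.scalatest.Filter

// Hypothetical usage: run only tests tagged SlowTest, but skip any of
// those that are also tagged DbTest. With tagsToInclude of None, all
// tests would run except those in tagsToExclude.
val filter = Filter(
  Some(Set("com.mycompany.groups.SlowTest")), // tagsToInclude
  Set("com.mycompany.groups.DbTest")          // tagsToExclude
)
```

A runner would then pass this filter to the suite's run method to restrict which registered scenarios actually execute.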
To support the common use case of "temporarily" disabling a test, with the good intention of resurrecting the test at a later time, FeatureSpec provides registration methods that start with ignore instead of scenario. For example, to temporarily disable the test named "addition", just change "scenario" into "ignore", like this:
import org.scalatest.FeatureSpec

class ArithmeticFeatureSpec extends FeatureSpec {

  // Sharing fixture objects via instance variables
  val shared = 5

  feature("Integer arithmetic") {

    ignore("addition") {
      val sum = 2 + 3
      assert(sum === shared)
    }

    scenario("subtraction") {
      val diff = 7 - 2
      assert(diff === shared)
    }
  }
}
If you run this version of ArithmeticFeatureSpec with:

scala> (new ArithmeticFeatureSpec).execute()

It will run only "subtraction" and report that "addition" was ignored:

Feature: Integer arithmetic
  Scenario: addition !!! IGNORED !!!
  Scenario: subtraction
One of the parameters to the primary run method is a Reporter, which will collect and report information about the running suite of tests. Information about suites and tests that were run, whether tests succeeded or failed, and tests that were ignored will be passed to the Reporter as the suite runs. Most often the reporting done by default by FeatureSpec's methods will be sufficient, but occasionally you may wish to provide custom information to the Reporter from a test. For this purpose, an Informer that will forward information to the current Reporter is provided via the parameterless info method. You can pass the extra information to the Informer via its apply method. The Informer will then pass the information to the Reporter via an InfoProvided event.

Here's an example:
import org.scalatest.FeatureSpec

class ArithmeticFeatureSpec extends FeatureSpec {

  feature("Integer arithmetic") {

    scenario("addition") {
      val sum = 2 + 3
      assert(sum === 5)
      info("Addition seems to work")
    }

    scenario("subtraction") {
      val diff = 7 - 2
      assert(diff === 5)
    }
  }
}
If you run this ArithmeticFeatureSpec from the interpreter, you will see the following message included in the printed report:

Feature: Integer arithmetic
  Scenario: addition
    Addition seems to work
One use case for the Informer is to pass more information about a scenario to the reporter. For example, the GivenWhenThen trait provides methods that use the info provided implicitly by FeatureSpec to pass such information to the reporter. Here's an example:
import org.scalatest.FeatureSpec
import org.scalatest.GivenWhenThen

class ArithmeticFeatureSpec extends FeatureSpec with GivenWhenThen {

  feature("Integer arithmetic") {

    scenario("addition") {

      given("two integers")
      val x = 2
      val y = 3

      when("they are added")
      val sum = x + y

      then("the result is the sum of the two numbers")
      assert(sum === 5)
    }

    scenario("subtraction") {

      given("two integers")
      val x = 7
      val y = 2

      when("one is subtracted from the other")
      val diff = x - y

      then("the result is the difference of the two numbers")
      assert(diff === 5)
    }
  }
}
If you run this FeatureSpec
from the interpreter, you will see the following messages
included in the printed report:
scala> (new ArithmeticSpec).execute()
Feature: Integer arithmetic
  Scenario: addition
    Given two integers
    When they are added
    Then the result is the sum of the two numbers
  Scenario: subtraction
    Given two integers
    When one is subtracted from the other
    Then the result is the difference of the two numbers
A pending test is one that has been given a name but is not yet implemented. The purpose of pending tests is to facilitate a style of testing in which documentation of behavior is sketched out before tests are written to verify that behavior (and often, before the behavior of the system being tested is itself implemented). Such sketches form a kind of specification of what tests and functionality to implement later.
To support this style of testing, a test can be given a name that specifies one
bit of behavior required by the system being tested. The test can also include some code that
sends more information about the behavior to the reporter when the tests run. At the end of the test,
it can call method pending
, which will cause it to complete abruptly with TestPendingException
.
Because tests in ScalaTest can be designated as pending with TestPendingException
, both the test name and any information
sent to the reporter when running the test can appear in the report of a test run. (In other words,
the code of a pending test is executed just like any other test.) However, because the test completes abruptly
with TestPendingException
, the test will be reported as pending, to indicate
the actual test, and possibly the functionality, has not yet been implemented.
You can mark tests as pending in a FeatureSpec
like this:
import org.scalatest.FeatureSpec

class ArithmeticFeatureSpec extends FeatureSpec {

  // Sharing fixture objects via instance variables
  val shared = 5

  feature("Integer arithmetic") {

    scenario("addition") {
      val sum = 2 + 3
      assert(sum === shared)
    }

    scenario("subtraction") (pending)
  }
}
(Note: "(pending)
" is the body of the test. Thus the test contains just one statement, an invocation
of the pending
method, which throws TestPendingException
.)
If you run this version of ArithmeticFeatureSpec
with:
scala> (new ArithmeticFeatureSpec).execute()
it will run both tests, but report that subtraction
is pending. You'll see:
Feature: Integer arithmetic
  Scenario: addition
  Scenario: subtraction (pending)
One difference between an ignored test and a pending one is that an ignored test is intended to be used during significant refactorings of the code under test, when tests break and you don't want to spend the time to fix all of them immediately. You can mark some of those broken tests as ignored temporarily, so that you can focus the red bar on just the failing tests you actually want to fix immediately. Later you can go back and fix the ignored tests. In other words, by ignoring some failing tests temporarily, you can more easily notice failed tests that you actually want to fix. By contrast, a pending test is intended to be used before a test and/or the code under test is written. Pending indicates you've decided to write a test for a bit of behavior, but either you haven't written the test yet, or have only written part of it, or perhaps you've written the test but don't want to implement the behavior it tests until after you've implemented a different bit of behavior you realized you need first. Thus ignored tests are designed to facilitate refactoring of existing code whereas pending tests are designed to facilitate the creation of new code.
One other difference between ignored and pending tests is that ignored tests are implemented as a test tag that is
excluded by default. Thus an ignored test is never executed. By contrast, a pending test is implemented as a
test that throws TestPendingException
(which is what calling the pending
method does). Thus
the bodies of pending tests are executed up until they throw TestPendingException
. The reason for this difference
is that it enables your unfinished test to send InfoProvided
messages to the reporter before it completes
abruptly with TestPendingException
, as shown in the previous example on Informers
that used the GivenWhenThen
trait. For example, the following snippet in a FeatureSpec
:
feature("Integer arithmetic") {

  scenario("addition") {
    given("two integers")
    when("they are added")
    then("the result is the sum of the two numbers")
    pending
  }
  // ...
Would yield the following output when run in the interpreter:
Feature: Integer arithmetic
  Scenario: addition (pending)
    Given two integers
    When they are added
    Then the result is the sum of the two numbers
Class used via an implicit conversion to enable any two objects to be compared with
===
in assertions in tests.
Assert that an Option[String]
is None
.
Assert that an Option[String]
is None
.
If the condition is None
, this method returns normally.
Else, it throws TestFailedException
with the String
value of the Some
included in the TestFailedException
's
detail message.
This form of assert
is usually called in conjunction with an
implicit conversion to Equalizer
, using a ===
comparison, as in:
assert(a === b)
For more information on how this mechanism works, see the documentation for
Equalizer
.
the Option[String]
to assert
Assert that an Option[String]
is None
.
Assert that an Option[String]
is None
.
If the condition is None
, this method returns normally.
Else, it throws TestFailedException
with the String
value of the Some
, as well as the
String
obtained by invoking toString
on the
specified message
,
included in the TestFailedException
's detail message.
This form of assert
is usually called in conjunction with an
implicit conversion to Equalizer
, using a ===
comparison, as in:
assert(a === b, "extra info reported if assertion fails")
For more information on how this mechanism works, see the documentation for
Equalizer
.
the Option[String]
to assert
An object whose toString
method returns a message to include in a failure report.
Assert that a boolean condition, described in String
message
, is true.
Assert that a boolean condition, described in String
message
, is true.
If the condition is true
, this method returns normally.
Else, it throws TestFailedException
with the
String
obtained by invoking toString
on the
specified message
as the exception's detail message.
the boolean condition to assert
An object whose toString
method returns a message to include in a failure report.
Assert that a boolean condition is true.
Assert that a boolean condition is true.
If the condition is true
, this method returns normally.
Else, it throws TestFailedException
.
the boolean condition to assert
Implicit conversion from Any
to Equalizer
, used to enable
assertions with ===
comparisons.
Implicit conversion from Any
to Equalizer
, used to enable
assertions with ===
comparisons.
For more information on this mechanism, see the documentation for Equalizer.
Because trait Suite
mixes in Assertions
, this implicit conversion will always be
available by default in ScalaTest Suite
s. This is the only implicit conversion that is in scope by default in every
ScalaTest Suite
. Other implicit conversions offered by ScalaTest, such as those that support the matchers DSL
or invokePrivate
, must be explicitly invited into your test code, either by mixing in a trait or importing the
members of its companion object. The reason ScalaTest requires you to invite in implicit conversions (with the exception of the
implicit conversion for ===
operator) is because if one of ScalaTest's implicit conversions clashes with an
implicit conversion used in the code you are trying to test, your program won't compile. Thus there is a chance that if you
are ever trying to use a library or test some code that also offers an implicit conversion involving a ===
operator,
you could run into the problem of a compiler error due to an ambiguous implicit conversion. If that happens, you can turn off
the implicit conversion offered by this convertToEqualizer
method simply by overriding the method in your
Suite
subclass, but not marking it as implicit:
// In your Suite subclass override def convertToEqualizer(left: Any) = new Equalizer(left)
the object whose type to convert to Equalizer
.
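Conceptually, the mechanism works along these lines (a simplified sketch, not ScalaTest's exact implementation):

```scala
// Simplified sketch: the implicit conversion wraps the left operand
// in an Equalizer; === returns None when the two sides are equal,
// or Some(failure message) when they are not, and
// assert(Option[String]) fails only on Some.
import scala.language.implicitConversions

class Equalizer(left: Any) {
  def ===(right: Any): Option[String] =
    if (left == right) None
    else Some(left + " did not equal " + right)
}

implicit def convertToEqualizer(left: Any): Equalizer = new Equalizer(left)
```

Because the conversion applies to any value, `a === b` compiles for any two objects, which is exactly why a clash with another library's `===` conversion can cause the ambiguity described above.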
Executes the test specified as testName
in this Suite
with the specified configMap
, printing
results to the standard output.
Executes the test specified as testName
in this Suite
with the specified configMap
, printing
results to the standard output.
This method implementation calls run
on this Suite
, passing in:
testName - Some(testName)
reporter - a reporter that prints to the standard output
stopper - a Stopper whose apply method always returns false
filter - a Filter constructed with None for tagsToInclude and Set() for tagsToExclude
configMap - the specified configMap Map[String, Any]
distributor - None
tracker - a new Tracker
This method serves as a convenient way to execute a single test, passing in some objects via the configMap
, especially from
within the Scala interpreter.
Note: In ScalaTest, the terms "execute" and "run" basically mean the same thing and
can be used interchangeably. The reason this convenience method and its three overloaded forms
aren't named run
is described in the documentation of the overloaded form that
takes no parameters: execute().
the name of one test to run.
a Map
of key-value pairs that can be used by the executing Suite
of tests.
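For example, you might run a single scenario with some configuration from the interpreter like this (a hedged sketch; the suite name, test name, and config key are illustrative, not taken from a real suite):

```scala
scala> (new ArithmeticFeatureSpec).execute("addition", Map("tempFileName" -> "tmp.txt"))
```

The first argument selects the one test to run, and the second is the configMap made available to the running suite.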
Executes the test specified as testName
in this Suite
, printing results to the standard output.
Executes the test specified as testName
in this Suite
, printing results to the standard output.
This method implementation calls run
on this Suite
, passing in:
testName - Some(testName)
reporter - a reporter that prints to the standard output
stopper - a Stopper whose apply method always returns false
filter - a Filter constructed with None for tagsToInclude and Set() for tagsToExclude
configMap - an empty Map[String, Any]
distributor - None
tracker - a new Tracker
This method serves as a convenient way to run a single test, especially from within the Scala interpreter.
Note: In ScalaTest, the terms "execute" and "run" basically mean the same thing and
can be used interchangeably. The reason this convenience method and its three overloaded forms
aren't named run
is described in the documentation of the overloaded form that
takes no parameters: execute().
the name of one test to run.
Executes this Suite
with the specified configMap
, printing results to the standard output.
Executes this Suite
with the specified configMap
, printing results to the standard output.
This method implementation calls run
on this Suite
, passing in:
testName - None
reporter - a reporter that prints to the standard output
stopper - a Stopper whose apply method always returns false
filter - a Filter constructed with None for tagsToInclude and Set() for tagsToExclude
configMap - the specified configMap Map[String, Any]
distributor - None
tracker - a new Tracker
This method serves as a convenient way to execute a Suite
, passing in some objects via the configMap
, especially from within the Scala interpreter.
Note: In ScalaTest, the terms "execute" and "run" basically mean the same thing and
can be used interchangeably. The reason this convenience method and its three overloaded forms
aren't named run
is described in the documentation of the overloaded form that
takes no parameters: execute().
a Map
of key-value pairs that can be used by the executing Suite
of tests.
Executes this Suite
, printing results to the standard output.
Executes this Suite
, printing results to the standard output.
This method implementation calls run
on this Suite
, passing in:
testName - None
reporter - a reporter that prints to the standard output
stopper - a Stopper whose apply method always returns false
filter - a Filter constructed with None for tagsToInclude and Set() for tagsToExclude
configMap - an empty Map[String, Any]
distributor - None
tracker - a new Tracker
This method serves as a convenient way to execute a Suite
, especially from
within the Scala interpreter.
Note: In ScalaTest, the terms "execute" and "run" basically mean the same thing and
can be used interchangeably. The reason this convenience method and its three overloaded forms
aren't named run
is because junit.framework.TestCase
declares a run
method
that takes no arguments but returns a junit.framework.TestResult
. That
run
method would not overload with this method if it were named run
,
because it would have the same parameters but a different return type than the one
defined in TestCase
. To facilitate integration with JUnit 3, therefore,
these convenience "run" methods are named execute
. In particular, this allows trait
org.scalatest.junit.JUnit3Suite
to extend both org.scalatest.Suite
and
junit.framework.TestCase
, which enables the creating of classes that
can be run with either ScalaTest or JUnit 3.
Expect that the value passed as expected
equals the value passed as actual
.
Expect that the value passed as expected
equals the value passed as actual
.
If the actual
value equals the expected
value
(as determined by ==
), expect
returns
normally. Else, expect
throws a
TestFailedException
whose detail message includes the expected and actual values.
the expected value
the actual value, which should equal the passed expected
value
Expect that the value passed as expected
equals the value passed as actual
.
Expect that the value passed as expected
equals the value passed as actual
.
If the actual
equals the expected
(as determined by ==
), expect
returns
normally. Else, if actual
is not equal to expected
, expect
throws a
TestFailedException
whose detail message includes the expected and actual values, as well as the String
obtained by invoking toString
on the passed message
.
the expected value
An object whose toString
method returns a message to include in a failure report.
the actual value, which should equal the passed expected
value
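For example (a sketch of the two forms described above; in this era of ScalaTest, expect took the actual value in a second, curried parameter list):

```scala
import org.scalatest.Assertions._

val sum = 2 + 3

// Passes: actual equals expected
expect(5) { sum }

// Same check, with an extra message included on failure
expect(5, "sum was miscomputed") { sum }
```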
The total number of tests that are expected to run when this Suite
's run
method is invoked.
The total number of tests that are expected to run when this Suite
's run
method is invoked.
This trait's implementation of this method returns the sum of:
the size of the testNames List, minus the number of tests marked as ignored
the result of invoking expectedTestCount on every nested Suite contained in nestedSuites
a Filter with which to filter tests to count based on their tags
Throws TestFailedException
, with the passed
Throwable
cause, to indicate a test failed.
Throws TestFailedException
, with the passed
Throwable
cause, to indicate a test failed.
The getMessage
method of the thrown TestFailedException
will return cause.toString()
.
a Throwable
that indicates the cause of the failure.
Throws TestFailedException
, with the passed
String
message
as the exception's detail
message and Throwable
cause, to indicate a test failed.
Throws TestFailedException
, with the passed
String
message
as the exception's detail
message and Throwable
cause, to indicate a test failed.
A message describing the failure.
A Throwable
that indicates the cause of the failure.
Throws TestFailedException
, with the passed
String
message
as the exception's detail
message, to indicate a test failed.
Throws TestFailedException
, with the passed
String
message
as the exception's detail
message, to indicate a test failed.
A message describing the failure.
Throws TestFailedException
to indicate a test failed.
Throws TestFailedException
to indicate a test failed.
The groups
method has been deprecated and will be removed in a future version of ScalaTest.
The groups
method has been deprecated and will be removed in a future version of ScalaTest.
Please call (and override) tags
instead.
Intercept and return an exception that's expected to be thrown by the passed function value.
Intercept and return an exception that's expected to
be thrown by the passed function value. The thrown exception must be an instance of the
type specified by the type parameter of this method. This method invokes the passed
function. If the function throws an exception that's an instance of the specified type,
this method returns that exception. Else, whether the passed function returns normally
or completes abruptly with a different exception, this method throws TestFailedException
.
Note that the type specified as this method's type parameter may represent any subtype of
AnyRef
, not just Throwable
or one of its subclasses. In
Scala, exceptions can be caught based on traits they implement, so it may at times make sense
to specify a trait that the intercepted exception's class must mix in. If a class instance is
passed for a type that could not possibly be used to catch an exception (such as String
,
for example), this method will complete abruptly with a TestFailedException
.
the function value that should throw the expected exception
an implicit Manifest
representing the type of the specified
type parameter.
the intercepted exception, if it is of the expected type
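For example (a sketch; the out-of-bounds list access is just an easy way to produce the expected exception):

```scala
import org.scalatest.Assertions._

// The block is expected to throw IndexOutOfBoundsException;
// intercept catches the thrown exception and returns it.
val thrown = intercept[IndexOutOfBoundsException] {
  List(1, 2, 3)(10) // index out of bounds
}

// The returned exception can be inspected further.
assert(thrown.isInstanceOf[IndexOutOfBoundsException])
```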
A List
of this Suite
object's nested Suite
s. If this Suite
contains no nested Suite
s,
this method returns an empty List
. This trait's implementation of this method returns an empty List
.
A List
of this Suite
object's nested Suite
s. If this Suite
contains no nested Suite
s,
this method returns an empty List
. This trait's implementation of this method returns an empty List
.
Throws TestPendingException
to indicate a test is pending.
Throws TestPendingException
to indicate a test is pending.
A pending test is one that has been given a name but is not yet implemented. The purpose of pending tests is to facilitate a style of testing in which documentation of behavior is sketched out before tests are written to verify that behavior (and often, before the behavior of the system being tested is itself implemented). Such sketches form a kind of specification of what tests and functionality to implement later.
To support this style of testing, a test can be given a name that specifies one
bit of behavior required by the system being tested. The test can also include some code that
sends more information about the behavior to the reporter when the tests run. At the end of the test,
it can call method pending
, which will cause it to complete abruptly with TestPendingException
.
Because tests in ScalaTest can be designated as pending with TestPendingException
, both the test name and any information
sent to the reporter when running the test can appear in the report of a test run. (In other words,
the code of a pending test is executed just like any other test.) However, because the test completes abruptly
with TestPendingException
, the test will be reported as pending, to indicate
the actual test, and possibly the functionality it is intended to test, has not yet been implemented.
Note: This method always completes abruptly with a TestPendingException
. Thus it always has a side
effect. Methods with side effects are usually invoked with parentheses, as in pending()
. This
method is defined as a parameterless method, in flagrant contradiction to recommended Scala style, because it
forms a kind of DSL for pending tests. It enables tests in suites such as FunSuite
or Spec
to be denoted by placing "(pending)
" after the test name, as in:
test("that style rules are not laws") (pending)
Readers of the code see "pending" in parentheses, which looks like a little note attached to the test name to indicate
it is pending. Whereas "(pending())"
looks more like a method call, "(pending)
" lets readers
stay at a higher level, forgetting how it is implemented and just focusing on the intent of the programmer who wrote the code.
Execute the passed block of code, and if it completes abruptly, throw TestPendingException
, else
throw TestFailedException
.
Execute the passed block of code, and if it completes abruptly, throw TestPendingException
, else
throw TestFailedException
.
This method can be used to temporarily change a failing test into a pending test in such a way that it will
automatically turn back into a failing test once the problem originally causing the test to fail has been fixed.
At that point, you need only remove the pendingUntilFixed
call. In other words, a
pendingUntilFixed
surrounding a block of code that isn't broken is treated as a test failure.
The motivation for this behavior is to encourage people to remove pendingUntilFixed
calls when
they are no longer needed.
This method facilitates a style of testing in which tests are written before the code they test. Sometimes you may
encounter a test failure that requires more functionality than you want to tackle without writing more tests. In this
case you can mark the bit of test code causing the failure with pendingUntilFixed
. You can then write more
tests and functionality that eventually will get your production code to a point where the original test won't fail anymore.
At this point the code block marked with pendingUntilFixed
will no longer throw an exception (because the
problem has been fixed). This will in turn cause pendingUntilFixed
to throw TestFailedException
with a detail message explaining you need to go back and remove the pendingUntilFixed
call, as the problem originally
causing your test code to fail has been fixed.
a block of code, which if it completes abruptly, should trigger a TestPendingException
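A sketch of the workflow just described (the subtract method is hypothetical and assumed to be broken for now):

```scala
scenario("subtraction") {
  pendingUntilFixed {
    // Fails today because subtract is broken, so the test is
    // reported as pending. Once subtract is fixed, this block
    // succeeds and pendingUntilFixed itself throws
    // TestFailedException, reminding you to remove the wrapper.
    assert(subtract(7, 2) === 5)
  }
}
```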
Runs this suite of tests.
Runs this suite of tests.
If testName
is None
, this trait's implementation of this method
calls these two methods on this object in this order:
runNestedSuites(report, stopper, tagsToInclude, tagsToExclude, configMap, distributor)
runTests(testName, report, stopper, tagsToInclude, tagsToExclude, configMap)
If testName
is defined, then this trait's implementation of this method
calls runTests
, but does not call runNestedSuites
. This behavior
is part of the contract of this method. Subclasses that override run
must take
care not to call runNestedSuites
if testName
is defined. (The
OneInstancePerTest
trait depends on this behavior, for example.)
Subclasses and subtraits that override this run
method can implement it without
invoking either the runTests
or runNestedSuites
methods, which
are invoked by this trait's implementation of this method. It is recommended, but not required,
that subclasses and subtraits that override run
in a way that does not
invoke runNestedSuites
also override runNestedSuites
and make it
final. Similarly it is recommended, but not required,
that subclasses and subtraits that override run
in a way that does not
invoke runTests
also override runTests
(and runTest
,
which this trait's implementation of runTests
calls) and make it
final. The implementation of these final methods can either invoke the superclass implementation
of the method, or throw an UnsupportedOperationException
if appropriate. The
reason for this recommendation is that ScalaTest includes several traits that override
these methods to allow behavior to be mixed into a Suite
. For example, trait
BeforeAndAfterEach
overrides runTests
. In a Suite
subclass that no longer invokes runTests
from run
, the
BeforeAndAfterEach
trait is not applicable. Mixing it in would have no effect.
By making runTests
final in such a Suite
subtrait, you make
the attempt to mix BeforeAndAfterEach
into a subclass of your subtrait
a compiler error. (It would fail to compile with a complaint that BeforeAndAfterEach
is trying to override runTests
, which is a final method in your trait.)
an optional name of one test to run. If None
, all relevant tests should be run.
I.e., None
acts like a wildcard that means run all relevant tests in this Suite
.
the Reporter
to which results will be reported
the Stopper
that will be consulted to determine whether to stop execution early.
a Filter
with which to filter tests based on their tags
a Map
of key-value pairs that can be used by the executing Suite
of tests.
an optional Distributor
, into which to put nested Suite
s to be run
by another entity, such as concurrently by a pool of threads. If None
, nested Suite
s will be run sequentially.
a Tracker
tracking Ordinal
s being fired by the current thread.
A user-friendly suite name for this Suite
.
A user-friendly suite name for this Suite
.
This trait's
implementation of this method returns the simple name of this object's class. This
trait's implementation of runNestedSuites
calls this method to obtain a
name for Reports to pass to the suiteStarting
, suiteCompleted
,
and suiteAborted
methods of the Reporter
.
this Suite
object's suite name.
A Map
whose keys are String
tag names to which tests in this FeatureSpec
belong, and values
the Set
of test names that belong to each tag. If this FeatureSpec
contains no tags, this method returns an empty Map
.
A Map
whose keys are String
tag names to which tests in this FeatureSpec
belong, and values
the Set
of test names that belong to each tag. If this FeatureSpec
contains no tags, this method returns an empty Map
.
This trait's implementation returns tags that were passed as strings contained in Tag
objects passed to
methods scenario
and ignore
.
An immutable Set
of test names. If this FeatureSpec
contains no tests, this method returns an
empty Set
.
An immutable Set
of test names. If this FeatureSpec
contains no tests, this method returns an
empty Set
.
This trait's implementation of this method will return a set that contains the names of all registered tests. The set's
iterator will return those names in the order in which the tests were registered. Each test's name is composed
of the concatenation of the text of each surrounding describer, in order from outside in, and the text of the
example itself, with all components separated by a space. For example, consider this FeatureSpec
:
import org.scalatest.FeatureSpec

class StackSpec extends FeatureSpec {
  feature("A Stack") {
    scenario("(when not empty) must allow me to pop") {}
    scenario("(when not full) must allow me to push") {}
  }
}
Invoking testNames
on this FeatureSpec
will yield a set that contains the following
two test name strings:
"A Stack (when not empty) must allow me to pop"
"A Stack (when not full) must allow me to push"
Executes the block of code passed as the second parameter, and, if it
completes abruptly with a ModifiableMessage
exception,
prepends the "clue" string passed as the first parameter to the beginning of the detail message
of that thrown exception, then rethrows it.
Executes the block of code passed as the second parameter, and, if it
completes abruptly with a ModifiableMessage
exception,
prepends the "clue" string passed as the first parameter to the beginning of the detail message
of that thrown exception, then rethrows it. If clue does not end in a white space
character, one space will be added
between it and the existing detail message (unless the detail message is
not defined).
This method allows you to add more information about what went wrong that will be reported when a test fails. Here's an example:
withClue("(Employee's name was: " + employee.name + ")") { intercept[IllegalArgumentException] { employee.getTask(-1) } }
If an invocation of intercept
completed abruptly with an exception, the resulting message would be something like:
(Employee's name was: Bob Jones) Expected IllegalArgumentException to be thrown, but no exception was thrown
A suite of tests in which each test represents one scenario of a feature.

FeatureSpec is intended for writing tests that are "higher level" than unit tests, for example, integration tests, functional tests, and acceptance tests. You can use FeatureSpec for unit testing if you prefer, however. Here's an example:

A FeatureSpec contains feature clauses and scenarios. You define a feature clause with feature, and a scenario with scenario. Both feature and scenario are methods, defined in FeatureSpec, which will be invoked by the primary constructor of StackFeatureSpec.

A feature clause describes a feature of the subject (class or other entity) you are specifying and testing. In the previous example, the subject under specification and test is a stack. The feature being specified and tested is the ability for a user (a programmer in this case) to pop an element off the top of the stack. With each scenario you provide a string (the spec text) that specifies the behavior of the subject for one scenario in which the feature may be used, and a block of code that tests that behavior. You place the spec text between the parentheses, followed by the test code between curly braces. The test code will be wrapped up as a function passed as a by-name parameter to scenario, which will register the test for later execution.

A
FeatureSpec's lifecycle has two phases: the registration phase and the ready phase. It starts in registration phase and enters ready phase the first time run is called on it. It then remains in ready phase for the remainder of its lifetime.

Scenarios can only be registered with the scenario method while the FeatureSpec is in its registration phase. Any attempt to register a scenario after the FeatureSpec has entered its ready phase, i.e., after run has been invoked on the FeatureSpec, will be met with a thrown TestRegistrationClosedException. The recommended style of using FeatureSpec is to register tests during object construction as is done in all the examples shown here. If you keep to the recommended style, you should never see a TestRegistrationClosedException.

Each scenario represents one test. The name of the test is the spec text passed to the
scenario method. The feature name does not appear as part of the test name. In a FeatureSpec, therefore, you must take care to ensure that each test has a unique name (in other words, that each scenario has unique spec text).

When you run a FeatureSpec, it will send Formatters in the events it sends to the Reporter. ScalaTest's built-in reporters will report these events in such a way that the output is easy to read as an informal specification of the subject being tested. For example, if you ran StackFeatureSpec from within the Scala interpreter:

You would see:
Shared fixtures

A test fixture is objects or other artifacts (such as files, sockets, database connections, etc.) used by tests to do their work. You can use fixtures in FeatureSpecs with the same approaches suggested for Suite in its documentation. The same text that appears in the test fixture section of Suite's documentation is repeated here, with examples changed from Suite to FeatureSpec.

If a fixture is used by only one test, then the definitions of the fixture objects can be local to the test function, such as the objects assigned to stack and emptyStack in the previous StackFeatureSpec examples. If multiple tests need to share a fixture, the best approach is to assign them to instance variables. Here's a (very contrived) example, in which the object assigned to shared
is used by multiple test functions:

In some cases, however, shared mutable fixture objects may be changed by tests such that they need to be recreated or reinitialized before each test. Shared resources such as files or database connections may also need to be created and initialized before, and cleaned up after, each test. JUnit offers methods setUp and tearDown for this purpose. In ScalaTest, you can use the BeforeAndAfterEach trait, which will be described later, to implement an approach similar to JUnit's setUp and tearDown; however, this approach often involves reassigning vars between tests. Before going that route, you should consider some approaches that avoid vars. One approach is to write one or more create-fixture methods that return a new instance of a needed object (or a tuple or case class holding new instances of multiple objects) each time it is called. You can then call a create-fixture method at the beginning of each test that needs the fixture, storing the fixture object or objects in local variables. Here's an example:

If different tests in the same FeatureSpec require different fixtures, you can create multiple create-fixture methods and call the method (or methods) needed by each test at the beginning of the test. If every test requires the same set of mutable fixture objects, one other approach you can take is make them simply vals and mix in trait OneInstancePerTest. If you mix in OneInstancePerTest, each test will be run in its own instance of the FeatureSpec, similar to the way JUnit tests are executed.

Although the create-fixture and
OneInstancePerTest approaches take care of setting up a fixture before each test, they don't address the problem of cleaning up a fixture after the test completes. In this situation, one option is to mix in the BeforeAndAfterEach trait. BeforeAndAfterEach's beforeEach method will be run before, and its afterEach method after, each test (like JUnit's setUp and tearDown methods, respectively). For example, you could create a temporary file before each test, and delete it afterwards, like this:

In this example, the instance variable reader is a var, so it can be reinitialized between tests by the beforeEach method.

Although the
BeforeAndAfterEach approach should be familiar to users of most other test frameworks, ScalaTest provides another alternative that also allows you to perform cleanup after each test: overriding withFixture(NoArgTest). To execute each test, Suite's implementation of the runTest method wraps an invocation of the appropriate test method in a no-arg function. runTest passes that test function to the withFixture(NoArgTest) method, which is responsible for actually running the test by invoking the function. Suite's implementation of withFixture(NoArgTest) simply invokes the function, like this:

The withFixture(NoArgTest) method exists so that you can override it and set a fixture up before, and clean it up after, each test. Thus, the previous temp file example could also be implemented without mixing in BeforeAndAfterEach, like this:

If you prefer to keep your test classes immutable, one final variation is to use the
FixtureFeatureSpec trait from the org.scalatest.fixture package. Tests in an org.scalatest.fixture.FixtureFeatureSpec can have a fixture object passed in as a parameter. You must indicate the type of the fixture object by defining the Fixture type member and define a withFixture method that takes a one-arg test function. (A FixtureFeatureSpec therefore has two overloaded withFixture methods: one that takes a OneArgTest, and the other, inherited from Suite, that takes a NoArgTest.) Inside the withFixture(OneArgTest) method, you create the fixture, pass it into the test function, then perform any necessary cleanup after the test function returns. Instead of invoking each test directly, a FixtureFeatureSpec will pass a function that invokes the code of a test to withFixture(OneArgTest). Your withFixture(OneArgTest) method, therefore, is responsible for actually running the code of the test by invoking the test function. For example, you could pass the temp file reader fixture to each test that needs it by overriding the withFixture(OneArgTest) method of a FixtureFeatureSpec, like this:

It is worth noting that the only difference in the test code between the mutable
BeforeAndAfterEach approach shown here and the immutable FixtureFeatureSpec approach shown previously is that two of the FixtureFeatureSpec's test functions take a FileReader as a parameter via the "reader =>" at the beginning of the function. Otherwise the test code is identical. One benefit of the explicit parameter is that, as demonstrated by the "no fixture passed" scenario, a FixtureFeatureSpec test need not take the fixture. So you can have some tests that take a fixture, and others that don't. In this case, the FixtureFeatureSpec provides documentation indicating which tests use the fixture and which don't, whereas the BeforeAndAfterEach approach does not. (If you want to combine tests that take different fixture types in the same FeatureSpec, you can use MultipleFixtureFeatureSpec.)

If you want to execute code before and after all tests (and nested suites) in a suite, such as you could do with
@BeforeClass and @AfterClass annotations in JUnit 4, you can use the beforeAll and afterAll methods of BeforeAndAfterAll. See the documentation for BeforeAndAfterAll for an example.

Shared scenarios
Sometimes you may want to run the same test code on different fixture objects. In other words, you may want to write tests that are "shared" by different fixture objects. To accomplish this in a FeatureSpec, you first place shared tests (i.e., shared scenarios) in behavior functions. These behavior functions will be invoked during the construction phase of any FeatureSpec that uses them, so that the scenarios they contain will be registered as scenarios in that FeatureSpec. For example, given this stack class:

You may want to test the
Stack class in different states: empty, full, with one item, with one item less than capacity, etc. You may find you have several scenarios that make sense any time the stack is non-empty. Thus you'd ideally want to run those same scenarios for three stack fixture objects: a full stack, a stack with one item, and a stack with one item less than capacity. With shared tests, you can factor these scenarios out into a behavior function, into which you pass the stack fixture to use when running the tests. So in your FeatureSpec for stack, you'd invoke the behavior function three times, passing in each of the three stack fixtures so that the shared scenarios are run for all three fixtures.

You can define a behavior function that encapsulates these shared scenarios inside the
FeatureSpec that uses them. If they are shared between different FeatureSpecs, however, you could also define them in a separate trait that is mixed into each FeatureSpec that uses them. For example, here the nonEmptyStack behavior function (in this case, a behavior method) is defined in a trait along with another method containing shared scenarios for non-full stacks:

Given these behavior functions, you could invoke them directly, but FeatureSpec offers a DSL for the purpose, which looks like this:

If you prefer to use an imperative style to change fixtures, for example by mixing in
BeforeAndAfterEach and reassigning a stack var in beforeEach, you could write your behavior functions in the context of that var, which means you wouldn't need to pass in the stack fixture because it would already be in scope inside the behavior function. In that case, your code would look like this:

The recommended style, however, is the functional, pass-all-the-needed-values-in style. Here's an example:
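A sketch of that functional style (the names here are illustrative: it assumes the Stack class and a FeatureSpecStackBehaviors trait with a nonEmptyStack behavior method, as described above, together with FeatureSpec's scenariosFor DSL):

```scala
import org.scalatest.FeatureSpec

// Sketch only: Stack and FeatureSpecStackBehaviors (with its nonEmptyStack
// behavior method) are assumed to be defined as described in the text above.
class SharedScenariosFeatureSpec extends FeatureSpec with FeatureSpecStackBehaviors {

  val lastValuePushed = 9

  // Create-fixture methods: each call returns a freshly initialized stack,
  // so every shared scenario gets its own fixture object.
  def stackWithOneItem = {
    val stack = new Stack[Int]
    stack.push(lastValuePushed)
    stack
  }

  def stackWithOneItemLessThanCapacity = {
    val stack = new Stack[Int]
    for (i <- 1 to 9)
      stack.push(i)
    stack
  }

  feature("A user can pop an element off the top of the stack") {
    // Each invocation passes a different fixture into the shared scenarios;
    // the fixture's toString keeps the registered scenario names unique.
    scenariosFor(nonEmptyStack(stackWithOneItem, lastValuePushed))
    scenariosFor(nonEmptyStack(stackWithOneItemLessThanCapacity, lastValuePushed))
  }
}
```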
If you load these classes into the Scala interpreter (with ScalaTest's JAR file on the class path) and execute them, you'll see:
One thing to keep in mind when using shared tests is that in ScalaTest, each test in a suite must have a unique name. If you register the same tests repeatedly in the same suite, one problem you may encounter is an exception at runtime complaining that multiple tests are being registered with the same test name. In a FeatureSpec there is no nesting construct analogous to Spec's describe clause. Therefore, you need to do a bit of extra work to ensure that the test names are unique. If a duplicate test name problem shows up in a FeatureSpec, you'll need to pass in a prefix or suffix string to add to each test name. You can pass this string the same way you pass any other data needed by the shared tests, or just call toString on the shared fixture object. This is the approach taken by the previous FeatureSpecStackBehaviors example.

Given this
FeatureSpecStackBehaviors trait, calling it with the stackWithOneItem fixture, like this:

yields test names:
empty is invoked on this non-empty stack: Stack(9)
peek is invoked on this non-empty stack: Stack(9)
pop is invoked on this non-empty stack: Stack(9)
Whereas calling it with the stackWithOneItemLessThanCapacity fixture, like this:

yields different test names:
empty is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
peek is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
pop is invoked on this non-empty stack: Stack(9, 8, 7, 6, 5, 4, 3, 2, 1)
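The two invocations themselves might look like this sketch (assuming FeatureSpec's scenariosFor DSL and the fixture and behavior names used above):

```scala
// Sketches of the two invocations: the fixture's toString is woven into
// each registered scenario name, which is what keeps the names unique.
scenariosFor(nonEmptyStack(stackWithOneItem, lastValuePushed))
scenariosFor(nonEmptyStack(stackWithOneItemLessThanCapacity, lastValuePushed))
```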
Tagging tests
A FeatureSpec's tests may be classified into groups by tagging them with string names. As with any suite, when executing a FeatureSpec, groups of tests can optionally be included and/or excluded. To tag a FeatureSpec's tests, you pass objects that extend abstract class org.scalatest.Tag to the methods that register tests, scenario and ignore. Class Tag takes one parameter, a string name. If you have created Java annotation interfaces for use as group names in direct subclasses of org.scalatest.Suite, then you will probably want to use group names on your FeatureSpecs that match. To do so, simply pass the fully qualified names of the Java interfaces to the Tag constructor. For example, if you've defined Java annotation interfaces with fully qualified names com.mycompany.groups.SlowTest and com.mycompany.groups.DbTest, then you could create matching groups for FeatureSpecs like this:

Given these definitions, you could place
FeatureSpec tests into groups like this:

This code marks both tests, "addition" and "subtraction," with the com.mycompany.groups.SlowTest tag, and test "subtraction" with the com.mycompany.groups.DbTest tag.

The primary
run method takes a Filter, whose constructor takes an optional Set[String] called tagsToInclude and a Set[String] called tagsToExclude. If tagsToInclude is None, all tests will be run except those belonging to tags listed in the tagsToExclude Set. If tagsToInclude is defined, only tests belonging to tags mentioned in the tagsToInclude set, and not mentioned in tagsToExclude, will be run.

Ignored tests
To support the common use case of “temporarily” disabling a test, with the good intention of resurrecting the test at a later time, FeatureSpec provides registration methods that start with ignore instead of scenario. For example, to temporarily disable the test named addition, just change “scenario” into “ignore,” like this:

If you run this version of
ArithmeticFeatureSpec with:

It will run only subtraction and report that addition was ignored:

Informers
One of the parameters to the primary run method is a Reporter, which will collect and report information about the running suite of tests. Information about suites and tests that were run, whether tests succeeded or failed, and tests that were ignored will be passed to the Reporter as the suite runs. Most often the reporting done by default by FeatureSpec's methods will be sufficient, but occasionally you may wish to provide custom information to the Reporter from a test. For this purpose, an Informer that will forward information to the current Reporter is provided via the parameterless info method. You can pass the extra information to the Informer via its apply method. The Informer will then pass the information to the Reporter via an InfoProvided event. Here's an example:

If you run this
ArithmeticFeatureSpec from the interpreter, you will see the following message included in the printed report:

One use case for the Informer is to pass more information about a scenario to the reporter. For example, the GivenWhenThen trait provides methods that use the implicit info provided by FeatureSpec to pass such information to the reporter. Here's an example:

If you run this FeatureSpec from the interpreter, you will see the following messages included in the printed report:

Pending tests
A pending test is one that has been given a name but is not yet implemented. The purpose of pending tests is to facilitate a style of testing in which documentation of behavior is sketched out before tests are written to verify that behavior (and often, before the behavior of the system being tested is itself implemented). Such sketches form a kind of specification of what tests and functionality to implement later.
To support this style of testing, a test can be given a name that specifies one bit of behavior required by the system being tested. The test can also include some code that sends more information about the behavior to the reporter when the tests run. At the end of the test, it can call method pending, which will cause it to complete abruptly with TestPendingException. Because tests in ScalaTest can be designated as pending with TestPendingException, both the test name and any information sent to the reporter when running the test can appear in the report of a test run. (In other words, the code of a pending test is executed just like any other test.) However, because the test completes abruptly with TestPendingException, the test will be reported as pending, to indicate the actual test, and possibly the functionality, has not yet been implemented. You can mark tests as pending in a FeatureSpec like this:

(Note: "(pending)" is the body of the test. Thus the test contains just one statement, an invocation of the pending method, which throws TestPendingException.) If you run this version of ArithmeticFeatureSpec with:

It will run both tests, but report that subtraction is pending. You'll see:

One difference between an ignored test and a pending one is that an ignored test is intended to be used during significant refactorings of the code under test, when tests break and you don't want to spend the time to fix all of them immediately. You can mark some of those broken tests as ignored temporarily, so that you can focus the red bar on just the failing tests you actually want to fix immediately. Later you can go back and fix the ignored tests. In other words, by ignoring some failing tests temporarily, you can more easily notice the failed tests that you actually want to fix. By contrast, a pending test is intended to be used before a test and/or the code under test is written. Pending indicates you've decided to write a test for a bit of behavior, but either you haven't written the test yet, or you have only written part of it, or perhaps you've written the test but don't want to implement the behavior it tests until after you've implemented a different bit of behavior you realized you need first. Thus ignored tests are designed to facilitate refactoring of existing code, whereas pending tests are designed to facilitate the creation of new code.
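As a sketch, the two kinds of registration might look like this (hypothetical scenario names; the bodies are illustrative only):

```scala
// Ignored: registered under an excluded tag, so the body never executes.
ignore("addition works for large numbers") {
  assert(1000000 + 1 === 1000001)
}

// Pending: the body executes until pending throws TestPendingException,
// so any info(...) calls before it still reach the reporter.
scenario("subtraction works for large numbers") {
  info("subtraction should be the inverse of addition")
  pending
}
```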
One other difference between ignored and pending tests is that ignored tests are implemented as a test tag that is excluded by default. Thus an ignored test is never executed. By contrast, a pending test is implemented as a test that throws TestPendingException (which is what calling the pending method does). Thus the body of a pending test is executed up until it throws TestPendingException. The reason for this difference is that it enables your unfinished test to send InfoProvided messages to the reporter before it completes abruptly with TestPendingException, as shown in the previous example on Informers that used the GivenWhenThen trait. For example, the following snippet in a FeatureSpec:

Would yield the following output when run in the interpreter:
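A sketch of such a snippet (using the lowercase given/when methods of the GivenWhenThen trait, as in the example at the top of this page; the exact output will depend on the reporter):

```scala
import org.scalatest.FeatureSpec
import org.scalatest.GivenWhenThen

// Sketch only: the given/when messages are forwarded to the reporter as
// InfoProvided events before pending throws TestPendingException, so they
// appear in the report even though the scenario is reported as pending.
class PendingFeatureSpec extends FeatureSpec with GivenWhenThen {
  feature("The user can pop an element off the top of the stack") {
    scenario("pop is invoked on a non-empty stack") {
      given("a non-empty stack")
      when("pop is invoked on the stack")
      pending
    }
  }
}
```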