Tuesday, October 6, 2015

Getting NUnit Test Parameters From a File (or Other Source)

NUnit has *a lot* of options. One of the options that I like is the ability to easily parameterize a test by using attributes. For an example of this, see Parameterized Tests with NUnit. During one of my recent presentations on testing, someone asked:
Is there a way to get test parameters from a file or a database?
The answer is Yes!

Let's take a look at some of the different options that we have for providing test parameters for NUnit to use.

The Unit Under Test
Before looking at the tests, let's take a look at the method that we're testing. This code is taken from "Unit Testing Makes Me Faster", and you can download the code here: Session - Unit Testing Makes Me Faster.

Here's the method -- PassesLuhnCheck:
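As a stand-in, here's a sketch of what a standard Luhn check can look like in C#. The class name ("CardValidation") and the method body are my reconstruction for illustration -- the real implementation is in the session code download:

```csharp
using System;

public static class CardValidation
{
    public static bool PassesLuhnCheck(string testNumber)
    {
        int sum = 0;
        bool doubleDigit = false;

        // Walk the digits right-to-left, doubling every second digit
        for (int i = testNumber.Length - 1; i >= 0; i--)
        {
            if (!char.IsDigit(testNumber[i])) continue;

            int digit = testNumber[i] - '0';
            if (doubleDigit)
            {
                digit *= 2;
                if (digit > 9) digit -= 9;
            }
            sum += digit;
            doubleDigit = !doubleDigit;
        }

        // A number passes when the checksum is divisible by 10
        return sum % 10 == 0;
    }
}
```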

This method runs the Luhn algorithm against a potential credit card number. This doesn't tell us whether a credit card number is valid, but it tells us whether the card has the potential to be valid.

For example, let's think about validating a phone number. If it has 10 digits, then it has the potential to be a US phone number. But if it has "555" in the middle (like "714-555-1212"), then we know that it is not a valid number, since "555" is a fake exchange that is used in movies and television.

The Luhn algorithm does something similar with credit cards (only with arithmetic on the digits themselves). This is mainly to catch if someone transposes 2 digits when entering a card number.

The exact implementation isn't important for what we're doing here. We're doing some black box testing: valid numbers should return "true", and invalid numbers should return "false".

So, let's look at some tests.

Parameters in Attributes
The first way to get parameters into our tests is by using attributes on the test methods themselves. This is what we saw in the previous article about parameterized tests. Here's our test for numbers that should pass the Luhn check.
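A sketch of what this test can look like (the test name is my own, and the numbers here are standard industry test numbers -- the originals may differ; "CardValidation.PassesLuhnCheck" is assumed to be the method under test):

```csharp
using NUnit.Framework;

[TestFixture]
public class LuhnCheckTests
{
    [TestCase("378282246310005")]
    [TestCase("371449635398431")]
    [TestCase("5555555555554444")]
    [TestCase("4111111111111111")]
    public void PassesLuhnCheck_WithPassingNumber_ReturnsTrue(string testNumber)
    {
        // Each TestCase value is passed in as "testNumber"
        var result = CardValidation.PassesLuhnCheck(testNumber);
        Assert.That(result, Is.True);
    }
}
```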

This has a number of "TestCase" attributes. The parameter in the attribute is passed as a parameter to the test method -- the "testNumber". Then the test just runs the "PassesLuhnCheck" method and verifies that it returns true.

BTW, don't bother trying to buy things with these numbers on Amazon; these are test numbers provided by the credit card industry. They pass the Luhn check, but they are not valid accounts.

There is a similar test for invalid numbers:
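A sketch of the failing-number version (again, the numbers and test name are stand-ins -- each of these fails the Luhn checksum):

```csharp
[TestCase("1234567890123456")]
[TestCase("4111111111111112")]
[TestCase("9999999999999999")]
public void PassesLuhnCheck_WithFailingNumber_ReturnsFalse(string testNumber)
{
    var result = CardValidation.PassesLuhnCheck(testNumber);
    Assert.That(result, Is.False);
}
```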

And here are the results of our tests:

Notice that we get a test result for each of our test cases. By looking at the parameters, we can tell which number was used with each test. This is one of the cool things about how NUnit handles parameterized testing.

But we have some other options as well.

Test Parameters in Code
Instead of putting the test parameters in attributes, we can also create the parameters in code. To do this, we just need to create an object array with the parameters that we want to use. In our case, we only have 1 test parameter, so we can use a string array. Here's what that code looks like:
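A sketch of that (field and test names are my guesses; this sits inside the test fixture class):

```csharp
// The same test numbers, moved out of the attributes
// and into a static string array
static string[] PassingNumbers =
{
    "378282246310005",
    "371449635398431",
    "5555555555554444",
    "4111111111111111",
};

[TestCaseSource("PassingNumbers")]
public void PassesLuhnCheck_WithPassingNumber_ReturnsTrue(string testNumber)
{
    var result = CardValidation.PassesLuhnCheck(testNumber);
    Assert.That(result, Is.True);
}
```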

Here we have our same test numbers, but we've put them into a static string array. Then our test is attributed with "TestCaseSource" and we tell it what object to use for the parameters.

We can do something similar for our invalid numbers:
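Sketched the same way (names are stand-ins):

```csharp
static string[] FailingNumbers =
{
    "1234567890123456",
    "4111111111111112",
    "9999999999999999",
};

[TestCaseSource("FailingNumbers")]
public void PassesLuhnCheck_WithFailingNumber_ReturnsFalse(string testNumber)
{
    var result = CardValidation.PassesLuhnCheck(testNumber);
    Assert.That(result, Is.False);
}
```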

When we run the tests, we get the same results.

So far so good. But what other options do we have?

Test Parameters From a File
It would be really great if we could get our parameters out of a file (or database). For that, we'll need to parse a file. Here's what our test file looks like:
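The original file isn't reproduced here, but based on the parsing described below, it has this general shape (the numbers are stand-ins):

```text
VALID NUMBERS
378282246310005
371449635398431
5555555555554444
4111111111111111

INVALID NUMBERS
1234567890123456
4111111111111112
9999999999999999
```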

This is the "NumbersToTest.txt" file. The format isn't ideal, but it's not very difficult to parse in code, either. Instead of using a static string array, we'll use a static method that returns a string array. Here's our code:
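A sketch of that method (the file path here is a stand-in, not the path from the original code):

```csharp
using System.IO;
using System.Linq;

static string[] MorePassingNumbers()
{
    // Hard-coded path to keep the example simple (adjust for your environment)
    string fileName = @"C:\TestData\NumbersToTest.txt";
    var lines = File.ReadAllLines(fileName);

    // Skip the "VALID NUMBERS" label, then read until the blank line
    return lines.Skip(1)
                .TakeWhile(line => !string.IsNullOrWhiteSpace(line))
                .ToArray();
}
```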

Let's walk through the "MorePassingNumbers" method. Here we have a path to our test file. I just hard-coded this to make things easier. Then we read all the lines in the file. We skip the first line since that has the "VALID NUMBERS" label. Then we keep reading until we get to an empty line.

The end result is that we have a string array of our valid numbers. Then we just have to update our "TestCaseSource" attribute parameter to use "MorePassingNumbers".

We can do something similar with the invalid numbers:
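A sketch of the invalid-number version (again with a stand-in path):

```csharp
using System.IO;
using System.Linq;

static string[] MoreFailingNumbers()
{
    string fileName = @"C:\TestData\NumbersToTest.txt";
    var lines = File.ReadAllLines(fileName);

    // Skip everything before the "INVALID NUMBERS" label,
    // skip the label itself, then read to the end of the file
    return lines.SkipWhile(line => line != "INVALID NUMBERS")
                .Skip(1)
                .ToArray();
}
```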

This code is similar. The difference is that we skip everything in the file that comes before the "INVALID NUMBERS" label. Then we skip one more line (the label). Then we read to the end of the file.

After updating the "TestCaseSource" attribute, we see that we get similar results to what we had above. But this time, we're reading from a file:

Now that we're running code to get our parameters, we can get our parameters from wherever we want. Instead of opening a text file, we could make a database call -- although we need to be aware that we want to keep our dependencies light so that our tests have a good chance of running successfully.

We have some other options as well.

Using TestCaseData
We'll look at one more option. This gets interesting if we have our test parameters and expected results stored together. Instead of having 2 separate tests, we can have one. Then in our parameters, we also supply the expected result of the test.

For this, we'll use a class from NUnit called "TestCaseData". We need to create a public static class that has a public static property. This property should return an IEnumerable that consists of TestCaseData objects.

Here's an example of that:
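A sketch of the shape described above -- "CreditCardTestCases" and "TestCases" are the names referenced later in the article, but the specific numbers here are stand-ins:

```csharp
using System.Collections;
using NUnit.Framework;

public static class CreditCardTestCases
{
    public static IEnumerable TestCases
    {
        get
        {
            // Valid test numbers expect "true"
            yield return new TestCaseData("378282246310005").Returns(true);
            yield return new TestCaseData("4111111111111111").Returns(true);
            // Invalid numbers expect "false"
            yield return new TestCaseData("1234567890123456").Returns(false);
            yield return new TestCaseData("9999999999999999").Returns(false);
        }
    }
}
```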

If you're not familiar with the "yield return" statement, this just lets us create our own IEnumerable really easily. In this case, we're just using hard-coded data.

Each time we ask for the next item of our IEnumerable, it will return whatever is at the next "yield" statement.

Now let's look at the TestCaseData object. First we construct the object (using "new TestCaseData"), and the parameter is the parameter that we want to use for our test. If we had multiple test parameters, then we would pass them into the constructor here.

Then we use a fluent syntax to denote that we expect a particular return value based on this TestCaseData. So we can see that each of our test cases will expect a true or false value depending on the number.

The test that uses this data looks a bit different from the other tests we've looked at:
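A sketch of that test (the test name is my own):

```csharp
[TestCaseSource(typeof(CreditCardTestCases), "TestCases")]
public bool PassesLuhnCheck_ReturnsExpectedResult(string testNumber)
{
    // The return value is compared against the "Returns" value
    // on each TestCaseData object
    return CardValidation.PassesLuhnCheck(testNumber);
}
```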

First notice the attribute. We use the TestCaseSource attribute. Then for the parameter we use "typeof" with the name of our class -- in this case "CreditCardTestCases". The next parameter is the name of the property that we want from that class -- "TestCases".

The test itself has a parameter (testNumber) like before, but the rest of the method is quite a bit different. We return a "bool" value here (instead of "void" like our other tests). And then the body of the method just runs our method under test and returns the result.

NUnit will look at the TestCaseData and pass in the parameter. Then it looks at the result value and makes sure that it matches the "Returns" part of the TestCaseData.

We only have 1 test now that handles both the valid and invalid numbers. And we get the expected results in our test explorer:

I'm not a big fan of having a single test because when I look at the test results, I can't tell whether I'm testing a valid number or an invalid number. When a test fails, I want to look at the test name (and parameter) to figure out what failed. In this case, we'd have to drill into the test results to see the expected result (true or false).

Getting Data From a File
As you can imagine, we can use this same technique to get data from a file or database. Let's see how we can get our test parameters and results from our text file:
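A sketch of that property, combining the file parsing from earlier with TestCaseData ("TestCasesFromFile" is the name used later in the article; the path is a stand-in):

```csharp
using System.Collections;
using System.IO;
using System.Linq;
using NUnit.Framework;

public static IEnumerable TestCasesFromFile
{
    get
    {
        string fileName = @"C:\TestData\NumbersToTest.txt";

        // Same parsing as before -- note that this reads the file twice
        var validNumbers = File.ReadAllLines(fileName)
            .Skip(1)
            .TakeWhile(line => !string.IsNullOrWhiteSpace(line));
        var invalidNumbers = File.ReadAllLines(fileName)
            .SkipWhile(line => line != "INVALID NUMBERS")
            .Skip(1);

        // The expected results are not in the file,
        // so they are hard-coded in these two loops
        foreach (var number in validNumbers)
            yield return new TestCaseData(number).Returns(true);
        foreach (var number in invalidNumbers)
            yield return new TestCaseData(number).Returns(false);
    }
}
```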

This is a bit more complex than I'd like due to the nature of our text file. We get 2 different collections from the file -- one for the valid numbers and one for the invalid numbers. This uses the same parsing that we saw above (and it's not very efficient, since we read the file twice). We could probably spend a bit of time optimizing this.

Then we have 2 "foreach" loops. Each of these has a "yield return" which will return the appropriate "TestCaseData" object. And the reason we have 2 loops is that we do not have the expected result in our file, so we need to hard-code that here.

Another option is to change the format of the text file so that it includes both the number and the expected result. This would be pretty easy to do, but for this simple example, I just wanted to keep the current file format. You can use your imagination for parsing a different file format (or even querying a database).

To use this in the test, we just need to update the property name in our attribute:
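A sketch of the updated attribute (only the property name changes; the test name is my own):

```csharp
[TestCaseSource(typeof(CreditCardTestCases), "TestCasesFromFile")]
public bool PassesLuhnCheck_FromFile_ReturnsExpectedResult(string testNumber)
{
    return CardValidation.PassesLuhnCheck(testNumber);
}
```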

This now points to the "TestCasesFromFile" property that we just looked at. And as you can imagine, we still get the expected results from our test:

Again, I don't really like the idea of having the combined test, but I've talked to several people who do like to format their tests this way. If it works in your environment, and the members of your team understand it, then there's no reason to change it.

Wrap Up
NUnit is a very flexible testing framework. We've seen several ways to get parameters into our test, and we haven't even seen all of the options for that. For more information, be sure to check out the NUnit documentation: NUnit - TestCaseSource Attribute.

I'm always looking at what features are available in the tools that I use. I may not use all of them (in fact, I probably won't use all of them), but it's great to know that there's an easy way to solve a particular problem if it pops up in my code.

We should keep learning about the tools we use and the tools that are available to us. This will save us lots of heartache and will keep us from re-building things that already exist. It's always great to find that particular feature that is just what we need for a problem.

Happy Coding!
