I am trying to write tests using NUnit on an application utilizing BackgroundWorker. I have gone through your course and read some of the articles on your blog. I am wondering if you can give me any suggestions on how to do so?

So let's explore this.
The code shown in this article is available on GitHub: jeremybytes/testing-backgroundworker-component. Specifically, look at the 01-IntegrationTests branch.
Application Overview
We'll start with our sample application that uses the BackgroundWorker component. Here's the functionality.
Application start:
[Screenshot: Fresh Start]
When we click the "Start" button, it kicks off our long running process (using the BackgroundWorker component).
[Screenshot: Process Running]
Here we can see the process running. The progress bar is updating, and we have a message "Iteration 21 of 50" that tells us how far along the process is. In addition, we can see the "Start" button is disabled, and the "Cancel" button is enabled.
If we let things run to completion, we get this output:
[Screenshot: Process Complete]
Our background process only does one thing in our sample: it pauses for 100 ms. This means that if we have an input value of "50", our entire process takes 5 seconds (5000 ms) to complete.
If we press "Cancel" partway through the process, we get this output:
[Screenshot: Process Canceled]
Code Overview
We won't look at all of the code here; just the parts that we need to look at for testing. For a full overview of the BackgroundWorker component and how it helps us offload work to another thread, take a look at the walk-through and articles online: Keep Your UI Responsive with the BackgroundWorker Component.
For more information on using the BackgroundWorker component with the MVVM design pattern, take a look at this article: BackgroundWorker Component and MVVM.
If you'd prefer a video overview, you can watch my Pluralsight course: Introduction to the .NET BackgroundWorker Component.
View Model Properties
We'll concentrate on testing through the view model today. This is the "MainWindowViewModel.cs" file that is part of the "DataProcessor" project (from the GitHub code).
Here are the properties from the view model:
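The actual file is in the GitHub project; here's a rough sketch of the general shape, assuming a typical INotifyPropertyChanged implementation (the backing field and helper names are approximations, not the exact code):

```csharp
using System.ComponentModel;

public class MainWindowViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private int iterations = 50;
    public int Iterations
    {
        get { return iterations; }
        set { iterations = value; RaisePropertyChanged(nameof(Iterations)); }
    }

    private string output;
    public string Output
    {
        get { return output; }
        set { output = value; RaisePropertyChanged(nameof(Output)); }
    }

    private bool startEnabled = true;
    public bool StartEnabled
    {
        get { return startEnabled; }
        set { startEnabled = value; RaisePropertyChanged(nameof(StartEnabled)); }
    }

    // ProgressPercentage (int) and CancelEnabled (bool) follow the same pattern.

    private void RaisePropertyChanged(string propertyName)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}
```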
These directly relate to the UI elements.
- Iterations is databound to the "Iterations" box on the UI
- ProgressPercentage is hooked up to the progress bar
- Output is databound to the "Output" box
- StartEnabled determines whether the "Start" button is enabled
- CancelEnabled determines whether the "Cancel" button is enabled
In addition to these properties, there are 2 public methods:
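Again, here's a sketch of the general shape rather than the exact code -- "worker" stands in for the BackgroundWorker instance that the view model wires up when it is constructed:

```csharp
// Sketch -- "worker" is the view model's BackgroundWorker instance.
public void StartProcess()
{
    StartEnabled = false;
    CancelEnabled = true;
    // RunWorkerAsync raises the DoWork event on a background thread,
    // passing the iteration count along as the argument.
    worker.RunWorkerAsync(Iterations);
}

public void CancelProcess()
{
    // CancelAsync sets CancellationPending so the DoWork handler
    // can stop at its next opportunity.
    worker.CancelAsync();
}
```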
These methods are called when the corresponding buttons are clicked in the UI. The "StartProcess" method kicks off the background process using the BackgroundWorker component, and the "CancelProcess" method lets the BackgroundWorker know that cancellation is requested.
Determining What to Test
When I think about adding tests to existing code, I think about *what* I want to test. Then I can move forward to determine *how* to test it.
In testing the view model, I want to check the behavior of the various methods and how the properties are updated. I test the public members of my classes since this is how my class exposes itself to the outside world.
So I really want to make sure that the properties are getting updated, which means that we get the expected behavior from the application. (For the expected behavior, we can refer to the screenshots at the beginning of the article.)
Determining How to Test
Now we have to look at the best way to test this code. Ideally, I want granular unit tests that will verify the behavior of my view model and also the behavior of my background library. Unfortunately, we have some tight coupling between our view model and library, so granular unit tests will require some code modification.
Rather than modifying the code, we'll start out at a higher level of testing. These would be considered "integration tests" because we're really seeing how both our view model and library behave together. The reason that we're starting here is that we can write these tests *without* modifying the existing application.
This will give us a good starting point. Then from there, we can look at what code needs to change to make the unit tests easy to write.
Integration Tests
In addition to the "DataProcessor" project that we saw above, we have a separate project called "DataProcessor.Tests". This is a class library where we've added the NuGet packages for NUnit and the NUnit Test Adapter.
For more information on setting up NUnit and the Test Adapter, you can watch a minute or two of my TDD Basics video: TDD Basics @ 2:55.
Testing Output
Our tests are in the "MainViewModelTests.cs" file. Here's our first test:
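Here's a sketch of what this test looks like (the exact code, including the actual test names, is in the GitHub project on the 01-IntegrationTests branch):

```csharp
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class MainViewModelTests
{
    [Test]
    public async Task Output_OnCompletion_MatchesIterations()
    {
        // Arrange
        var vm = new MainWindowViewModel();
        vm.Iterations = 5;

        // Act: start the background process, then wait long enough
        // (5 iterations x 100 ms, plus some padding) for it to finish.
        vm.StartProcess();
        await Task.Delay(600);

        // Assert
        Assert.AreEqual(vm.Iterations.ToString(), vm.Output);
    }
}
```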
The purpose of this test is to verify that the "Output" value is the same as the "Iterations" value if we let the background process run to completion.
In the "Arrange" section, we create an instance of our view model, and then we set the "Iterations" property to 5. As a reminder, we're using the same background library that our application uses. This means that our background process will run for 500 ms (approximately).
In the "Act" section, we kick off our background process, and then we wait. The "await Task.Delay(600)" method will pause operation for 600ms. (And note that since we're using "await", we also need to mark our test method as "async". This works just fine in NUnit and most other testing frameworks.)
This pause is very important. Since our background process is running on a separate thread, our test code will continue to run. Rather than moving immediately to the "Assert", we give the background process time to complete. Having tests with delays in them is definitely not ideal, but this gets us started on the road to testing.
In the "Assert" section, we verify our expectations: that the value of our "Output" (a string) matches the value of "Iterations" (converted to string).
Another Output Test
If the process gets canceled partway through, then we expect that the "Output" property will contain the string "Canceled". Here's a test for that:
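Here's a sketch of that test (the pause after the cancel is shown as 100 ms here; check the GitHub project for the exact values):

```csharp
[Test]
public async Task Output_OnCancellation_IsCanceled()
{
    // Arrange
    var vm = new MainWindowViewModel();
    vm.Iterations = 5;

    // Act: give the process a chance to run, cancel it, then give the
    // BackgroundWorker a moment to handle the cancellation request.
    vm.StartProcess();
    await Task.Delay(100);
    vm.CancelProcess();
    await Task.Delay(100);

    // Assert
    Assert.AreEqual("Canceled", vm.Output);
}
```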
For the "Act" section here, we start the process, wait 100 ms, then cancel the process. The reason for the first pause is that we want to give the background process a bit of a chance to run before we cancel it. The second pause (after the cancel) is to give the background process a chance to handle the cancellation request.
Then our assertion just verifies that our "Output" value is "Canceled".
Testing StartEnabled
To make sure that the "StartEnabled" property is getting set appropriately, we'll look at 3 different cases: (1) before the process is started, (2) while the process is running, and (3) after the process is completed.
The first case is pretty simple:
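A sketch of that test:

```csharp
[Test]
public void StartEnabled_BeforeStart_IsTrue()
{
    // Arrange
    var vm = new MainWindowViewModel();

    // Assert (no "Act" needed -- we haven't started the process)
    Assert.IsTrue(vm.StartEnabled);
}
```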
We just initialize our view model and check the value of the property (which should be "true").
Here's our test for the "running" state:
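A sketch of this one:

```csharp
[Test]
public async Task StartEnabled_WhileRunning_IsFalse()
{
    // Arrange
    var vm = new MainWindowViewModel();
    vm.Iterations = 5;

    // Act: start the process and check partway through the ~500 ms run.
    vm.StartProcess();
    await Task.Delay(100);

    // Assert
    Assert.IsFalse(vm.StartEnabled);
}
```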
Here we start the process and then wait for 100 ms (reminder, our test case should take 500 ms to complete). Then we verify that the property is "false".
Lastly, we wait for completion:
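A sketch of the completed-state test (the sanity check shown here uses the "Output" value, as in the first test):

```csharp
[Test]
public async Task StartEnabled_AfterCompletion_IsTrue()
{
    // Arrange
    var vm = new MainWindowViewModel();
    vm.Iterations = 5;

    // Act: run to completion, with a sanity check that the process finished.
    vm.StartProcess();
    await Task.Delay(600);
    Assert.AreEqual(vm.Iterations.ToString(), vm.Output);

    // Assert
    Assert.IsTrue(vm.StartEnabled);
}
```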
Just as with our first "Output" test, we wait 600 ms to give our process time to complete. The first assertion (in the "Act" section) is a sanity check to verify the process is complete.
The second assertion (in the "Assert" section) is our verification that the "StartEnabled" goes back to the correct value ("true").
Testing CancelEnabled
The tests for the "CancelEnabled" property look just like the tests for the "StartEnabled" property -- except the expected state is reversed ("true" vs. "false"). We won't bother to look at the tests here, but you can see them in the code project.
Testing Progress
Our tests have been a bit less-than-ideal so far -- I don't like to have delays in my tests, particularly 1/2 second delays (those really add up). But our goal at this point is to get some useful tests out of our existing code.
The last property that I want to test is the "ProgressPercentage" (which is tied to the progress bar in the UI). Unfortunately, our current code doesn't give us enough information to let us know if the progress percentage value is accurate. Calculating that percentage is really a job for the background process, and in a future article, we'll look at modifying the code to make that testable.
What we *can* test with the current code is whether the "ProgressPercentage" property gets updated the correct number of times.
For our test example, we have an "Iterations" value of 5. On each iteration, our progress is updated. Based on that, we would expect that the "ProgressPercentage" gets updated 5 times if we run to completion.
However, the last step of our process is to reset the progress bar back down to "0". So there's actually 1 more update that is part of our process. This means our "ProgressPercentage" should change 6 times in our test scenario.
Tracking Changes
So how do we track how many times a property has been changed? For this, I pulled out the "PropertyChangeTracker" project that I've been working on. This is a helper class that hooks up to the "PropertyChanged" event (from "INotifyPropertyChanged"). Each time a property is changed, our tracker will know about it.
The existing tracker code knows how to track *when* a property changed. I added a new method to let us know *how many times* a property has been changed:
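The tracker lives in its own project; here's a sketch of what that new method could look like (the "notifications" field name is my stand-in for the tracker's internal list of received property names, not necessarily the real field name):

```csharp
// Sketch -- "notifications" stands in for the tracker's internal
// list of property names it has received change notifications for.
// (Requires System.Linq.)
public int ChangeCount(string propertyName)
{
    return notifications.Count(n => n == propertyName);
}
```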
This simply counts the number of times that a property name appears in the internal notifications list.
The Test
Here's our test for the "ProgressPercentage" property. This initial test will make sure that the property gets updated 6 times if we let things run to completion:
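Here's a sketch of that test (the tracker's constructor and the "Reset" / "ChangeCount" members are shown the way they're described below; see the GitHub project and the PropertyChangeTracker article for the exact code):

```csharp
[Test]
public async Task ProgressPercentage_OnCompletion_UpdatesExpectedNumberOfTimes()
{
    // Arrange
    var vm = new MainWindowViewModel();
    vm.Iterations = 5;
    // One update per iteration, plus one final update to reset the bar to 0.
    int expectedProgressCount = vm.Iterations + 1;
    var tracker = new PropertyChangeTracker(vm);

    // Act
    tracker.Reset();
    vm.StartProcess();
    await Task.Delay(600);
    int changeCount = tracker.ChangeCount(nameof(vm.ProgressPercentage));

    // Assert
    Assert.AreEqual(expectedProgressCount, changeCount);
}
```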
We have a couple of new items in our "Arrange" section. First, we set an "expectedProgressCount" variable based on the "Iterations" property. This is the value we will use in our assertion.
Then we create an instance of the PropertyChangeTracker (for details on usage and purpose, see the original article).
In the "Act" section, we reset the tracker. This will clear out any notifications that may have already been received. Then we start the process and let it run to completion.
The last step is to get the "ChangeCount" out of the tracker. We use the "nameof()" expression here to make sure we don't have any typos in our parameter.
In the "Assert" section, we compare our expected value with the value that we pull out of the change tracker.
Progress and Cancellation
There is one more test for "ProgressPercentage", and that's when we cancel the process. Again, since we can't easily get details on progress without cracking open our code, we'll just do a basic sanity check.
If our process is canceled, then we expect that the "ProgressPercentage" property is updated *fewer* than 6 times. Yeah, it's not a lot of information, but we'll go ahead and put a test in for this:
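Here's a sketch of that test:

```csharp
[Test]
public async Task ProgressPercentage_OnCancellation_UpdatesFewerTimes()
{
    // Arrange
    var vm = new MainWindowViewModel();
    vm.Iterations = 5;
    int fullRunProgressCount = vm.Iterations + 1;
    var tracker = new PropertyChangeTracker(vm);

    // Act: start, cancel partway through, then let the cancellation be handled.
    tracker.Reset();
    vm.StartProcess();
    await Task.Delay(100);
    vm.CancelProcess();
    await Task.Delay(100);
    int changeCount = tracker.ChangeCount(nameof(vm.ProgressPercentage));

    // Assert: fewer updates than a full run would produce.
    Assert.Less(changeCount, fullRunProgressCount);
}
```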
We've seen the various parts in previous tests, so we won't go through them again.
Test Results and Concerns
These tests all come up green with our existing code:
These tests are better than not having any tests at all, and we are able to validate functionality of both our view model and our background library.
But we do have a few problems.
Look at the times. Many of our tests take over 500 ms to run -- this is half a second! Our very small test suite takes 7 seconds to run. The longer our tests take to run, the less frequently we run them.
Look at what we're testing. We're testing *both* the view model and library functions here. That means if one of our tests fails, we'll need to do some digging to figure out what part of the code is actually failing.
Look at the pauses. I really don't like the idea of having "Task.Delay()" in any of my tests. I would rather have tests that rely on something more deterministic (which is one reason why I use the PropertyChangeTracker object in other code). With our tests, they may sometimes fail if something takes a bit longer to run. Inconsistency is not good.
These problems could be fixed by modifying our code and focusing on unit tests.
Unit Tests
With unit tests, we're looking at a single piece of functionality and verifying a single assumption. Let's look at a few things we could change to make this code easier to unit test.
First, add an interface for our background library. By adding an interface, we create a "seam" in our code. We can combine this with property injection to make it very easy to replace the real background library with a fake background library that we can use for testing. This would give us better isolation so that we could test our view model independently.
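I'm not sure yet exactly what this will look like (more on that below), but the general shape might be something like this -- the interface and class names here are placeholders, not code from the project:

```csharp
// Placeholder names -- the real interface members would come out of
// refactoring the actual background library.
public interface IBackgroundLibrary
{
    // ...whatever operations the view model needs from the library...
}

public class RealBackgroundLibrary : IBackgroundLibrary { }

public class MainWindowViewModel
{
    private IBackgroundLibrary library;

    // Property injection: the view model uses the real library by default,
    // and a test can assign a fake before starting the process.
    public IBackgroundLibrary Library
    {
        get { return library ?? (library = new RealBackgroundLibrary()); }
        set { library = value; }
    }
}
```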
Next, move progress reporting to the background library. In the current code, our calculation of the progress percentage is happening in our BackgroundWorker method (in the view model). This calculation should be moved to the library the BackgroundWorker is calling.
If we combine this with additional progress properties in our library/interface, we can verify that progress is being calculated correctly.
Then, modify the code to make it easier to test cancellation. One of our problems is "start / pause / cancel / pause" in our tests. By adding an interface, we can easily create a fake library that takes no time at all to finish; this would eliminate our pauses waiting for completion. But with a little bit more modification, we can write tests that would verify cancellation without having the awkward pauses.
I'm not exactly sure what this code will look like. It will take a bit of experimentation to make sure that we're eliminating the problems mentioned above.
Look forward to these changes in a future article and a new branch in the GitHub project.
Wrap Up
When we're trying to add tests to existing code, it's often best to take small steps. By looking at what we *can* test without code modification, we can get some of the benefits of automated testing. And some valid tests are better than no valid tests. (Note: having a bunch of invalid tests is *worse* than having no tests at all.)
From there, we can start to look at the shortcomings of the tests we're able to easily make. Then we can think about the tests we really want to write and ask ourselves "Why is it hard to write this test?" From there, we can look at loosening the tight-coupling, breaking out features, and adding bits of information that will make it much easier to write our tests.
My goal is always to have code that is easy to read and maintain and also have tests that are easy to read and maintain. This isn't always something we get to overnight -- especially when we have existing code. But we can take small steps, and we'll eventually get to where we want to be.
Happy Coding!
Hey Jeremy,
Just found your blog – it's great, thanks a lot for such great content.
But here, in this article, you wrote about upcoming improvements for unit testing the BackgroundWorker:
"Look forward to these changes in a future article and a new branch in the GitHub project."
I didn't find any further updates – maybe I missed something? Or could you help by providing some additional info / links on where to find more details on this topic?