We don't always get to start with new code; much of the time we're working with existing code and applications. Sometimes the code uses the Asynchronous Programming Model (APM) that relies on an IAsyncResult to deal with asynchronous methods. This methodology has always been a bit tricky to follow. Fortunately, we can convert those methods to the Task-based Asynchronous Pattern (TAP) by using a method that's included in Task (or more specifically, the TaskFactory).
Once we have a Task, we can take full advantage of the flexibility that Task affords us, including continuations, exception handling, and more.
For more information on using Task, check out the series of articles and videos available here: I'll Get Back to You: Task, Await, and Asynchronous Methods.
Let's take a look at some code that I developed while working on an actual application.
The code for this project is taken from the samples for "Clean Code: Homicidal Maniacs Read Code, Too!" available on GitHub: jeremybytes/clean-code-refactoring.
The Scenario: Asynchronous SOAP Proxy
I ran across this scenario while working as a contractor on a project. I worked on a client application that gets data from a series of SOAP services. The service proxies had already been generated for me, so I didn't have any control over them. Let's take a look at those proxy methods (well, not the same methods, but methods created the same way).
First, here's the service that we'll be using (PersonService.cs):
This is a pretty standard repository service. We're specifically looking at the "GetPeople" method that returns a list of Person objects.
To generate a proxy for a SOAP service, we right-click on our project references and select "Add Service Reference". This gives us the following dialog:
If we click on the "Advanced" button, we get the option to include asynchronous methods in our proxy class:
Notice that the radio button for "Generate asynchronous operations" is checked. This will generate APM methods that use IAsyncResult. (And notice, the option to "Generate task-based operations" is grayed-out. We won't go into the details of why this may or may not be available.)
For the application I was working with, the proxies had been generated with the APM methods. This gives us a pair of methods for each of our service methods.
Here are the "GetPeople" methods from the generated proxy class:
We have a pair of methods: "BeginGetPeople" and "EndGetPeople". These get tied together by passing the "IAsyncResult" return value from the "Begin" method as a parameter to the "End" method.
Unfortunately, using these methods directly is a bit complex. You can get an idea of it by looking at an example on MSDN. And that's one reason why we've mostly moved away from this pattern.
But again, sometimes we're stuck with code that has these methods, and we need to find a way to make use of them.
The Solution: Converting APM Methods to Task
Fortunately for us, there's a way to convert these difficult-to-use methods to a Task. Here's the code for that (from CatalogViewModel.cs):
For this, we call the "BeginGetPeople" method like we normally would. This gives us an "IAsyncResult" as a return value.
Then we use the "Task.Factory.FromAsync" method to convert this to a task. There are a number of overloads for this method. In this case, the first parameter is the "IAsyncResult" from our "Begin" method, and the second parameter is our "End" method.
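We can't compile the generated "GetPeople" proxy here, but the same pattern works with any APM pair. As a minimal sketch, here's the Stream class's real "BeginRead" / "EndRead" APM pair wrapped with "Task.Factory.FromAsync" in exactly the same way:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Stream exposes a classic APM pair: BeginRead / EndRead.
        var stream = new MemoryStream(new byte[] { 1, 2, 3, 4, 5 });
        var buffer = new byte[5];

        // Call the "Begin" method like we normally would to get an IAsyncResult...
        IAsyncResult asyncResult =
            stream.BeginRead(buffer, 0, buffer.Length, null, null);

        // ...then pass the IAsyncResult and the "End" method to FromAsync.
        Task<int> readTask = Task.Factory.FromAsync(asyncResult, stream.EndRead);

        // Now we have a real Task<int> (the number of bytes read).
        Console.WriteLine(readTask.Result);
    }
}
```

The "BeginGetPeople" / "EndGetPeople" proxy methods slot into the same two parameters, giving back a Task<List<Person>>.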
Once we have this task, we can use it like any other Task. This includes adding a continuation when the asynchronous process completes:
Our Task returns a "List<Person>", and we can access this through the "Result" property just like we normally would. From there, we can perform whatever other operations we need.
Notice that we have this continuation marked as "NotOnFaulted", so this is our success path. There is a separate continuation to handle a faulted task (which means an exception was thrown during the asynchronous process).
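The success/failure split described above can be sketched with any task; here's a small, self-contained example (using a simple Task.Run in place of the service call) showing a "NotOnFaulted" continuation paired with an "OnlyOnFaulted" continuation:

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Task<int> task = Task.Run(() => 42);

        // Success path: runs only when no exception was thrown.
        Task success = task.ContinueWith(
            t => Console.WriteLine($"Result: {t.Result}"),
            TaskContinuationOptions.NotOnFaulted);

        // Failure path: runs only when the task faulted,
        // i.e., an exception was thrown during the asynchronous process.
        task.ContinueWith(
            t => Console.WriteLine($"Error: {t.Exception.InnerException.Message}"),
            TaskContinuationOptions.OnlyOnFaulted);

        success.Wait();
    }
}
```

Since the task completes successfully here, only the "NotOnFaulted" continuation runs; the faulted continuation is skipped.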
As a side note, we're not using "await" here because this code was written in .NET 4.0 (before "await" was added). The code could not be upgraded to a newer version of .NET because the application needed to run on Windows XP machines (which only support up to .NET 4.0).
The entire method that uses this code is not very complex:
This is much easier than dealing with the APM methods directly (here's another MSDN example to show what's involved in that).
Wrap Up
Many times we're dealing with existing code that we don't have full control over. And that means we may have code which uses the Asynchronous Programming Model (APM) that passes around an IAsyncResult.
But rather than dealing with those methods directly, we can wrap them in a Task. This gives us access to the full power of using Task -- including continuations, exception handling, and much more.
And if we find that we have existing code that uses the APM methods (which can be difficult to debug and test), we can look at refactoring them to use Task instead. This also opens up the possibility of using the "await" operator to make our code even easier to read.
Just because we're working with existing code does not mean that we can't use more modern technologies. In the case of Task, we have a fairly easy way to update our APM methods to use Task instead.
Happy Coding!
Monday, March 21, 2016
Friday, March 11, 2016
Different Perspectives
In case you haven't figured it out yet, I'm a big believer in talking to developers with different experiences from mine. I was reminded of the importance of this last week when I ran into someone with a completely opposite viewpoint from mine.
I bumped into Troy Miles (@therockncoder) at a McDonalds prior to a user group. I've known Troy for several years; he has a *ton* of experience in mobile development and works for Cox Automotive (owners of Kelley Blue Book and Autotrader). He's usually carrying around several mobile devices and has done web, native, and hybrid development. If I have a question about mobile development, he's my first stop. Plus, he's also a really awesome guy.
I brought up Xamarin since it was in the news recently. And Troy said something unexpected:
"Does anyone like using Xamarin?"

The reason this surprised me is that I've heard an overwhelmingly positive response from developers talking about Xamarin.
Here I was talking to someone I'd known for several years who shares a lot of the same ideas about approaches to coding, and yet we had completely opposite views on this subject.

So this made me curious, and I asked him why the folks he's talked to don't like it. It turned out that we traveled in circles that had opposite views of the product.
Disclaimer: neither Troy nor I use Xamarin, so we're not qualified to give our own opinions on the product. This is just the impression we've received from other developers.
.NET Developers
My circle includes primarily .NET developers. The response that I've seen regarding Xamarin is very positive. Developers who love C# see the tool as an opportunity to get their code onto other devices (iOS and Android) while using a familiar environment.
Native Mobile Developers
Troy's circle includes primarily mobile developers -- specifically developers who are experienced at native development. The response that Troy's seen regarding Xamarin is mostly negative. Developers who are used to programming directly for the device see the tool as a layer of abstraction that gets in the way and takes away the fine-grained control.
Different Perspectives
This reinforces the importance of getting other perspectives. We all have different experiences that affect how we see the world. And the important part is that once we start sharing them, we can see each other's viewpoints.
I totally understand Troy's perspective. It makes sense that native developers could see things as a step backwards. And Troy understands my perspective. It makes sense that .NET developers could see things as a huge step forward. We both have another viewpoint to take into consideration for future discussions.
The more we talk to other developers who work in different areas and have various backgrounds, the more we'll be able to understand the options that are available to us. And we'll better be able to understand each other. With this in hand, we can come to good decisions -- together.
Happy Coding!
Wednesday, March 9, 2016
More TDD Videos
I've recently published two more videos that explore Test-Driven Development a bit more. If you're new to TDD, then you might want to start with TDD Basics in C#.
The latest videos use the rules for Conway's Game of Life as the problem to be solved. I ran across Conway's Game of Life many years ago when I was first getting involved with computers, and the patterns have intrigued me ever since.
TDD: Don't Turn Off Your Brain

Watch the video on YouTube: TDD: Don't Turn Off Your Brain
Test-Driven Development (TDD) lets our code develop out of our tests. But this doesn't mean that we turn off our brain. We still need to make decisions on our design as we write our tests. In this video, we'll take some "bigger steps" with TDD to implement the rules for Conway's Game of Life. Along the way, we'll see the types of design decisions we need to keep in mind.
Or watch it here:
TDD Debugging & Testing Exceptions

Watch the video on YouTube: TDD Debugging & Testing Exceptions
Test-Driven Development (TDD) lets our code develop out of our tests. But it is also extremely useful when we have to debug existing code. When we have a bug, we can first write a failing unit test, then write the code to get that test to pass. In addition to debugging, in this video, we'll see how we can test for exceptions in our tests. There are several approaches with different advantages.
Or watch it here:
For more articles on Conway's Game of Life and unit testing, take a look here: Coding Practice with Conway's Game of Life.
More videos on unit testing are on the way. Future topics will include using TDD with real-world applications, mocking, and testing asynchronous methods.
Happy Coding!
Wednesday, March 2, 2016
Testing with the BackgroundWorker Component
I recently received a question regarding how to test code that uses the BackgroundWorker component:

"I am trying to write tests using NUnit on an application utilizing BackgroundWorker. I have gone through your course and read some of the articles on your blog. I am wondering if you can give me any suggestions on how to do so?"

So let's explore this.
The code shown in this article is available on GitHub: jeremybytes/testing-backgroundworker-component. Specifically, look at the 01-IntegrationTests branch.
Application Overview
We'll start with our sample application that uses the BackgroundWorker component. Here's the functionality.
Application start:
Initially, the "Start" button is enabled, and the "Cancel" button is disabled.
When we click the "Start" button, it kicks off our long running process (using the BackgroundWorker component).
Here we can see the process running. The progress bar is updating, and we have a message "Iteration 21 of 50" that tells us how far along the process is. In addition, we can see the "Start" button is disabled, and the "Cancel" button is enabled.
If we let things run to completion, we get this output:
The "Output" has our final value (which should match the "Iterations" value), and our "Start" and "Cancel" buttons are reset to their initial states.
Our background process only does one thing in our sample: it pauses for 100 ms. This means that if we have an input value of "50", our entire process takes 5 seconds (5000 ms) to complete.
If we press "Cancel" partway through the process, we get this output:
All of these UI elements are data bound to properties in a view model. This gives us a separate class that is easier to test.
Code Overview
We won't look at all of the code here; just the parts that we need to look at for testing. For a full overview of the BackgroundWorker component and how it helps us offload work to another thread, take a look at the walk-through and articles online: Keep Your UI Responsive with the BackgroundWorker Component.
For more information on using the BackgroundWorker component with the MVVM design pattern, take a look at this article: BackgroundWorker Component and MVVM.
If you'd prefer a video overview, you can watch my Pluralsight course: Introduction to the .NET BackgroundWorker Component.
View Model Properties
We'll concentrate on testing through the view model today. This is the "MainWindowViewModel.cs" file that is part of the "DataProcessor" project (from the GitHub code).
Here are the properties from the view model:
These directly relate to the UI elements.
- Iterations is databound to the "Iterations" box on the UI
- ProgressPercentage is hooked up to the progress bar
- Output is databound to the "Output" box
- StartEnabled determines whether the "Start" button is enabled
- CancelEnabled determines whether the "Cancel" button is enabled
In addition to these properties, there are 2 public methods:
These methods are called when the corresponding buttons are clicked in the UI. The "StartProcess" method kicks off the background process using the BackgroundWorker component, and the "CancelProcess" method lets the BackgroundWorker know that cancellation is requested.
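To see the moving parts outside of the real view model, here's a stripped-down console sketch of the same start/cancel shape (the member names in the actual project may differ): a BackgroundWorker that loops with a 100 ms pause per iteration, checks "CancellationPending", and reports completion:

```csharp
using System;
using System.ComponentModel;
using System.Threading;

class Program
{
    static void Main()
    {
        var worker = new BackgroundWorker { WorkerSupportsCancellation = true };
        var done = new ManualResetEventSlim();

        worker.DoWork += (s, e) =>
        {
            int iterations = (int)e.Argument;
            for (int i = 0; i < iterations; i++)
            {
                // CancelProcess would call worker.CancelAsync(),
                // which sets CancellationPending.
                if (worker.CancellationPending) { e.Cancel = true; return; }
                Thread.Sleep(100);   // simulated work (100 ms per iteration)
            }
            e.Result = iterations;
        };

        worker.RunWorkerCompleted += (s, e) =>
        {
            Console.WriteLine(e.Cancelled ? "Canceled" : $"Output: {e.Result}");
            done.Set();
        };

        worker.RunWorkerAsync(5);   // the StartProcess equivalent
        done.Wait();                // keep the console app alive until completion
    }
}
```

With 5 iterations at 100 ms each, this runs for about half a second before printing the result.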
Determining What to Test
When I think about adding tests to existing code, I think about *what* I want to test. Then I can move forward to determine *how* to test it.
In testing the view model, I want to check the behavior of the various methods and how the properties are updated. I test the public members of my classes since this is how my class exposes itself to the outside world.
So I really want to make sure that the properties are getting updated which means that we get the expected behavior from the application. (For the expected behavior, we can refer to the screenshots at the beginning of the article.)
Determining How to Test
Now we have to look at the best way to test this code. Ideally, I want granular unit tests that will verify the behavior of my view model and also the behavior of my background library. Unfortunately, we have some tight coupling between our view model and library, so granular unit tests will require some code modification.
Rather than modifying the code, we'll start out at a higher level of testing. These would be considered "integration tests" because we're really seeing how both our view model and library behave together. The reason that we're starting here is that we can write these tests *without* modifying the existing application.
This will give us a good starting point. Then from there, we can look at what code needs to change to make the unit tests easy to write.
Integration Tests
In addition to the "DataProcessor" project that we saw above, we have a separate project called "DataProcessor.Tests". This is a class library where we've added the NuGet packages for NUnit and the NUnit Test Adapter.
For more information on setting up NUnit and the Test Adapter, you can watch a minute or two of my TDD Basics video: TDD Basics @ 2:55.
Testing Output
Our tests are in the "MainViewModelTests.cs" file. Here's our first test:
The purpose of this test is to verify that the "Output" value is the same as the "Iterations" value if we let the background process run to completion.
In the "Arrange" section, we create an instance of our view model, and then we set the "Iterations" property to 5. As a reminder, we're using the same background library that our application uses. This means that our background process will run for 500 ms (approximately).
In the "Act" section, we kick off our background process, and then we wait. The "await Task.Delay(600)" method will pause operation for 600ms. (And note that since we're using "await", we also need to mark our test method as "async". This works just fine in NUnit and most other testing frameworks.)
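Putting the Arrange/Act/Assert pieces together, the test looks roughly like this (a sketch only; the exact member names come from the GitHub project, so treat these as approximations):

```csharp
[Test]
public async Task Output_WithCompletedProcess_MatchesIterations()
{
    // Arrange
    var viewModel = new MainWindowViewModel();
    viewModel.Iterations = 5;

    // Act: start the background process, then give it time to finish
    viewModel.StartProcess();
    await Task.Delay(600);

    // Assert
    Assert.AreEqual(viewModel.Iterations.ToString(), viewModel.Output);
}
```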
This pause is very important. Since our background process is running on a separate thread, our test code will continue to run. Rather than moving immediately to the "Assert", we give the background process time to complete. Having tests with delays in them is definitely not ideal, but this gets us started on the road to testing.
In the "Assert" section, we verify our expectations: that the value of our "Output" (a string) matches the value of "Iterations" (converted to string).
Another Output Test
If the process gets canceled partway through, then we expect that the "Output" property will contain the string "Canceled". Here's a test for that:
For the "Act" section here, we start the process, wait 100 ms, then cancel the process. The reason for the first pause is that we want to give the background process a bit of a chance to run before we cancel it. The second pause (after the cancel) is to give the background process a chance to handle the cancellation request.
Then our assertion just verifies that our "Output" value is "Canceled".
Testing StartEnabled
To make sure that the "StartEnabled" property is getting set appropriately, we'll look at 3 different cases: (1) before the process is started, (2) while the process is running, and (3) after the process is completed.
The first case is pretty simple:
We just initialize our view model and check the value of the property (which should be "true").
Here's our test for the "running" state:
Here we start the process and then wait for 100 ms (reminder, our test case should take 500 ms to complete). Then we verify that the property is "false".
Lastly, we wait for completion:
Just as with our first "Output" test, we wait 600 ms to give our process time to complete. The first assertion (in the "Act" section) is a sanity check to verify the process is complete.
The second assertion (in the "Assert" section) is our verification that the "StartEnabled" goes back to the correct value ("true").
Testing CancelEnabled
The tests for the "CancelEnabled" property look just like the tests for the "StartEnabled" property -- except the expected state is reversed ("true" vs. "false"). We won't bother to look at the tests here, but you can see them in the code project.
Testing Progress
Our tests have been a bit less-than-ideal so far -- I don't like to have delays in my tests, particularly 1/2 second delays (those really add up). But our goal at this point is to get some useful tests out of our existing code.
The last property that I want to test is the "ProgressPercentage" (which is tied to the progress bar in the UI). Unfortunately, our current code doesn't give us enough information to let us know if the progress percentage value is accurate. Calculating that percentage is really a job for the background process, and in a future article, we'll look at modifying the code to make that testable.
What we *can* test with the current code is to make sure that the "ProgressPercentage" property gets updated the correct number of times.
For our test example, we have an "Iterations" value of 5. On each iteration, our progress is updated. Based on that, we would expect that the "ProgressPercentage" gets updated 5 times if we run to completion.
However, the last step of our process is to reset the progress bar back down to "0". So there's actually 1 more update that is part of our process. This means our "ProgressPercentage" should change 6 times in our test scenario.
Tracking Changes
So how do we track how many times a property has been changed? For this, I pulled out the "PropertyChangeTracker" project that I've been working on. This is a helper class that hooks up to the "INotifyPropertyChanged" event. Each time a property is changed, our tracker will know about it.
The existing tracker code knows how to track *when* a property changed. I added a new method to let us know *how many times* a property has been changed:
This simply counts the number of times that a property name appears in the internal notifications list.
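As a simplified stand-in for the real PropertyChangeTracker (the actual class hooks the "INotifyPropertyChanged" event; this sketch just records names directly), the counting logic is a one-line LINQ query over the notifications list:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Tracker
{
    private readonly List<string> notifications = new List<string>();

    // In the real tracker, this would be driven by the
    // INotifyPropertyChanged.PropertyChanged event.
    public void Record(string propertyName) => notifications.Add(propertyName);

    // Counts how many times a property name appears in the notifications list.
    public int ChangeCount(string propertyName) =>
        notifications.Count(n => n == propertyName);
}

class Program
{
    static void Main()
    {
        var tracker = new Tracker();
        tracker.Record("ProgressPercentage");
        tracker.Record("Output");
        tracker.Record("ProgressPercentage");

        Console.WriteLine(tracker.ChangeCount("ProgressPercentage"));
    }
}
```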
The Test
Here's our test for the "ProgressPercentage" property. This initial test will make sure that the property gets updated 6 times if we let things run to completion:
We have a couple of new items in our "Arrange" section. First, we set an "expectedProgressCount" variable based on the "Iterations" property. This is the value we will use in our assertion.
Then we create an instance of the PropertyChangeTracker (for details on usage and purpose, see the original article).
In the "Act" section, we reset the tracker. This will clear out any notifications that may have already been received. Then we start the process and let it run to completion.
The last step is to get the "ChangeCount" out of the tracker. We use the "nameof()" expression here to make sure we don't have any typos in our parameter.
In the "Assert" section, we compare our expected value with the value that we pull out of the change tracker.
Progress and Cancellation
There is one more test for "ProgressPercentage", and that's when we cancel the process. Again, since we can't easily get details on progress without cracking open our code, we'll just do a basic sanity check.
If our process is canceled, then we expect that the "ProgressPercentage" property is updated *fewer* than 6 times. Yeah, it's not a lot of information, but we'll go ahead and put a test in for this:
We've seen the various parts in previous tests, so we won't go through them again.
Test Results and Concerns
These tests all come up green with our existing code:
These tests are better than not having any tests at all, and we are able to validate functionality of both our view model and our background library.
But we do have a few problems.
Look at the times. Many of our tests take over 500 ms to run -- this is half a second! Our very small test suite takes 7 seconds to run. The longer our tests take to run, the less frequently we run them.
Look at what we're testing. We're testing *both* the view model and library functions here. That means if one of our tests fails, we'll need to do some digging to figure out what part of the code is actually failing.
Look at the pauses. I really don't like the idea of having "Task.Delay()" in any of my tests. I would rather have tests that rely on something more deterministic (which is one reason why I use the PropertyChangeTracker object in other code). As they stand, our tests may sometimes fail if something takes a bit longer to run. Inconsistency is not good.
These problems could be fixed by modifying our code and focusing on unit tests.
Unit Tests
With unit tests, we're looking at a single piece of functionality and verifying a single assumption. Let's look at a few things we could change to make this code easier to unit test.
First, add an interface for our background library. By adding an interface, we create a "seam" in our code. We can combine this with property injection to make it very easy to replace the real background library with a fake background library that we can use for testing. This would give us better isolation so that we could test our view model independently.
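Here's a minimal sketch of what that seam and property injection could look like (the interface and member names are my own guesses, not the project's actual code): a view model that defaults to a library implementation but lets a test swap in a fake through a property setter.

```csharp
using System;

// The "seam": the view model depends on an abstraction,
// not on the concrete background library.
public interface IProcessLibrary
{
    int DoWork(int iterations);
}

public class FakeProcessLibrary : IProcessLibrary
{
    // Finishes instantly, so tests need no Task.Delay pauses.
    public int DoWork(int iterations) => iterations;
}

public class ViewModel
{
    // Property injection: production code uses the default;
    // tests replace it with a fake through the setter.
    public IProcessLibrary Library { get; set; } = new FakeProcessLibrary();

    public string Output { get; private set; } = "";

    public void StartProcess(int iterations) =>
        Output = Library.DoWork(iterations).ToString();
}

class Program
{
    static void Main()
    {
        var vm = new ViewModel { Library = new FakeProcessLibrary() };
        vm.StartProcess(5);
        Console.WriteLine(vm.Output);
    }
}
```

With the fake in place, the view model can be tested in complete isolation, and the tests run in milliseconds instead of half-seconds.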
Next, move progress reporting to the background library. In the current code, our calculation of the progress percentage is happening in our BackgroundWorker method (in the view model). This calculation should be moved to the library the BackgroundWorker is calling.
If we combine this with additional progress properties in our library/interface, we can verify that progress is being calculated correctly.
Then, modify the code to make it easier to test cancellation. One of our problems is "start / pause / cancel / pause" in our tests. By adding an interface, we can easily create a fake library that takes no time at all to finish; this would eliminate our pauses waiting for completion. But with a little bit more modification, we can write tests that would verify cancellation without having the awkward pauses.
I'm not exactly sure what this code will look like. It will take a bit of experimentation to make sure that we're eliminating the problems mentioned above.
Look forward to these changes in a future article and a new branch in the GitHub project.
Wrap Up
When we're trying to add tests to existing code, it's often best to take small steps. By looking at what we *can* test without code modification, we can get some of the benefits of automated testing. And some valid tests are better than no valid tests. (Note: having a bunch of invalid tests is *worse* than having no tests at all.)
From there, we can start to look at the shortcomings of the tests we're able to easily make. Then we can think about the tests we really want to write and ask ourselves "Why is it hard to write this test?" From there, we can look at loosening the tight-coupling, breaking out features, and adding bits of information that will make it much easier to write our tests.
My goal is always to have code that is easy to read and maintain and also have tests that are easy to read and maintain. This isn't always something we get to overnight -- especially when we have existing code. But we can take small steps, and we'll eventually get to where we want to be.
Happy Coding!
I am trying to write tests using NUnit on an application utilizing BackgroundWorker. I have gone through your course and read some of the articles on your blog. I am wondering if you can give me any suggestions on how to do so?So let's explore this.
The code shown in this article is available on GitHub: jeremybytes/testing-backgroundworker-component. Specifically, look at the 01-IntegrationTests branch.
Application Overview
We'll start with our sample application that uses the BackgroundWorker component. Here's the functionality.
Application start:
Fresh Start |
When we click the "Start" button, it kicks off our long running process (using the BackgroundWorker component).
Process Running |
Here we can see the process running. The progress bar is updating, and we have a message "Iteration 21 of 50" that tells us how far along the process is. In addition, we can see the "Start" button is disabled, and the "Cancel" button is enabled.
If we let things run to completion, we get this output:
Process Complete |
Our background process only does one thing in our sample: it pauses for 100 ms. This means that if we have an input value of "50", our entire process takes 5 seconds (5000 ms) to complete.
If we press "Cancel" partway through the process, we get this output:
Process Canceled |
Code Overview
We won't look at all of the code here; just the parts that we need to look at for testing. For a full overview of the BackgroundWorker component and how it helps us offload work to another thread, take a look at the walk-through and articles online: Keep Your UI Responsive with the BackgroundWorker Component.
For more information on using the BackgroundWorker component with the MVVM design pattern, take a look a this article: BackgroundWorker Component and MVVM.
If you'd prefer a video overview, you can watch my Pluralsight course: Introduction to the .NET BackgroundWorker Component.
View Model Properties
We'll concentrate on testing through the view model today. This is the "MainWindowViewModel.cs" file that is part of the "DataProcessor" project (from the GitHub code).
Here are the properties from the view model:
These directly relate to the UI elements.
- Iterations is databound to the "Iterations" box on the UI
- ProgressPercentage is hooked up the progress bar
- Output is databound to the "Output" box
- StartEnabled determines whether the "Start" button is enabled
- CancelEnabled determines whether the "Cancel" button is enabled
In addition to these properties, there are 2 public methods:
These methods are called when the corresponding buttons are clicked in the UI. The "StartProcess" method kicks off the background process using the BackgroundWorker component, and the "CancelProcess" method lets the BackgroundWorker know that cancellation is requested.
Determining What to Test
When I think about adding tests to existing code, I think about *what* I want to test. Then I can move forward to determine *how* to test it.
In testing the view model, I want to check the behavior of the various methods and how the properties are updated. I test the public members of my classes since this is how my class exposes itself to the outside world.
So I really want to make sure that the properties are getting updated which means that we get the expected behavior from the application. (For the expected behavior, we can refer to the screenshots at the beginning of the article.)
Determining How to Test
Now we have to look at the best way to test this code. Ideally, I want granular unit tests that will verify the behavior of my view model and also the behavior of my background library. Unfortunately, we have some tight coupling between our view model and library, so granular unit tests will require some code modification.
Rather than modifying the code, we'll start out at a higher level of testing. These would be considered "integration tests" because we're really seeing how both our view model and library behave together. The reason that we're starting here is that we can write these tests *without* modifying the existing application.
This will give us a good starting point. Then from there, we can look at what code needs to change to make the unit tests easy to write.
Integration Tests
In addition to the "DataProcessor" project that we saw above, we have a separate project called "DataProcessor.Tests". This is a class library where we've added the NuGet packages for NUnit and the NUnit Test Adapter.
For more information on setting up NUnit and the Test Adapter, you can watch a minute or two of my TDD Basics video: TDD Basics @ 2:55.
Testing Output
Our tests are in the "MainViewModelTests.cs" file. Here's our first test:
The purpose of this test is to verify that the "Output" value is the same as the "Iterations" value if we let the background process run to completion.
In the "Arrange" section, we create an instance of our view model, and then we set the "Iterations" property to 5. As a reminder, we're using the same background library that our application uses. This means that our background process will run for 500 ms (approximately).
In the "Act" section, we kick off our background process, and then we wait. The "await Task.Delay(600)" method will pause operation for 600ms. (And note that since we're using "await", we also need to mark our test method as "async". This works just fine in NUnit and most other testing frameworks.)
This pause is very important. Since our background process is running on a separate thread, our test code will continue to run. Rather than moving immediately to the "Assert", we give the background process time to complete. Having tests with delays in them is definitely not ideal, but this gets us started on the road to testing.
In the "Assert" section, we verify our expectations: that the value of our "Output" (a string) matches the value of "Iterations" (converted to string).
Another Output Test
If the process gets canceled partway through, then we expect that the "Output" property will contain the string "Canceled". Here's a test for that:
For the "Act" section here, we start the process, wait 100 ms, then cancel the process. The reason for the first pause is that we want to give the background process a bit of a chance to run before we cancel it. The second pause (after the cancel) is to give the background process a chance to handle the cancellation request.
Then our assertion just verifies that our "Output" value is "Canceled".
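Here's a sketch of what that looks like (again, an approximation of the sample project code):

```csharp
[Test]
public async Task Output_OnCancel_IsCanceled()
{
    // Arrange
    var viewModel = new MainViewModel();
    viewModel.Iterations = 5;

    // Act
    viewModel.StartProcess();
    await Task.Delay(100);  // give the process a chance to get partway through
    viewModel.CancelProcess();
    await Task.Delay(100);  // give the process a chance to handle the cancellation

    // Assert
    Assert.AreEqual("Canceled", viewModel.Output);
}
```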
Testing StartEnabled
To make sure that the "StartEnabled" property is getting set appropriately, we'll look at 3 different cases: (1) before the process is started, (2) while the process is running, and (3) after the process is completed.
The first case is pretty simple:
We just initialize our view model and check the value of the property (which should be "true").
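In sketch form (test name approximated):

```csharp
[Test]
public void StartEnabled_BeforeStart_IsTrue()
{
    // Arrange / Act
    var viewModel = new MainViewModel();

    // Assert
    Assert.IsTrue(viewModel.StartEnabled);
}
```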
Here's our test for the "running" state:
Here we start the process and then wait for 100 ms (reminder, our test case should take 500 ms to complete). Then we verify that the property is "false".
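Something like this (approximated from the sample project):

```csharp
[Test]
public async Task StartEnabled_WhileRunning_IsFalse()
{
    // Arrange
    var viewModel = new MainViewModel();
    viewModel.Iterations = 5;

    // Act
    viewModel.StartProcess();
    await Task.Delay(100);  // the process is still running (it takes ~500 ms)

    // Assert
    Assert.IsFalse(viewModel.StartEnabled);
}
```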
Lastly, we wait for completion:
Just as with our first "Output" test, we wait 600 ms to give our process time to complete. The first assertion (in the "Act" section) is a sanity check to verify the process is complete.
The second assertion (in the "Assert" section) is our verification that the "StartEnabled" goes back to the correct value ("true").
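As a sketch:

```csharp
[Test]
public async Task StartEnabled_AfterCompletion_IsTrue()
{
    // Arrange
    var viewModel = new MainViewModel();
    viewModel.Iterations = 5;

    // Act
    viewModel.StartProcess();
    await Task.Delay(600);  // give the process time to complete
    // sanity check: the process really is complete
    Assert.AreEqual(viewModel.Iterations.ToString(), viewModel.Output);

    // Assert
    Assert.IsTrue(viewModel.StartEnabled);
}
```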
Testing CancelEnabled
The tests for the "CancelEnabled" property look just like the tests for the "StartEnabled" property -- except the expected state is reversed ("true" vs. "false"). We won't bother to look at the tests here, but you can see them in the code project.
Testing Progress
Our tests have been a bit less-than-ideal so far -- I don't like to have delays in my tests, particularly 1/2 second delays (those really add up). But our goal at this point is to get some useful tests out of our existing code.
The last property that I want to test is the "ProgressPercentage" (which is tied to the progress bar in the UI). Unfortunately, our current code doesn't give us enough information to let us know if the progress percentage value is accurate. Calculating that percentage is really a job for the background process, and in a future article, we'll look at modifying the code to make that testable.
What we *can* test with the current code is that the "ProgressPercentage" property gets updated the correct number of times.
For our test example, we have an "Iterations" value of 5. On each iteration, our progress is updated. Based on that, we would expect that the "ProgressPercentage" gets updated 5 times if we run to completion.
However, the last step of our process is to reset the progress bar back down to "0". So there's actually 1 more update that is part of our process. This means our "ProgressPercentage" should change 6 times in our test scenario.
Tracking Changes
So how do we track how many times a property has been changed? For this, I pulled out the "PropertyChangeTracker" project that I've been working on. This is a helper class that hooks up to the "INotifyPropertyChanged" event. Each time a property is changed, our tracker will know about it.
The existing tracker code knows how to track *when* a property changed. I added a new method to let us know *how many times* a property has been changed:
From the PropertyChangeTracker project:
This simply counts the number of times that a property name appears in the internal notifications list.
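The new method is just a count over the internal list (a sketch; the field name for the internal notifications list is my assumption):

```csharp
// Counts how many times a property name appears in the
// internal notifications list (requires "using System.Linq;").
public int ChangeCount(string propertyName)
{
    return notifications.Count(n => n == propertyName);
}
```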
The Test
Here's our test for the "ProgressPercentage" property. This initial test will make sure that the property gets updated 6 times if we let things run to completion:
We have a couple of new items in our "Arrange" section. We set an "expectedProgressCount" variable based on the "Iterations" property. This is the value we will use in our assertion.
Then we create an instance of the PropertyChangeTracker (for details on usage and purpose, see the original article).
In the "Act" section, we reset the tracker. This will clear out any notifications that may have already been received. Then we start the process and let it run to completion.
The last step is to get the "ChangeCount" out of the tracker. We use the "nameof()" expression here to make sure we don't have any typos in our parameter.
In the "Assert" section, we compare our expected value with the value that we pull out of the change tracker.
Progress and Cancellation
There is one more test for "ProgressPercentage", and that's when we cancel the process. Again, since we can't easily get details on progress without cracking open our code, we'll just do a basic sanity check.
If our process is canceled, then we expect that the "ProgressPercentage" property is updated *fewer* than 6 times. Yeah, it's not a lot of information, but we'll go ahead and put a test in for this:
We've seen the various parts in previous tests, so we won't go through them again.
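For reference, a sketch of that test (approximated):

```csharp
[Test]
public async Task ProgressPercentage_OnCancel_UpdatesFewerTimes()
{
    // Arrange
    var viewModel = new MainViewModel();
    viewModel.Iterations = 5;
    int maxProgressCount = viewModel.Iterations + 1;
    var tracker = new PropertyChangeTracker(viewModel);

    // Act
    tracker.Reset();
    viewModel.StartProcess();
    await Task.Delay(100);
    viewModel.CancelProcess();
    await Task.Delay(100);
    int changeCount = tracker.ChangeCount(nameof(viewModel.ProgressPercentage));

    // Assert
    Assert.Less(changeCount, maxProgressCount);
}
```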
Test Results and Concerns
These tests all come up green with our existing code:
These tests are better than not having any tests at all, and we are able to validate functionality of both our view model and our background library.
But we do have a few problems.
Look at the times. Many of our tests take over 500 ms to run -- this is half a second! Our very small test suite takes 7 seconds to run. The longer our tests take to run, the less frequently we run them.
Look at what we're testing. We're testing *both* the view model and library functions here. That means if one of our tests fails, we'll need to do some digging to figure out what part of the code is actually failing.
Look at the pauses. I really don't like the idea of having "Task.Delay()" in any of my tests. I would rather have tests that rely on something more deterministic (which is one reason why I use the PropertyChangeTracker object in other code). As they stand, our tests may sometimes fail if something takes a bit longer to run than expected. Inconsistency is not good.
These problems can be fixed by modifying our code and focusing on unit tests.
Unit Tests
With unit tests, we're looking at a single piece of functionality and verifying a single assumption. Let's look at a few things we could change to make this code easier to unit test.
First, add an interface for our background library. By adding an interface, we create a "seam" in our code. We can combine this with property injection to make it very easy to replace the real background library with a fake background library that we can use for testing. This would give us better isolation so that we could test our view model independently.
Next, move progress reporting to the background library. In the current code, our calculation of the progress percentage is happening in our BackgroundWorker method (in the view model). This calculation should be moved to the library the BackgroundWorker is calling.
If we combine this with additional progress properties in our library/interface, we can verify that progress is being calculated correctly.
Then, modify the code to make it easier to test cancellation. One of our problems is "start / pause / cancel / pause" in our tests. By adding an interface, we can easily create a fake library that takes no time at all to finish; this would eliminate our pauses waiting for completion. But with a little bit more modification, we can write tests that would verify cancellation without having the awkward pauses.
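A rough sketch of the direction (all of these names are hypothetical; the real shape will take some experimentation):

```csharp
// The "seam": an interface for the background library.
public interface IBackgroundProcess
{
    string DoLongProcess(int iterations);
}

// A fake implementation for tests -- it finishes instantly,
// so the tests need no Task.Delay() pauses.
public class FakeBackgroundProcess : IBackgroundProcess
{
    public string DoLongProcess(int iterations)
    {
        return iterations.ToString();
    }
}

// In the view model, property injection lets tests swap in the fake
// while the application gets the real library by default.
public class MainViewModel
{
    private IBackgroundProcess process;
    public IBackgroundProcess Process
    {
        get { return process ?? (process = new RealBackgroundProcess()); }
        set { process = value; }
    }
    // ...
}
```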
I'm not exactly sure what this code will look like. It will take a bit of experimentation to make sure that we're eliminating the problems mentioned above.
Look forward to these changes in a future article and a new branch in the GitHub project.
Wrap Up
When we're trying to add tests to existing code, it's often best to take small steps. By looking at what we *can* test without code modification, we can get some of the benefits of automated testing. And some valid tests are better than no valid tests. (Note: having a bunch of invalid tests is *worse* than having no tests at all.)
From there, we can start to look at the shortcomings of the tests we're able to easily make. Then we can think about the tests we really want to write and ask ourselves "Why is it hard to write this test?" From there, we can look at loosening the tight-coupling, breaking out features, and adding bits of information that will make it much easier to write our tests.
My goal is always to have code that is easy to read and maintain and also have tests that are easy to read and maintain. This isn't always something we get to overnight -- especially when we have existing code. But we can take small steps, and we'll eventually get to where we want to be.
Happy Coding!
Tuesday, March 1, 2016
New Video: Task & Await Recorded at NDC London 2016
As mentioned previously, I had an awesome time at NDC London back in January. One great thing about NDC is that they record the sessions and make them available. Today, the recording of my session was posted.
I'll Get Back to You: Task, Await, and Asynchronous Methods in C# - Jeremy Clark
Watch it on Vimeo: I'll Get Back to You.
There's a lot of confusion about async/await, Task/TPL, and asynchronous and parallel programming in general. So let's start with the basics and look at how we can consume asynchronous methods using Task and then see how the "await" operator can make things easier for us. Along the way, we'll look at continuations, cancellation, and exception handling.
Or watch it here:
Slides and code samples are available here: Session Materials: I'll Get Back to You.
This has become one of my more popular topics this year. And I've had a great response. Here are some of the comments from the Twitterverse from NDC London:
@jeremybytes thanked you in person at #ndslondon but again THANK YOU!Finally cleared up TPL async & await for me!Now to replay for my team!— Gray King (@gray_king) February 1, 2016
I've finally cleared up my async understanding thanks to @jeremybytes #ndclondon— Andy Clarke (@theMagicWeasel) January 15, 2016
Task.exception.Flatten().InnerExceptions - where have you been all my (developer) life??? Thanks @jeremybytes #ndclondon— João Lebre (@jplebre) January 15, 2016
@jeremybytes #ndclondon Great talk on async, await and tasks in C#. Good job!— Tim Norris (@NozzaTwit) January 15, 2016
"Completed does not mean what you think it means" yeah, thanks Microsoft #ndclondon @jeremybytes pic.twitter.com/2qNNBojHg1— João Lebre (@jplebre) January 15, 2016
— ~/developers/chris (@ChrisAnnODell) January 15, 2016
— Jake Greenwood (@jakegreenwood) January 15, 2016
@jeremybytes was awesome in #ndclondon— Jenya Y. (@jenyaye) January 15, 2016
— Nick Foster (@goldenhornet) January 15, 2016
— Anthony Chu (@nthonyChu) January 15, 2016
The total count on the evaluation cards: 157 Green, 7 Yellow, and 0 Red.
Look for Your Chance
If you'd like to see this presentation LIVE, be sure to check out my speaking schedule. You can see my upcoming confirmed events on my homepage: www.jeremybytes.com. I have several proposals out there, so be sure to check back for updates as events are confirmed.
If you're in California, I'll be at user groups in Los Angeles, Berkeley, and Fresno over the next 6 weeks. And I'll also be presenting this topic at KCDC in June.
And, of course, I have lots of other topics that I love to talk about. I hope to see you at an event soon!
Happy Coding!