When creating unit tests, we come across blocks of code that appear to be difficult to test. But with some thought, we can come up with some interesting solutions. Previously, we took a look at a test helper class that allows us to track changes to properties.
In recent articles, we've improved our PropertyChangeTracker class so that it works when "all properties" are updated, and we've added a finer-grained timeout. This time, we'll create some asynchronous tests since that's the primary use case for the tracker class.
Articles in this Series
* Tracking Property Changes in Unit Tests
* Update 1: Supporting "All Properties"
* Update 2: Finer-Grained Timeout
* Update 3: Async Tests (this article)
Note: The completed code for this article can be found in the "TestConsolidation" branch of the GitHub project: jeremybytes/property-change-tracker.
To simplify our tests, we'll reduce the number of fake classes that we have. When we originally set up the tests, we created separate fake classes for the different ways that we can implement INotifyPropertyChanged.
The result was that we had 3 sets of tests with 4 tests each (12 tests in total).
As we've seen in a previous article, all 3 of these fake classes actually get compiled down to the same code. Because of this, we really only need to have 1 fake class.
In addition, we should take a look at what we really need to be testing. Our tracker class is a test helper that ties into the "PropertyChanged" event on any class that implements "INotifyPropertyChanged". We aren't testing whether the PropertyChanged event is fired at the right time; we're testing that our tracker class behaves appropriately when the event *is* fired.
This means that we really only need one fake class (and consequently, one set of tests).
One Fake Class
So we'll get rid of the "FakeClassStandardProperty" and "FakeClassCallerMemberName". We'll hang onto the fake class that uses the "nameof()" expression, but we'll rename it to "FakePropertiesClass".
This gives us a single fake class. Here's a sample property:
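As a sketch, here's what a property in "FakePropertiesClass" might look like (the backing field and event-helper names are assumptions; the key point is the nameof() expression used when raising the event):

```csharp
using System.ComponentModel;

// Sketch of "FakePropertiesClass". Member names other than LastName
// are assumptions; the property raises PropertyChanged using nameof().
public class FakePropertiesClass : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string lastName;
    public string LastName
    {
        get { return lastName; }
        set
        {
            lastName = value;
            RaisePropertyChanged(nameof(LastName));
        }
    }

    // Assumed helper: raises the event with the supplied property name.
    private void RaisePropertyChanged(string propertyName)
    {
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
    }
}
```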
The contents of the class are unchanged; only the name is different.
One Set of Tests
We can also delete our other test classes so that we only have one set of tests. Because we only have one set of tests, we can simplify the test names a bit as well.
Here are the first 2 tests for updates to a single property:
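A sketch of what these might look like (the test names, and the tracker's API -- a constructor taking the object to watch, and a WaitForChange(propertyName, maxWaitMilliseconds) method returning bool -- are assumptions based on the earlier articles):

```csharp
// Assumed test names and tracker API; see the earlier articles in
// this series for the actual PropertyChangeTracker implementation.
[TestMethod]
public void Tracker_SinglePropertyUpdated_WaitForChangeReturnsTrue()
{
    var fake = new FakePropertiesClass();
    var tracker = new PropertyChangeTracker(fake);

    fake.LastName = "TestValue";

    Assert.IsTrue(tracker.WaitForChange(nameof(fake.LastName), 100));
}

[TestMethod]
public void Tracker_DifferentPropertyUpdated_WaitForChangeReturnsFalse()
{
    var fake = new FakePropertiesClass();
    var tracker = new PropertyChangeTracker(fake);

    // Update a different property (FirstName is assumed on the fake);
    // waiting on LastName should come back false.
    fake.FirstName = "TestValue";

    Assert.IsFalse(tracker.WaitForChange(nameof(fake.LastName), 20));
}
```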
In addition, we have tests that check the functionality of the timeout and whether our tracker works when "all properties" are updated:
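Sketches of these two tests follow (again, the test names and details are assumptions; the "all properties" behavior comes from raising PropertyChanged with an empty property name, as covered in Update 1):

```csharp
[TestMethod]
public void Tracker_TimeoutExpires_WaitForChangeReturnsFalse()
{
    var fake = new FakePropertiesClass();
    var tracker = new PropertyChangeTracker(fake);

    // With no update at all, WaitForChange should give up after the
    // max wait time and return false.
    Assert.IsFalse(tracker.WaitForChange(nameof(fake.LastName), 20));
}

[TestMethod]
public void Tracker_AllPropertiesUpdated_WaitForChangeReturnsTrue()
{
    var fake = new FakePropertiesClass();
    var tracker = new PropertyChangeTracker(fake);

    // Hypothetical helper on the fake that raises PropertyChanged with
    // an empty property name -- the "all properties" notification.
    fake.UpdateAllProperties();

    Assert.IsTrue(tracker.WaitForChange(nameof(fake.LastName), 100));
}
```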
This means that we only have 4 tests (instead of 12).
The purpose of the PropertyChangeTracker class is to help us test classes that are updated asynchronously (see the original article for more information). Because of this, we should have some tests that update our properties asynchronously.
Let's add those tests.
Asynchronous Helper Method
To start with, we'll create a method that will do a property update asynchronously. This is a helper method inside our test class:
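A sketch of that helper (the value assigned to "LastName" is an assumption; the delay, parameters, and ConfigureAwait(false) call match the description below):

```csharp
using System.Threading.Tasks;

// Helper method inside the test class: updates the LastName property
// after an asynchronous, non-blocking delay.
private async Task UpdateProperty(int delay, FakePropertiesClass fake)
{
    // Task.Delay pauses without blocking the current thread; with
    // ConfigureAwait(false), the code after the await may resume on
    // a different thread.
    await Task.Delay(delay).ConfigureAwait(false);
    fake.LastName = "TestValue";
}
```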
For the parameters, we pass in a delay (in milliseconds) and the fake class that we're using for testing.
The delay is used in a call to "Task.Delay()". "Task.Delay()" behaves similarly to "Thread.Sleep()" in that it pauses execution of our code. The difference is that "Task.Delay()" does not block the current thread.
We "await" the "Task.Delay()" call. This means that our code that sets the "LastName" property will not run until after the delay has completed.
One other thing to notice is the "ConfigureAwait(false)". Normally when we "await" an asynchronous method, it grabs the current context and the subsequent code runs in that same context -- this generally means it runs on the same thread.
When we use "ConfigureAwait(false)", the current context is *not* captured. So the subsequent code runs wherever the Task Scheduler sees fit. This often means that the code will run on a different thread.
Tests with Asynchronous Updates
So let's create a test that uses this code to update a property asynchronously.
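Here's a sketch of that test (the test name comes from the results discussed later in the article; the tracker's constructor and WaitForChange signature are assumptions):

```csharp
[TestMethod]
public void Tracker_SinglePropertyAsyncCompleted_ReturnsTrue()
{
    var fake = new FakePropertiesClass();
    var tracker = new PropertyChangeTracker(fake);

    // Kick off an update of LastName after a 50 millisecond delay
    // (fire-and-forget for the purposes of this test).
    var task = UpdateProperty(50, fake);

    // Wait up to 100 milliseconds -- longer than the update delay.
    bool result = tracker.WaitForChange(nameof(fake.LastName), 100);

    Assert.IsTrue(result);
}
```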
In this test, we call "UpdateProperty" with a delay of 50 milliseconds. Then we ask our tracker class to "WaitForChange" with a max wait of 100 milliseconds.
Since our delay is less than our max wait, we would expect that the "LastName" property will be updated and our "WaitForChange" method will return "true".
And that's exactly what happens.
So let's try this again with a different max wait time:
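A sketch of the first attempt at this test (same assumed API as above; note the assertion, which we'll come back to):

```csharp
[TestMethod]
public void Tracker_SinglePropertyAsyncNotCompleted_ReturnsFalse()
{
    var fake = new FakePropertiesClass();
    var tracker = new PropertyChangeTracker(fake);

    // Update LastName after a 50 millisecond delay.
    var task = UpdateProperty(50, fake);

    // Only wait 20 milliseconds -- shorter than the update delay.
    bool result = tracker.WaitForChange(nameof(fake.LastName), 20);

    // First attempt: this assertion fails, because the tracker times
    // out before the asynchronous update happens.
    Assert.IsTrue(result);
}
```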
This test still uses a 50 millisecond delay (for the "UpdateProperty" method), but it only uses a 20 millisecond max wait (for the "WaitForChange" method).
This means that our tracker class should stop waiting *before* the asynchronous update happens. And we see this in our test results: the test fails.
The problem is that "WaitForChange" returns "false", but our "Assert" is still looking for "true". In fact, we should expect a "false" value here.
Here's our corrected test:
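The corrected version flips the assertion to match the expected behavior (same assumed API as the other sketches):

```csharp
[TestMethod]
public void Tracker_SinglePropertyAsyncNotCompleted_ReturnsFalse()
{
    var fake = new FakePropertiesClass();
    var tracker = new PropertyChangeTracker(fake);

    var task = UpdateProperty(50, fake);

    bool result = tracker.WaitForChange(nameof(fake.LastName), 20);

    // The tracker times out before the update, so false is the
    // expected (and correct) result.
    Assert.IsFalse(result);
}
```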
And now all of our tests give us the expected results.
Let's take a quick look at the run times for our asynchronous tests.
The 3rd test in our list, "Tracker_SinglePropertyAsyncCompleted_ReturnsTrue", takes 65 milliseconds to run. This includes the 50 millisecond delay that we have (plus the time it takes to run the rest of the test).
The 4th test in our list, "Tracker_SinglePropertyAsyncNotCompleted_ReturnsFalse", takes 19 milliseconds to complete. This represents the 20 millisecond timeout for the "WaitForChange" method. The timeout isn't exact due to the "while" loop and time calculation that is used in the "WaitForChange" method implementation.
But we can see that these timings are close to what we would expect.
We've simplified our test suite quite a bit. We've removed redundancy from our tests and focused on what we really need to be testing for.
In addition, we added test code that updates a property asynchronously. This matches the actual use case for our PropertyChangeTracker class. I prefer tests that match real-world usage as much as possible. This gives us the best chance of finding potential bugs.
We'll keep the original (non-async) tests because it's still good to have a "sanity check" for our code.
It's good to review our code and tests. We often find tests that we don't actually need, as well as gaps where we're missing tests that we should add.