My speaking schedule has been a bit light lately, but I do have 3 events currently scheduled for June. I always have a lot of fun. Here's a picture from the Central California .NET User Group in Fresno (from April):
I also had a chance to visit my friends at the Disney .NET Developer group a few weeks ago. It was great to catch up with friends and share some cool technology.
Friday, June 5th
Denver Dev Day
Denver, CO
Event Site
Topics:
o DI Why? Getting a Grip on Dependency Injection
o IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces
o Learn to Love Lambdas (and LINQ, Too)
I'm really looking forward to this event (which is just one week away). It is my first chance to speak in Colorado, and I'm looking forward to meeting lots of new people. There's a great lineup of topics, so it should be a lot of fun.
Thursday, June 25th
DotNet Group.org
Las Vegas, NV
Meetup Event
Topic: Getting Started with Git
I've had the chance to go out to Las Vegas to speak at this group several times. It's a great group of developers, and it's been great watching this group grow over the last few years.
Saturday & Sunday, June 27th & 28th
So Cal Code Camp
San Diego, CA
Event Site
Topics:
o Unit Testing Makes Me Faster: Convincing Your Boss, Your Co-Workers, and Yourself
o DI Why? Getting a Grip on Dependency Injection
o Learn the Lingo: Design Patterns
o Clean Code: Homicidal Maniacs Read Code, Too!
o IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces
Immediately after getting back from Las Vegas, I'm heading down to San Diego for code camp. Code Camps are a great place to hear some great speakers, learn new technologies, and talk to tons of great people. The So Cal Code Camps always have plenty of space, so I usually sign up for multiple topics (if you haven't figured it out yet, I really like sharing my experiences to make things easier for other folks).
I'll be presenting a new topic: "Unit Testing Makes Me Faster". This is a subject that is near and dear to my heart, and this will be my first chance to share it in a formal way.
And of course, code camp is a great chance for *you* to share some of your experiences. It's currently 4 weeks away, so you still have time to sign up and plan your talk. (If you need some help, check out this article: Meet the Next Code Camp Speaker: You!)
I have a few events lined up for the summer. I'm really excited that I was accepted for That Conference. It's another chance to speak somewhere new, and I'm really looking forward to it (plus, waterslides!)
I hope to see you at an upcoming event. And be sure to contact me if you'd like me to come out to your event.
Happy Coding!
Friday, May 29, 2015
Thursday, May 28, 2015
Beware of "Never" and "Always"
If I hear someone say "Always do this" or "Never do that", I generally run the other way (but there are exceptions, of course ☺).
I am a big fan of best practices. The reason for this is that I spent a lot of time in my career doing things in a "could be a lot better" way and finding out the hard way that my approach had some issues that I didn't think about.
Eventually, I came to my senses and started looking at recommendations from people who have been programming way longer than I have. They've already made those mistakes and come up with better solutions. These get described with the general term "best practices".
A Good Place to Start
I treat advice that I get from other developers (and the developer gurus) as a good place to start. This means that I will generally follow the advice unless I come up with a really good reason not to.
Here's an example that I like to show in my presentations:
"Hikers and bikers move to the side of the road when a vehicle approaches." This sounds like a great piece of advice. We don't want to get run over. But we also need to consider our surroundings.
But Don't Follow Blindly
Sometimes when we look at our surroundings, we find out that following the best practice may not be a good idea after all:
So even though following this advice is generally a good idea, it would actually cause us more problems in this particular environment. I'd prefer to take my chances with the cars rather than the alligators.
BTW, I found out that this picture is from Shark Valley -- part of the Everglades National Park in Florida. (And I'd love a better quality image of this if anyone has one.)
User Input is Evil
Here's something we all need to be aware of:
Fact: All input is evil.
I don't think anyone disagrees with this. Anyone who has an open form on a website knows this first hand. And if you don't want to believe me, just ask little Bobby Tables (xkcd: Exploits of a Mom):
And this leads us to a good piece of advice:
Best Practice: Verify input before using it.
It's really hard to argue with this. When we're talking about input, we're not just referring to things that users type in; we're also referring to input that we get from other developers when we create APIs or library methods.
Implementing a Best Practice...
I've spent several articles exploring guard clauses on a particular method. This is the method that we've been looking at when exploring TDD and Conway's Game of Life. Here's the original method:
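Roughly, the method looks like this, based on the standard Game of Life rules and the names used in these articles (the containing class name "LifeRules" is an assumption):

public enum CellState { Alive, Dead }

public static class LifeRules
{
    public static CellState GetNewState(CellState currentState, int liveNeighbors)
    {
        switch (currentState)
        {
            case CellState.Alive:
                // A live cell with 2 or 3 live neighbors stays alive
                if (liveNeighbors == 2 || liveNeighbors == 3)
                    return CellState.Alive;
                break;
            case CellState.Dead:
                // A dead cell with exactly 3 live neighbors comes to life
                if (liveNeighbors == 3)
                    return CellState.Alive;
                break;
        }
        return CellState.Dead;
    }
}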
We don't validate the parameters. And we saw why this was a problem in the last article: Testing for Exceptions with NUnit. It's possible to pass in an invalid "currentState" parameter and get a seemingly-valid value out of the method.
At the same time, the "liveNeighbors" parameter is really only valid for values between 0 and 8 -- the number of neighbors that a square-shaped item can have in a grid.
So, we spent some time adding guard clauses at the top of the method:
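Based on the earlier article, the guard clauses look something like this (the "Invalid Cell State" message shows up again in the testing article below; the second message is an assumption):

if (!Enum.IsDefined(typeof(CellState), currentState))
    throw new ArgumentOutOfRangeException("currentState", "Invalid Cell State");
if (liveNeighbors < 0 || liveNeighbors > 8)
    throw new ArgumentOutOfRangeException("liveNeighbors", "Invalid Number of Live Neighbors");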
And we also saw a couple different ways to write unit tests to make sure these guard clauses work as expected: both using NUnit and letting Smart Unit Tests check them.
But after going through the exercise of adding the guard clauses and creating tests for them, I promptly removed them.
...And Then Ignoring It
Yes, I just admitted to actively ignoring a best practice of validating input. But there is a specific reason for that. In the application where this method is used, speed is important. This method gets called thousands of times for each generation when we run our Conway's Game of Life application. And we could end up running through hundreds of generations during that process. That means this method gets called hundreds of thousands of times.
In fact, in an effort to parallelize the method calls, I ran through several experiments to see what is most efficient: Optimizing Conway's Game of Life.
Here's the method (and level of parallelism) that worked best for this scenario:
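A sketch of that approach, parallelizing the outer loop over the rows (the grid representation and the neighbor-counting helper are assumptions here):

using System.Threading.Tasks;

static CellState[,] ProcessGeneration(CellState[,] current, int rows, int columns)
{
    var next = new CellState[rows, columns];
    // Parallelize the outer loop over rows; the inner column loop stays sequential
    Parallel.For(0, rows, i =>
    {
        for (int j = 0; j < columns; j++)
        {
            int liveNeighbors = CountLiveNeighbors(current, i, j); // hypothetical helper
            next[i, j] = LifeRules.GetNewState(current[i, j], liveNeighbors);
        }
    });
    return next;
}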
The "GetNewState" method (the one we've been looking at) is called in a nested "for" loop. And because this method is called so often, we need it to run quickly.
Guard Clauses Slow Things Down
To keep this method running quickly, we want to trim as much code as possible. The guard clauses may not seem like they add a lot of work (just 2 "if" conditions), but we see a big difference when we actually check some metrics.
Here's the performance of our method *without* the guard clauses (the same as we saw in the optimization article):
The 10,000 iterations is the number of times the entire grid is updated. Since we have a 25x70 grid, each iteration runs this method 1,750 times. And we can see our result is about 1-1/2 seconds.
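For reference, a simple way to gather this kind of timing is a Stopwatch around the generation loop, using the ProcessGeneration sketch above (this is just an illustration; the numbers here came from the article's own test run):

using System.Diagnostics;

var grid = new CellState[25, 70];
var timer = Stopwatch.StartNew();
for (int iteration = 0; iteration < 10000; iteration++)
{
    grid = ProcessGeneration(grid, 25, 70);   // 1,750 calls to GetNewState per iteration
}
timer.Stop();
Console.WriteLine("10,000 iterations: " + timer.Elapsed);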
Here's the performance with our guard clauses added:
This is 4 times slower -- definitely not acceptable when speed is an important feature.
Have a Good Reason
I recommend best practices to developers as much as possible. What I've found is that they are good advice to follow the vast majority of the time (probably in the 95% range). It's really those exceptional cases (like when alligators are resting on the side of the road) where we decide to ignore the practice.
But we're not really ignoring the best practice at that point. "Ignore" implies that we don't care about it. We do care about it, but we've determined that it's not appropriate for this particular scenario. And we also understand the consequences of our decision.
In our example, I understand the consequence of not having the guard clauses: it is possible for someone to call this method with invalid parameters, and the method will return an invalid response. But *in this scenario*, this method is not published to the outside world. Instead, it is called by another class in the application (which I have full control over). And the way that class is set up, I know that the method will not be called with invalid parameters.
So I understand the risk, and I'm willing to take that on because performance is a very high priority here.
Best Practices are Awesome
I really like learning about best practices. I want to understand the problems that other developers have run into. I want to understand the solutions that they have come up with. And I want to steal as much of that experience as possible so that I can skip over the painful part of the process.
And that's why I like to show best practices to other folks, whether it has to do with abstraction, dependency injection, design patterns, or general coding style. They can save us all sorts of time. But we shouldn't follow them blindly. We should take time to understand the benefits and consequences. This way, we know that we're using the right tool for the job.
So rather than thinking of these as "always" or "never" items, we need to think of them as a great place to start. Most of the time, they are the right answer. But we have the option of choosing another path if that's what makes the most sense.
Happy Coding!
Tuesday, May 26, 2015
Testing for Exceptions with NUnit
As mentioned last time, I've been exploring NUnit a bit more, and one feature that I really like is how it handles testing for exceptions -- that is, creating tests for scenarios where we expect an exception to be thrown.
To take a look at this, we'll first see why we might want to deal with exceptions in our tests. Then we'll look at the way we would handle this with MSTest (and see a few shortcomings). Then we'll see how NUnit makes things much easier.
To see related articles, including the method creation using TDD, MSTest, and NUnit, check out Coding Practice with Conway's Game of Life.
Needing to Check for Exceptions
We used TDD to create the method to process the rules for Conway's Game of Life. Here's the method that we ended up with:
This method passes all of our tests, and it works fine in the application. But when we ran it through Smart Unit Tests (from the Visual Studio 2015 Preview), we found a problem. Here are the results of that test run:
This shows the parameters that were passed in to our method to test it. We want to pay attention to the "currentState" parameter. This is an enum called "CellState" which has 2 values: Alive and Dead.
But we can see that in addition to these values, Smart Unit Tests also created a test with a value of "2". This seems strange at first, but we need to remember that enums are simply integers underneath. That means that "2" is a valid value (meaning it will compile), but it's technically out of range.
Adding the Guard Clauses
To fix this problem, we added some guard clauses (full article). The guard clauses do some range checking on our parameters, and throw exceptions if we run into an issue:
This makes sure that our CellState enum is within the range of our valid values (using the "IsDefined" function), and we also check the "liveNeighbors" parameter since we expect the values will be between 0 and 8.
If we hit one of these guard clauses, we expect that an ArgumentOutOfRangeException will be thrown. In order to have good coverage in our unit tests, we need to test that functionality.
Testing for Exceptions with MSTest
We'll create a couple of tests using MSTest first. This will show us the problems that we run into when we try to test for exceptions.
In MSTest, we have an attribute that we can use: "ExpectedException". When we put this attribute on a test, then the test will only pass if the code throws that exception. If the code does not throw that exception, then we get a failing test.
Here are a couple of tests for our guard clauses:
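Inside the MSTest test class (using the Microsoft.VisualStudio.TestTools.UnitTesting namespace), the two tests look something like this -- the test names are illustrative:

[TestMethod]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void GetNewState_WithInvalidCellState_ThrowsException()
{
    var currentState = (CellState)2;   // out of range for the enum
    int liveNeighbors = 3;

    LifeRules.GetNewState(currentState, liveNeighbors);
}

[TestMethod]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void GetNewState_WithInvalidLiveNeighbors_ThrowsException()
{
    var currentState = CellState.Alive;
    int liveNeighbors = 9;   // valid values are 0 through 8
    
    LifeRules.GetNewState(currentState, liveNeighbors);
}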
We can see that both of these tests expect to come across an ArgumentOutOfRangeException: one for the "currentState" (our first parameter), and one for the "liveNeighbors" (our second parameter).
The Problem
But we have a problem with this code. Even though we're able to verify that an ArgumentOutOfRangeException was thrown, we aren't able to verify *which* ArgumentOutOfRangeException was thrown. If our code is incorrect, it could be that both of these throw exceptions based on the first guard clause (which would be a problem).
So we really need to be able to get more details out of the exception in order to adequately test this scenario.
Manual Workaround
It would be nice if there was an automatic way of doing this, but unfortunately, the ExpectedException attribute that we're using is too limited for what we need.
One solution is to skip the attribute and code things up manually. Here's what that code looks like:
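A sketch of that manual pattern (the message check assumes the guard clause message contains "Invalid Cell State"):

[TestMethod]
public void GetNewState_WithInvalidCellState_ThrowsExpectedException()
{
    var currentState = (CellState)2;
    int liveNeighbors = 3;

    try
    {
        LifeRules.GetNewState(currentState, liveNeighbors);
        Assert.Fail("Expected ArgumentOutOfRangeException was not thrown");
    }
    catch (ArgumentOutOfRangeException ex)
    {
        Assert.IsTrue(ex.Message.Contains("Invalid Cell State"));
    }
}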
Instead of using the attribute, we wrap our code in a try/catch block. This gives us access to the actual exception being thrown. So in our catch block, we can verify that it is the correct one. In this case, we compare the message to what we expect.
Notice the "Assert.Fail()" call that we have in the try block. This is very important when we do a test like this. If a test does not have any failing asserts (and does not throw any exceptions), then by default, it is a passing test.
Since we expect the "GetNewState" call will result in an exception, we need to explicitly fail the test if the exception is not thrown. In this test, we do not expect that we will ever hit the "Assert.Fail()" line of code (since we expect the exception). But if we do hit that line of code for some reason, our test will fail, and we'll know that something went wrong.
This pattern works, and I've used it in my own unit tests. But it is a bit tedious to write. It would be much nicer if we could simplify our tests when we're checking for exceptions.
It turns out that NUnit allows us to simplify our code, making the tests easier to write and maintain.
Testing for Exceptions with NUnit
NUnit has an extensive library to make testing easier. We'll look at 2 methods today: Throws and Catch. For more information, you can check out the documentation for Exception Asserts.
As a side note, NUnit does also have an attribute-based way of checking exceptions which is more extensive than MSTest's, but Exception Asserts are the preferred way of doing things today.
Let's walk through this test:
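A sketch of the NUnit version (using the NUnit.Framework namespace and NUnit 2.x syntax from the time):

[Test]
public void GetNewState_WithInvalidCellState_ThrowsExpectedException()
{
    var currentState = (CellState)2;   // out of range
    int liveNeighbors = 3;

    var ex = Assert.Throws<ArgumentOutOfRangeException>(
        () => LifeRules.GetNewState(currentState, liveNeighbors));

    Assert.IsTrue(ex.Message.Contains("Invalid Cell State"));
}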
First, we set up our variables that we use for our parameters (and we can see that our CellState parameter is out of range).
Next, we use the "Assert.Throws" method. This takes a generic parameter for the type of exception that we're expecting. One thing that is interesting about this method is that it returns that exception (which we save in the "ex" variable). This means that we can inspect the exception later.
The "Assert.Throws" method has a parameter that is a "TestDelegate" type. This is a delegate that takes no parameters. And since we know how to use lambda expressions as delegates, we can just put our code right in here.
In our case, we simply call the "GetNewState" method with our test parameters. This is the block of code that we expect to throw an exception.
Finally, we use an Assert like we did in MSTest to check to see if this is the particular ArgumentOutOfRangeException that we expected here.
Advantages
There are a few advantages here. First, we can get a reference to the exception that was thrown (through the return value). This gives us something that we can inspect further.
Second, we limit our check for an exception to a particular line of code (in this case the call to "GetNewState"). If we wanted, we could use a multi-statement lambda expression to wrap more code. We can think of this code as the code that we would wrap in the "try" block in our manual check.
This means that if an exception is thrown somewhere else in the test method (even if it is an ArgumentOutOfRangeException), it will result in a test failure. We will only get a passing test if the exception is thrown in the block of code that is part of the "Assert.Throws" method call.
Best of all, we don't need to manually fail the test with "Assert.Fail". This is one less piece of code to worry about. Overall, our readability is better, and that means we'll be more likely to write and maintain these types of tests.
"Throws" vs. "Catch"
NUnit has another exception assert method called "Catch". "Throws" looks for a specific exception type (in our case "ArgumentOutOfRangeException").
"Catch" will look for a specific exception type *or* any of its descendants. Let's look at another example:
Notice that "Assert.Catch" has a generic type parameter of "ArgumentException". This means that this will work on any code that throws an ArgumentException or any exception that descends from ArgumentException -- including ArgumentOutOfRangeException and ArgumentNullException.
Since our code throws an ArgumentOutOfRangeException, this test still passes.
Prefer "Throws" Over "Catch"
As general advice, I would recommend preferring "Throws" over "Catch". The reason for this is that we should always be looking for as specific of an exception as possible. This is true when we write our exception handling in our code, and it should also be true when we're writing tests to verify exceptions.
Of course there are cases where we may want to use "Catch", but we should consider the implications before doing so.
Other Helper Classes
NUnit has a number of helper classes to make assertions easier. Check out the category list on the sidebar of the Assertions documentation.
I still need to dig into these classes, but they look very useful. As an example, let's look at the StringAssert class. This gives us a chance to rewrite the final assertion in our test:
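The rewritten assertion is a one-liner:

StringAssert.Contains("Invalid Cell State", ex.Message);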
So instead of using "Assert.IsTrue" and then having the parse the contents of the parameter, we can use the "StringAssert.Contains" method. This is a bit more readable since we know right away that we're looking to see if a string contains a particular value.
And the method parameters follow the standard "expected/actual" pattern that we have in other assertions. So we're saying that we expect to find "Invalid Cell State" somewhere in our actual exception message string.
Wrap Up
So we've seen that NUnit has some classes and methods that make it much easier to check for exceptions in our unit tests. And since we can plug NUnit into our Visual Studio Test Explorer, we can get the result of those tests right in our IDE with everything else.
This is just one way that NUnit can make our tests easier to read and maintain. There are many other ways as well. I'm just getting started on this journey, and I'll be sure to keep you up-to-date on other interesting and useful things that I find.
Happy Coding!
Integrating NUnit into Visual Studio Test Explorer
So, I've been inspired to explore NUnit a bit more as a testing framework. In a previous article, we saw how NUnit allows us to easily parameterize our tests. But there are a couple of things that have kept me leaning toward MSTest, primarily the integration with Visual Studio.
It turns out that NUnit *can* integrate with the Visual Studio Test Explorer. It's not quite as complete as MSTest, but it's definitely a lot better than using the stand-alone test runner.
The reason I'm getting back into this is because there are some compelling features of NUnit that I want to start using regularly. In an upcoming article, I'll talk about testing for exceptions, which is much better in NUnit than the current version of MSTest. Check for other testing articles here: Coding Practice with Conway's Game of Life.
[Update 11/23/2015: The steps shown in this article are temporarily broken due to a version mismatch between the NUnit framework and the NUnit test adapter. This will be fixed once the 3.0 version of the test adapter is released. In the meantime, you can check out this article for a workaround: Fixing an NUnit Version Mismatch.]
[Update 3/4/2016: A demonstration of installing NUnit in Visual Studio 2015 (including dealing with the current mismatch) is available by watching a few minutes of this TDD video: TDD Basics @ 3:30.]
[Update 04/20/2016: The 3.0 version of the Test Adapter has just been released. When using NUnit 3.0, be sure to use the "NUnit3TestAdapter" package from NuGet. When using NUnit 2.x, use the "NUnitTestAdapter" package from NuGet. More information here: Integrating NUnit into Visual Studio - Update for NUnit 3]
Integrating NUnit into Test Explorer
One of the things that I like about MSTest is the integration with Visual Studio. When I initially looked at NUnit, I was using the separate test runner application.
This shows all of our parameterized tests and the results. (As a reminder of how to install the NUnit test runner, see the prior article: Parameterized Tests with NUnit.)
We got the test runner from NuGet. But it turns out that there is another NuGet package that we can use to integrate NUnit with the Visual Studio Test Explorer.
For this, we add a package to the test project (you'll want to add this to each of your test projects that use NUnit):
The "NUnit Test Adapter for VS2012, VS2013 and VS2015" will add the Test Explorer support that we're looking for.
After rebuilding our solution, we see the tests show up in the Test Explorer:
That's pretty cool.
Note: For this example, I unloaded the MSTest project so that we're only seeing the NUnit tests. If we have MSTest projects loaded as well, then we would see all of the tests mixed together.
Parameterized Test Results
The results of our parameterized tests are shown here. We only have 5 actual test methods, but we have 18 passing tests.
If we look at one of our parameterized tests:
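For example, one of the rule tests with two cases might look like this (the specific cases and test name are illustrative):

[TestCase(CellState.Alive, 2)]
[TestCase(CellState.Alive, 3)]
public void GetNewState_LiveCellWith2or3LiveNeighbors_StaysAlive(CellState currentState, int liveNeighbors)
{
    var newState = LifeRules.GetNewState(currentState, liveNeighbors);
    Assert.AreEqual(CellState.Alive, newState);
}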
We'll see that we have a test result for each of the test cases:
We don't get the same grouping as we get with the custom test runner, but the important thing is that we get all of the results.
A Few Shortcomings
Not everything is quite perfect here. And that's okay once we understand it. (And I'll be looking into this further to see if there are some ways around it.)
One shortcoming has to do with the way that CodeLens works with these tests. Here's what we see when we turn on CodeLens and take a look at our production method:
Notice where it says "1/1 passing". All 5 of our tests reference this method. But CodeLens only sees one of the tests. And if we look at the details, we can see which test it is.
Let's look at that test:
Notice that this is *not* a parameterized test. This has the "[Test]" attribute rather than "[TestCase]". And we can see the green circle/check mark, which tells us that this test currently passes.
But what if we look at one of the other tests:
Here we have 2 test cases (the same that we saw above). And notice that there is no green circle/check mark here. So, CodeLens is not recognizing this as a test.
This isn't a huge deal to me because I don't normally use CodeLens (I find it more distracting than helpful). But it has another implication: my favorite button:
The "Run Tests After Build" button is a toggle button in the Test Explorer (sorry, this is not in all editions of Visual Studio; I'm using Visual Studio 2013 Ultimate). When it is clicked down, affected unit tests are automatically re-run after every build.
When I say "affected unit tests", these are tests that are impacted by changes to the code or changes to the tests. The great thing about this is that I get immediate feedback each time I build my application.
The problem is that the tests are not getting associated with the methods that they are testing. As we saw above, only 1 of the tests is actually associated with our GetNewState method. So if that method is changed, only 1 test is affected (and the other tests are not re-run).
I've noticed this behavior as I've been working with NUnit in some other areas. This means that I've had to manually re-run tests after I've made code changes. It's not a huge deal, but it takes away a piece of functionality that I really liked.
I'm looking to see if there is a solution to this (such as adding another attribute to my tests). And hopefully this will be addressed in future versions of the NUnit Test Adapter (I'm using version 2.0.0 from April 2015) -- and it may not even be a problem with the Test Adapter, I'm just guessing.
Wrap Up
Big plus: NUnit tests can be integrated in the Visual Studio Test Explorer. Bit of a minus: there are a few quirks with CodeLens and "Run Tests After Build".
But I'm willing to deal with the shortcomings as I dig into NUnit a bit more. Next time, I'll show how NUnit lets us easily test for specific exceptions in specific places in our code. Testing for expected exceptions is a big part of my testing process, so I'm very interested in this. And what I've seen so far is really cool.
Happy Coding!
Monday, May 18, 2015
More Fun with DateTime: Scheduling Items with DateTime or DateTimeOffset?
Last time, we looked at the DateTimeKind property of DateTime. What we saw is that there are some challenges when DateTimeKind is "Unspecified".
Matt Johnson (expert in all things DateTime related) left a possible solution: converting the output of the sunset provider to DateTimeOffset.
"Another way to handle this would be to change the methods of your ISunsetProvider interface to return DateTimeOffset instead of DateTime. In the implementation, you would return new DateTimeOffset(solarTimes.Sunset). The constructor will treat Unspecified as Local, and assign the correct offset."Matt often recommends using DateTimeOffset, so I looked into it further and swapped the code over to use DateTimeOffset. This has pros and cons (and some unexpected behavior).
I wish this were easy.
DateTimeOffset: What is it Good For?
The primary reason why we want to use DateTimeOffset is that it represents a point in time. It does this by storing the time with an offset from UTC (this is a number like "-07:00", not a time zone like "PST"). This means that DateTimeOffset is not affected by Daylight Saving Time.
Let's compare DateTime and DateTimeOffset by looking at some times. The fun part about this is that we get the "same" time twice when using local time as DateTime.
The fun part of using local time is that when someone says it is "01:25:00" on November 1st, we get to ask "which one?"
Before Daylight Saving Time Ends (11/01/2015 09:25 UTC)
11/01/2015 01:25:00 (Local Time - Pacific)
11/01/2015 01:25:00 (DateTimeOffset -07:00)
After Daylight Saving Time Ends (11/01/2015 10:25 UTC)
11/01/2015 01:25:00 (Local Time - Pacific)
11/01/2015 02:25:00 (DateTimeOffset -07:00)
But if we use DateTimeOffset, we don't have to worry about that. The time is not representative of any time zone; it simply shows how many hours it differs from UTC.
This is why DateTimeOffset is often recommended. It represents a specific point in time unambiguously.
For more information, see the MSDN article Choosing Between DateTime, DateTimeOffset, and TimeZoneInfo, which includes the following:
Too bad real life is a bit trickier.
Problems with Scheduling
So, I did go through the sunset provider library and swap things out for DateTimeOffset. And I also set up the schedule items to use DateTimeOffset.
To see the updates, check out the following commit on GitHub: Switched to use DateTimeOffset.
This makes perfect sense for the sunset and sunrise information. The sunset for a particular date represents a particular point in time. But the other schedule items are a bit trickier.
Application Behavior
To see the problem, let's look at a schedule item. Here's the JSON representation from our file:
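Something along these lines (the property names are illustrative; the time and the "turn the lamp off" action come from the description below):

{
  "ScheduleTime": "2015-05-01T17:00:00",
  "Device": "LivingRoomLamp",
  "Command": "Off"
}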
This shows the scheduled time as May 1, 2015 at 5:00 p.m. (local time). When the application starts, it rolls this forward until it is in the future. So, for example, if it is currently May 18th at 9:47 p.m. (which is the time I'm writing this), this schedule item will end up as May 19th at 5:00 p.m. -- the next "5:00 p.m."
At the same time, as the application runs, when we get to 5:00 p.m., this item will be executed (in this case a lamp will be turned off) and then the item is rolled to the next day.
And this all works fine until we get to Daylight Saving Time. But before we get there, let's enjoy this comic from XKCD:
The Problem with Daylight Saving Time
To see the problem, let's put together a little bit of code (just a console application). We'll pretend that it's the day before Daylight Saving Time ends this year.
Rather than picking the end of Daylight Saving Time ourselves, we'll ask the local time zone information about that. This block of code gives us a UTC time that represents when DST ends this year (which happens to be November 1st at 2:00 a.m.).
Now we'll create two variables, one DateTime and one DateTimeOffset:
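In a small console application, that might look like this:

DateTime scheduleTime = new DateTime(2015, 10, 31, 17, 0, 0, DateTimeKind.Local);
DateTimeOffset scheduleTimeOffset = new DateTimeOffset(scheduleTime);

Console.WriteLine(scheduleTime.ToString("s"));         // 2015-10-31T17:00:00
Console.WriteLine(scheduleTimeOffset.ToString("s"));   // 2015-10-31T17:00:00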
These represent the same time. And when we look at the output, we get no surprises:
It's 5:00 p.m. on October 31st.
But now we'll add 1 day to these (and cross over the DST boundary):
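Continuing the sketch:

scheduleTime = scheduleTime.AddDays(1);
scheduleTimeOffset = scheduleTimeOffset.AddDays(1);

Console.WriteLine(scheduleTime.ToString("s"));         // 2015-11-01T17:00:00
Console.WriteLine(scheduleTimeOffset.ToString("s"));   // 2015-11-01T17:00:00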
Here's the output:
Looks okay. But it's a bit deceptive.
These times are actually different! Don't believe me? Let's change the output format from "s" (sortable format) to "o" (round-trip format):
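With the round-trip format, the offsets show up (the values in the comments assume a Pacific local time zone, as in the article):

Console.WriteLine(scheduleTime.ToString("o"));         // 2015-11-01T17:00:00.0000000-08:00
Console.WriteLine(scheduleTimeOffset.ToString("o"));   // 2015-11-01T17:00:00.0000000-07:00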
Now we see what we really have:
The original times are fine, but the updated times are an hour apart! The first represents 5:00 p.m. local time, and the second represents 4:00 p.m. local time.
This is a problem for our application. The application regularly rolls the times forward by one day. That means we'll run into this problem every time Daylight Saving Time starts or ends.
What's the Answer?
We've got a problem. How do we solve it? Well, I haven't figured that out yet. The best solution is to eliminate Daylight Saving Time (or move to Arizona). But that's not practical at the moment.
A simple (yet kludgy) solution is to reload the schedule items from file whenever DST starts or ends. This would force the times into the appropriate offset (since there is no offset specified in the JSON, it is treated as "local" time). This would work, but it does smell a bit funny.
Maybe some items need to be DateTime while others are DateTimeOffset. I really don't like the idea of mixing these two. DateTimeOffset makes perfect sense when we're talking about the sunset and sunrise times -- these are discrete points in time. But DateTime makes sense for other times that are rescheduled each day at the same time.
I'll need to put a bit more thought into this. What's important is that we've identified the issue. I still have a few more months to fix it (until October 31st actually), so there's no rush. Feel free to leave any suggestions in the comments.
Happy Coding!
Wednesday, May 6, 2015
Fun with DateTime: DateTimeKind.Unspecified
Last time, we changed out the sunset provider of our home automation software. Since I was trying to concentrate on something else at the time, I copy/pasta'd a code sample from documentation into the application. The code worked, but it felt like there was an easier way to do things.
I took a closer look and did find an easier way. But I also ran into the curiosity of how things behave when we have an unspecified kind on a date. Let's take a look.
Simplifying Code
The code we start with isn't that complex.
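It's shaped roughly like this (a sketch only -- the coordinates, the time zone id, and the exact SolarTimes constructor overload are assumptions on my part):

using System;
using Innovative.SolarCalculator;

public class SolarCalculator
{
    public DateTime GetSunset(DateTime date)
    {
        var solarTimes = new SolarTimes(date, 34.05, -118.25);
        var timeZone = TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");
        return TimeZoneInfo.ConvertTime(solarTimes.Sunset, timeZone);
    }

    public DateTime GetSunrise(DateTime date)
    {
        var solarTimes = new SolarTimes(date, 34.05, -118.25);
        var timeZone = TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");
        return TimeZoneInfo.ConvertTime(solarTimes.Sunrise, timeZone);
    }
}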
This uses the Solar Calculator package from NuGet. The second line (in each method) creates the SolarTimes object that we need by passing in the date and latitude/longitude. The other 2 lines of code convert that date/time to a specific timezone.
It turns out that we don't need a specific timezone for our application. Instead we can use the local time. This makes the code much simpler. Here's the updated code:
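Again as a sketch (same assumptions as above):

using System;
using Innovative.SolarCalculator;

public class SolarCalculatorSunsetProvider
{
    public DateTime GetSunset(DateTime date)
    {
        var solarTimes = new SolarTimes(date, 34.05, -118.25);
        return solarTimes.Sunset;
    }

    public DateTime GetSunrise(DateTime date)
    {
        var solarTimes = new SolarTimes(date, 34.05, -118.25);
        return solarTimes.Sunrise;
    }
}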
Instead of converting the "Sunset" and "Sunrise" properties to a particular timezone, we just take the values as-is. And this gives us the output that we expect:
This code works because the "Sunset" property represents local time (sort of). The DateTime structure has a "Kind" property. We'll need to explore this a little further.
Note: I also changed the name of the class. The package I used was also called "SolarCalculator", so there could have been some nasty side-effects. It makes sense to append "SunsetProvider" since that is how this is used in the rest of the application.
DateTimeKind
The "Kind" property of a DateTime is an enum (DateTimeKind) that has these values (from the MSDN documentation):
This means that we can mark a DateTime as being "Local" or "Utc". That should make things really easy.
So, let's see what "Sunset" is:
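A quick check along these lines (the coordinates and the exact value are illustrative):

var solarTimes = new SolarTimes(new DateTime(2015, 5, 6), 34.05, -118.25);
Console.WriteLine("{0} ({1})", solarTimes.Sunset, solarTimes.Sunset.Kind);
// e.g. 5/6/2015 7:38:58 PM (Unspecified)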
Oops, it looks like the "Kind" is "Unspecified". But if we look at the value itself, we can see that it represents the local time for sunset.
Changing the DateTimeKind
Even though this has the right value, I feel like things would be better if we could change the "Unspecified" to "Local". And this is where we see some of the weirdness when we have an "Unspecified" kind.
DateTime has a "ToLocalTime" method (from MSDN documentation):
This looks pretty promising. After running this method, the "Kind" will be "Local". But this method assumes that the "Unspecified" time is UTC. So, if we use it, we end up with an unexpected value:
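Continuing the sketch from above:

var localSunset = solarTimes.Sunset.ToLocalTime();
Console.WriteLine("{0} ({1})", localSunset, localSunset.Kind);
// e.g. 5/6/2015 12:38:58 PM (Local) -- 7 hours off, because 7:38 p.m. was treated as UTC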
That's not what we want; sunset is not just after noon. And this value is completely wrong (since it now represents a local time).
Let's try again. DateTime also has a "ToUniversalTime" method (from MSDN documentation):
This method makes the opposite assumption. This method assumes that the "Unspecified" time is Local. So, if we use it, we end up with an unexpected output:
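Again, continuing the sketch:

var utcSunset = solarTimes.Sunset.ToUniversalTime();
Console.WriteLine("{0} ({1})", utcSunset, utcSunset.Kind);
// e.g. 5/7/2015 2:38:58 AM (Utc) -- the right instant, just displayed in UTC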
This is actually a bit closer. The value is correct. Yes, I know that sunset is not at 2:30 in the morning, but the Kind for this is "Utc". So the value is correct, even though the display is not what we want.
A Smelly Solution
The value is correct (2:38 a.m. UTC is 7:38 p.m. Local), so let's see what happens when we combine these methods:
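In sketch form:

var sunset = solarTimes.Sunset.ToUniversalTime().ToLocalTime();
Console.WriteLine("{0} ({1})", sunset, sunset.Kind);
// e.g. 5/6/2015 7:38:58 PM (Local)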
Because the Kind is no longer "Unspecified" when we call "ToLocalTime", we get the output that we expect. Let's look at the progression:
- Sunset = 7:38:58 p.m. (Unspecified)
- ToUniversalTime() = 2:38:58 a.m. (UTC)
- ToLocalTime() = 7:38:58 p.m. (Local)
But this just smells bad to me. Luckily, there is another option.
Setting DateTimeKind Directly
Our other option is to set the DateTimeKind directly. Well, sort of directly. The "Kind" property is read-only, but we do have a method that we can use:
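That method is the static DateTime.SpecifyKind:

public static DateTime SpecifyKind(DateTime value, DateTimeKind kind)

It returns a new DateTime with the same value but whatever Kind we ask for -- no conversion is performed.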
This allows us to do the following:
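Continuing the earlier sketch:

var sunset = DateTime.SpecifyKind(solarTimes.Sunset, DateTimeKind.Local);
Console.WriteLine("{0} ({1})", sunset, sunset.Kind);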
This returns a new DateTime object that has the Kind set to "Local". And it has the value that we want as well:
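With the example values from the sketch, that prints:

5/6/2015 7:38:58 PM (Local)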
If we really want to avoid having an "Unspecified" date/time, we can set it this way. And this method is much less smelly than converting a time to UTC and then converting it back to Local.
Wrap Up
So we've seen that when we deal with DateTimeKind, things get interesting if the value is "Unspecified". The "ToLocalTime" method assumes that the value represented is UTC, so it does a conversion. The "ToUniversalTime" method assumes that the value represented is Local, so it does a conversion. If we're not careful, we can end up with an incorrect value.
Working with DateTime is not as easy as it looks -- and we're not even dealing with any crazy timezone rules here. For a deeper dive, be sure to check out Matt Johnson's material, including the Pluralsight course Date and Time Fundamentals. (And I'm sure Matt has some specifics to add to this discussion.)
I like to simplify code wherever possible. Since this application doesn't need to deal with specific time zones, we can remove that code and deal with the local time. And to figure this out, I got to explore the DateTime object a bit more and learn how to use the "Kind" property. Learning something new is always fun.
This approach isn't the right solution every time, but it works just fine here. Sometimes we do care about specific time zones, but that's not needed in our case. The updated code is easier to approach, and I prefer to keep things simple where practical.
Happy Coding!