I am a book person. I realize that most people aren't. Everyone has a preferred way of getting new information into their head: it could be books, videos, live demonstrations, classroom learning, or one-on-one mentoring.
I've read a ton of technical books over the years (well, I don't know if it is literally a ton, but several hundred pounds for sure). For this year, I gave myself a goal of reading one technical book a month. Some of the early books have been a bit on the short side, so I've managed to read a few more up to this point. Here's what I've read so far this year and what I've got "in the stack" for the next several months:
Completed 2013 Reading (with Reviews)
Async in C# 5.0 by Alex Davies
Jeremy's Review (Jan 2013) / Amazon Link
Working Effectively with Legacy Code by Michael C. Feathers
Jeremy's Review (Feb 2013) / Amazon Link
Test-Driven Development by Example by Kent Beck
Jeremy's Review (Mar 2013) / Amazon Link
Javascript: The Good Parts by Douglas Crockford
Jeremy's Review (Mar 2013) / Amazon Link
The Agile Samurai: How Agile Masters Deliver Great Software by Jonathan Rasmusson
Jeremy's Review (Mar 2013) / Amazon Link
In Progress
Building Windows 8 Apps with C# and XAML by Jeremy Likness
Amazon Link
In the Stack
C# Smorgasbord by Filip Ekberg
Amazon Link
Don't Make Me Think: A Common Sense Approach to Web Usability by Steve Krug
Amazon Link
Refactoring: Improving the Design of Existing Code by Martin Fowler
Amazon Link
Designing with the Mind in Mind: Simple Guide to Understanding User Interface Design Rules by Jeff Johnson
Amazon Link
Peopleware: Productive Projects and Teams by Tom DeMarco & Timothy Lister
Amazon Link
Refactoring to Patterns by Joshua Kerievsky
Amazon Link
Current Themes
As you can probably tell, I've got a couple of themes with this list. First is Best Practices. I have had a lot of successful projects in the past as well as a number of not-so-successful projects. I've been analyzing the differences based on my own experiences. What I've found is that many of the conclusions that I've reached through experience are itemized in several of these classic books, such as Clean Code which I read last year. (And as I've noted before, there are some books that I really should have read quite some time ago.)
Another theme is usability and user interaction design. This is an area that has always interested me. I don't have the natural eye that many designers (and a few developers) have. I make usable (but not brilliantly usable) applications, and I'm looking to improve that skill.
The Heap
I'm a just-in-time learner on many topics, so I'll bump things out of the stack if there's something that I need to pick up more quickly (or if my whims change). In addition to the current Stack of books, I've also got a larger Heap. These are the books that I've been collecting, but haven't gotten to yet.
The Heap has some more user interface design books (such as About Face: The Essentials of User Interface Design), as well as a number of books about other languages (such as Haskell, Lisp, and a few others that use Scheme or Eiffel). I'll probably start out with Seven Languages in Seven Weeks to get a taste before diving deeper.
Wrap Up
Constant learning is essential in the development world. There's too much out there to be able to master everything, but we can get a taste for a lot of different techniques and technologies. And we can take a deep dive to reach expert level in one or two.
Whatever your mode of learning, keep doing it. If you've got any book suggestions, let me know. The Heap is constantly growing, and I'm always looking for more.
Happy Coding!
Friday, March 29, 2013
Thursday, March 28, 2013
Book Review: The Agile Samurai
I'm continuing to burn through my reading list this year. I recently finished reading The Agile Samurai: How Agile Masters Deliver Great Software by Jonathan Rasmusson (Amazon link).
Rasmusson provides a very good introduction to the Agile process and how to implement it in your projects. The approach is friendly and conversational -- which ends up underlining the point that much of Agile is about communication. The book itself is filled with diagrams that reinforce the points at hand and each section ends with a conversation with the Master Sensei to review points and answer questions that the "aspiring warrior" may have.
Ultimately, this leads to a set of reinforced principles that describe how to be successful with Agile.
The Parts
The book is broken down into sections that provide a logical progression through a typical project.
Part 1 - Introducing Agile
The first two chapters give an overview of what Agile is and some of the primary principles. One example is "Done means done". There is no 80% complete. A feature is either done or it is not -- this includes everything through acceptance testing. A feature is not done until it is ready to be deployed.
Another key feature of Agile is the self-organizing team. Chapter 2 talks about the different members of the team, how each of them is vital to the project's success, and how people fall into these roles naturally or by choice.
Ultimately, everything is the responsibility of the Team. So, if you see something that needs to be done, go ahead and do it (with proper communication, of course).
Part 2 - Agile Project Inception
The next three chapters talk about how to get an Agile project started. This starts with getting everyone on board with the process. The next thing is to make sure that everyone is on the same page with the proposed project.
This book has a very practical approach to starting a project: The Inception Deck. These are 10 items that need to be completed before the project is actually started. These items make sure that we (as a Team) know where we are going, where we are not going, and start to determine how we are going to get there.
One of these items is creating the Elevator Pitch. If you only have 30 seconds to describe your project, how would you do it? Think about your current project; how would you describe it? If you struggle with this, it could mean that your project does not have a clear purpose. It could mean that your project is trying to do too many things at once. It could mean that you're not quite sure where you are going.
Another item is to create a "Not List". This is a list of things that the project is not going to do. This again helps make sure that everyone has the same expectations. If we say up front that we are not going to try to integrate with the legacy system, then we have that out in the open for everyone to see. If any of these "not" items are an issue, then we have a discussion to determine whether they should be moved to the list of things we do want in the project.
These are just two of the items in the Inception Deck that help determine whether the project will give us the benefits that we expect and make sure we're all headed in the same direction.
Part 3 - Agile Project Planning
The next three chapters have to do with planning. We've figured out what our project is all about (with the Inception Deck), now it's time to start planning.
Estimation is always a big part of any planning. One of the things that makes estimation a bit harder with Agile is that we don't have all of the requirements up-front. We have a general idea of the pieces that we need, but the specific "this function needs to do this, this, and this" comes later. So where do we start?
We start by creating User Stories. These are the features that we want in our project. They could be small things like logging in to the system, things that are a bit bigger like showing an inventory list, or things that are much bigger such as checking out a shopping cart.
The idea is that we take these user stories and assign relative levels of effort. We don't know exactly how long each item will take, but we do know which items are bigger or smaller than other items. We take this information and organize these according to some system. A point system (1 for small, 3 for medium, 5 for large) will give us an idea of how long (relatively) each feature will take.
At this point, we don't know how long the project will take to complete because we don't know how many "points" our team can handle on a regular basis. It may seem odd to say "we don't know how long this will take to complete." But the reality is that even if we give a hard answer in the beginning, things change so much that the original estimate is almost always wrong.
Part 4 - Agile Project Execution
The next three chapters are all about execution. Now that we have our general plan, how do we execute on it? An important tenet is to "Deliver something of value every week." Now this may not always be practical (and it may be that we are delivering value every two or three weeks, depending on our plans). But the general idea is that we are constantly moving forward with completed features.
This doesn't mean that we necessarily have a useful product. Usually, we need a critical mass of features before the final product is useful to the end users. But we are completing the features.
And remember "Done means done". When we get to the delivery part of the process, we have completed features that are ready to deploy. We don't have a whole bunch of 90% complete features. If we find that we aren't actually finishing everything, then we need to stop any new development and go back to finish up the things we have in progress.
Once we start delivering, we find out the velocity of our team -- how many "points" we are able to handle during each of our development cycles. This will take a couple of cycles to figure out (the first one will be slow, and there will always be things that either speed-up or slow-down the process).
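To make the velocity idea concrete, here is a small sketch (the numbers and names are made up for illustration, not from the book): average the points completed in past iterations, then divide the remaining backlog by that average to forecast how many iterations are left.

```csharp
using System;
using System.Linq;

public static class VelocityForecast
{
    // Average points completed per iteration, based on the history so far.
    public static double Velocity(int[] completedPointsPerIteration) =>
        completedPointsPerIteration.Average();

    // Estimated iterations remaining for the points still in the backlog.
    public static int IterationsRemaining(int remainingPoints, double velocity) =>
        (int)Math.Ceiling(remainingPoints / velocity);
}
```

So a team that completed 8, 12, and 10 points in its first three iterations has a velocity of 10; a 45-point backlog forecasts about 5 more iterations. The first iteration is usually slow, so the forecast firms up as more history accumulates.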
As mentioned earlier, communication is a huge part of Agile. The customer is part of the Team. Face-to-face communication should be happening regularly (ideally daily). But in addition to the people who are part of the team, we need to communicate with the larger group. The recommendation is to set up regular "showcases" with the larger team -- this is often the user group, stakeholders, project funders, and others. This gives the project team a chance to show off the regular progress on the project. When the project is visibly moving forward, people are happy. This is much better than a team disappearing for 6 months and coming back with the wrong thing.
This showcase accomplishes a couple of things. First (as mentioned), it builds trust in the team. Forward progress is seen as a good thing. Second, there is more immediate feedback. If a feature comes out differently than expected, there is a chance to adjust it to better fit the users' needs. Third, even if there is no progress, don't cancel the meeting. This gives the Team a chance to get up and say "nothing significant was done." Yes, it's embarrassing. But it is honest communication, and the Team will not want to go through that again, if possible.
Agile is all about change. We need to figure out what to do when the users want new features, or one of the items in the low-priority list gets moved to the top of the list. Again, communication is the key. If the expectations were set during the project inception, everyone knows what happens next: compromise. We figure out (as a Team) whether we want to do a swap (trade a high-priority feature for a low-priority one) or extend the scope by adding the new feature (this is generally not recommended, as it leads to scope creep and a project that never gets released).
Part 5 - Creating Agile Software
The final four chapters of the book talk about practices that help with Agile development. These include Unit Testing, Refactoring, Test-Driven Development (TDD), and Continuous Integration. These are all introductions to the topics at hand, with recommendations to look into them further.
One thing that I appreciated about these chapters is that Rasmusson emphasized that these techniques may or may not work in your environment. Take and use what works and leave the rest (but be sure to give them a try to see if they do work).
Wrap Up
The Agile Samurai is a very practical introduction to Agile and techniques that can help you be successful. Throughout the book are useful examples and specific things that we can do to keep moving forward in a productive way.
The general approach of "do what works for you" is a good one. The book provides a number of techniques but advises that you determine which are appropriate for your environment. I would recommend this book to folks who are interested in looking further into Agile techniques and how they can be successful with them.
Happy Coding!
Wednesday, March 20, 2013
Dependency Injection Composition Root: Are We Just Moving the Coupling?
I spoke on Dependency Injection at a user group yesterday, and an excellent question came up regarding the composition root and the coupling in our application. I wanted to explore this in a bit more detail.
The Scenario
In the sample code, we take a look at an application that is split into layers (each layer is a separate project):
- View
- ViewModel
- Repository
- Service
The code from the ViewModel constructor looks like this:
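The actual listing did not survive in this copy of the post, but the point is that the ViewModel creates its own concrete repository. A minimal stand-in (the method bodies and data are hypothetical, not the sample's actual code):

```csharp
// Stand-in for the service-backed repository from the sample.
public class PersonServiceRepository
{
    public string[] GetPeople() => new[] { "John", "Mary" };
}

public class MainWindowViewModel
{
    private readonly PersonServiceRepository repository;

    public MainWindowViewModel()
    {
        // Tight coupling: the ViewModel news up a concrete repository itself,
        // so swapping in a SQL or CSV repository means editing this class.
        repository = new PersonServiceRepository();
    }

    public string[] People => repository.GetPeople();
}
```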
There are several drawbacks to this (as is mentioned in the presentation). One is that due to the tight coupling between the ViewModel and the Repository, it would be difficult to add a different repository type (like one that uses a SQL or CSV data store). We would end up modifying our ViewModel with some sort of conditional to decide what sort of repository to use.
But the real question is: should the ViewModel even care what kind of repository it's using? The answer is no. To fix this, we added a layer of abstraction (a repository interface) and used Constructor Injection to pass the responsibility of deciding what concrete type is used to someone else. We end up with the following code in our ViewModel:
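Again the listing is missing from this copy; in shape it looks something like this sketch (interface and type names follow the sample, the bodies are stand-ins):

```csharp
public interface IPersonRepository
{
    string[] GetPeople();
}

public class MainWindowViewModel
{
    private readonly IPersonRepository repository;

    // Constructor Injection: the ViewModel only knows the abstraction;
    // someone else decides which concrete type gets passed in.
    public MainWindowViewModel(IPersonRepository repository)
    {
        this.repository = repository;
    }

    public string[] People => repository.GetPeople();
}

// Any implementation can be injected -- e.g. a CSV-backed stand-in.
public class PersonCsvRepository : IPersonRepository
{
    public string[] GetPeople() => "John,Mary".Split(',');
}
```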
This is good for our ViewModel: it is no longer tightly coupled to a particular repository; it only knows about an abstraction (the interface). We can remove the project reference to the PersonServiceRepository assembly. If we want to use a CSV repository (that implements IPersonRepository), then the ViewModel does not need to change (and this is what the presentation code actually shows). Success!
Moving Responsibility
But someone has to be responsible for deciding which concrete Repository we're using. The sample code moves this into the UI application (we'll have a bit more explanation of why this was chosen for this sample in just a bit).
The App.xaml.cs has the startup code for our application. Here's where we put the objects together:
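The startup listing is also missing here. In spirit it is just a few lines of "poor man's DI"; this sketch replaces the WPF types with plain stand-ins so the shape is visible (none of this is the sample's literal code):

```csharp
public interface IPersonRepository { string[] GetPeople(); }

public class PersonServiceRepository : IPersonRepository
{
    public string[] GetPeople() => new[] { "John", "Mary" };
}

public class MainWindowViewModel
{
    public IPersonRepository Repository { get; }
    public MainWindowViewModel(IPersonRepository repository) => Repository = repository;
}

// Stand-in for the MainWindow.xaml code-behind.
public class MainWindow
{
    public object DataContext { get; }
    public MainWindow(MainWindowViewModel viewModel) => DataContext = viewModel;
}

public static class CompositionRoot
{
    // The one place where concrete types are chosen and snapped together:
    // repository -> ViewModel -> View.
    public static MainWindow Compose() =>
        new MainWindow(new MainWindowViewModel(new PersonServiceRepository()));
}
```

Switching to a CSV repository means changing one `new` expression here, and nothing else in the application.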
What we have here is commonly referred to as the "Composition Root". This is the spot in our application where our objects are assembled -- we're taking our construction blocks and snapping them together. This is where we instantiate our Repository and pass it to the ViewModel. Then we take our instantiated ViewModel and pass it to our View (MainWindow.xaml).
If we want to change to a different Repository, we just update our composition root. We don't need to change our ViewModel or any other code in our application.
The Question: Are We Just Moving the Tight Coupling?
This is where the question came up. We've moved where we are instantiating our objects into the core application project. But now the application needs to have references to the PersonServiceRepository as well as the MainWindowViewModel and everything else in our application. Are we just moving the tight coupling?
The answer for this example is technically "Yes". But that's not necessarily a bad thing (and if we don't like it, there are other solutions as well). Let's take a closer look.
I put together the sample code in order to show the basics of Dependency Injection. This means that I simplified code in several places so that we can better understand the basic concepts. In the "real" applications that I've worked on, we did not have a hard-coded composition root; it was more dynamic (we'll look at this concept in a bit).
But even with the tight coupling in our composition root, we have loose coupling between our components (View, ViewModel, Repository). This means we still get the advantages that we were looking for. If we want to add a new Repository type, we don't need to modify our ViewModel. We do need to modify our composition root, but that's okay because our composition root is actually responsible for snapping the blocks together.
In the full sample, we also see that even with the tight coupling in the composition root, we still get the advantage of extensibility (by easily adding different repositories) and testability (by having "seams" in the code that make it easier to isolate code for unit tests).
But, we have some other options.
Bootstrapping
When using dependency injection and/or creating modular applications, there's a concept of "bootstrapping" the application. The bootstrapper is where we put the code that the application needs to find everything that it needs.
We could easily add another layer to this application:
- Bootstrapper
- View
- ViewModel
- Repository
- Service
This gives us a good place to start looking at the dynamic loading features of our tools. The sample code shows how we can use Unity to explicitly register our dependencies (through code or configuration). But we can also write a little bit of code to scan through a set of assemblies and automatically register the types that it finds there. If you want to pursue this route, you can take a look at Mark Seemann's book (Dependency Injection in .NET) or perhaps look at a larger application framework (such as Prism) that helps with modularization and dynamic loading.
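The "scan and auto-register" idea can be sketched in a few lines of reflection. This is a toy convention (each interface maps to its single implementer in the assembly), not Unity's or any other container's real registration API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

public static class ConventionScanner
{
    // Build a catalog mapping each interface to the one concrete class
    // in the assembly that implements it. Real containers offer far
    // richer conventions; this just shows the mechanism.
    public static Dictionary<Type, Type> Scan(Assembly assembly) =>
        assembly.GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract)
            .SelectMany(impl => impl.GetInterfaces(), (impl, itf) => (itf, impl))
            .GroupBy(p => p.itf)
            .Where(g => g.Count() == 1)   // only unambiguous matches
            .ToDictionary(g => g.Key, g => g.Single().impl);
}

// Hypothetical types so the scan has something to find.
public interface IGreeter { string Greet(); }
public class Greeter : IGreeter { public string Greet() => "hello"; }
```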
Lazy Instantiation
Our sample shows "eager loading" -- that is, we're creating all of our objects up-front (whether we use them immediately or not). This is probably not the best approach for an application of any significant size. Again, the sample is purposefully simple so that we can concentrate on the core DI topics.
But we don't have to create our objects at the beginning. For example, when using Unity, we just need to put our registrations somewhere before we use the objects. The most convenient place is to register the objects that we need in the composition root / bootstrapper.
Registration does not instantiate any of these objects -- it simply puts them in a catalog so that the Unity container knows where to find the concrete types when it needs them. The objects are created when we start to resolve the objects from the container. Our sample container code shows a "Resolve" of the MainWindow right in the composition root. This has the effect of instantiating all of the required objects (the Repository, ViewModel, and View).
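To make the "catalog, not instances" behavior concrete, here is a toy container (deliberately not Unity; Unity's own API is `RegisterType`/`Resolve`). Registering records a recipe; nothing is created until `Resolve` is called:

```csharp
using System;
using System.Collections.Generic;

public class ToyContainer
{
    private readonly Dictionary<Type, Func<object>> catalog =
        new Dictionary<Type, Func<object>>();

    public int InstancesCreated { get; private set; }

    // Registration only records a recipe -- no object is created yet.
    public void Register<TService>(Func<TService> factory)
    {
        catalog[typeof(TService)] = () => { InstancesCreated++; return factory(); };
    }

    // Objects come into existence only when resolved.
    public TService Resolve<TService>() => (TService)catalog[typeof(TService)]();
}
```

Delaying the `Resolve` call until a View is actually requested is exactly what gives us the lazy instantiation described next.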
But we could resolve the View only when we are about to use it. In that case, none of the objects are instantiated up front. They get loaded as we need them.
This is a great approach, especially if you are doing modular programming. And this is exactly what we did in a recent project that uses Prism. In that project, we had a number of isolated modules (each module contained the Views and ViewModels for a particular function). When a module is created, it registers all of its types with the Unity container. Then when a particular View is requested (through the Navigation features of Prism), that View is resolved from the container. At that point, the View is instantiated (along with any dependencies).
Lots of Options
There are a lot of options for us to choose from. Ultimately, which direction we go depends on the requirements of our application. If we don't need the flexibility of swapping out components at runtime, then we can use compile-time configuration (which still gives us the advantages of extensibility and testability). If we have a large number of independent functions, then we can look at creating isolated modules that only create objects as they are needed.
Remember that Dependency Injection is really a set of patterns that allows us to write loosely-coupled code. The specific implementation is up to us. There are plenty of examples as well as tools and frameworks to choose from. Take a quick look to see which ones will fit in best with your environment.
Happy Coding!
Registration does not instantiate any of these objects -- it simply puts them in a catalog so that the Unity container knows where to find the concrete types when it needs them. The objects are created when we start to resolve the objects from the container. Our sample container code shows a "Resolve" of the MainWindow right in the composition root. This has the effect of instantiating all of the required objects (the Repository, ViewModel, and View).
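In Unity terms, the registration/resolution split looks something like this. This is a sketch only; the exact registrations in the sample may differ:

```csharp
// Composition root using the Unity container (sketch).
var container = new UnityContainer();

// Registration: catalog the type mappings.
// Nothing is instantiated at this point.
container.RegisterType<IPersonRepository, PersonServiceRepository>();
container.RegisterType<MainWindowViewModel>();

// Resolution: this is when the object graph is actually built.
// Unity creates the ViewModel and its repository dependency here.
var window = container.Resolve<MainWindow>();
window.Show();
```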
But we could resolve the View only when we are about to use it. In that case, none of the objects are instantiated up front. They get loaded as we need them.
This is a great approach, especially if you are doing modular programming. And this is exactly what we did in a recent project that uses Prism. In that project, we had a number of isolated modules (each module contained the Views and ViewModels for a particular function). When a module is created, it registers all of its types with the Unity container. Then when a particular View is requested (through the Navigation features of Prism), that View is resolved from the container. At that point, the View is instantiated (along with any dependencies).
Lots of Options
There are a lot of options for us to choose from. Ultimately, which direction we go depends on the requirements of our application. If we don't need the flexibility of swapping out components at runtime, then we can use compile-time configuration (which still gives us the advantages of extensibility and testability). If we have a large number of independent functions, then we can look at creating isolated modules that only create objects as they are needed.
Remember that Dependency Injection is really a set of patterns that allows us to write loosely-coupled code. The specific implementation is up to us. There are plenty of examples as well as tools and frameworks to choose from. Take a quick look to see which ones will fit in best with your environment.
Happy Coding!
Sunday, March 17, 2013
Book Review: JavaScript: The Good Parts
Web programming continues to grow in importance -- especially in our current world with its plethora of devices running on a variety of platforms. Whether we like it or not, JavaScript has become the lingua franca of device programming. Every web-connected device that we have -- desktop, laptop, tablet, phone, phablet, game console -- understands JavaScript.
This led me to read JavaScript: The Good Parts by Douglas Crockford (Amazon link). This is a very dense book. A lot of information is presented in a very small space. Even though one could read through it very quickly (100 pages + appendices), you will probably find yourself spending a lot of time with each section to fully understand what's going on.
Last Decade vs. Today
My primary experience with JavaScript is from the early 00's when we were using it as a scripting language that added a bit of functionality to our otherwise static webpages. I would populate combo boxes based on JavaScript arrays, enable/disable buttons, and do simple DOM manipulation (such as showing/hiding divs on a page).
Back then, debugging was all but non-existent. Many of us remember a time where debugging consisted of running the page in a browser and looking for the yellow triangle in the corner. That told us "something didn't work" -- and that was it.
Today, JavaScript is used for more than just simple DOM manipulation (although it is still used for that). A huge number of libraries have entered popular use: jQuery, Knockout, Node, Backbone, and many, many more. Plus, we also have JavaScript wrappers (such as CoffeeScript and TypeScript) that compile down to plain JavaScript but allow us to use different syntax and idioms. JSON (JavaScript Object Notation) has grown as a light-weight data format that is preferred to XML in many situations.
In short, JavaScript is hard to avoid if you want to do relevant web programming. But this leads to a problem.
The Primary Problem with JavaScript
The reason why most developers run into problems with JavaScript is that they think that they already know JavaScript. On the surface, it looks fairly easy: it follows a C-style syntax and the idioms look familiar. But there are a lot of misconceptions.
I've been fortunate enough to know Troy Miles (a mobile web guru in Southern California) and hear him speak a number of times about JavaScript and web programming. (If you're interested in mobile and/or JavaScript, I would highly recommend that you check out his blog http://therockncoder.blogspot.com/ and YouTube channel http://www.youtube.com/rockncoder).
Because I had attended many of Troy's talks, I was prepared for what I found in JavaScript: The Good Parts. One of the key lessons is "Don't think that you already know JavaScript." JavaScript is a language just like any other programming language. We wouldn't expect to walk up to Haskell or Ruby and start programming competently without actually learning the language. Why should we treat JavaScript any differently?
The Good Parts
The reason that Douglas Crockford references "The Good Parts" is that he's only covering a portion of JavaScript. He starts by saying that JavaScript is full of bad stuff -- things that you should avoid because there's no way to use them without getting yourself into trouble. But he argues that if we only stick with the "good parts" of JavaScript, we will find an expressive, elegant language that we can use to meet our needs.
One of the misconceptions about JavaScript is that it is related to Java. But it's not. Java is related to JavaScript the same way that car is related to carpet. Because of this misconception, developers often expect to treat JavaScript like an object-oriented language. That leads to much pain.
In reality, JavaScript is a functional language. If we treat it as such, then we'll be able to take full advantage of what it has to offer. Functional programming is a different way of thinking from object-oriented programming, so this means that OO developers must learn the differences.
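A tiny example of this functional style (my own example, not from the book): functions are values that can be stored, passed around, and returned.

```javascript
// Functions are first-class values: they can be stored in variables,
// passed as arguments, and returned from other functions.
var makeAdder = function (x) {
    // Returns a new function that "closes over" x.
    return function (y) {
        return x + y;
    };
};

var addFive = makeAdder(5);
console.log(addFive(3));   // 8

// Higher-order function: takes another function as an argument.
var twice = function (fn, value) {
    return fn(fn(value));
};
console.log(twice(addFive, 0));   // 10
```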
Helpful Bits
I found quite a bit of useful information in this book. One of these has to do with the grammar of the language. The book provides railroad diagrams that show the syntax. These diagrams show up throughout the book and are collected for reference in an appendix. You can see these on the O'Reilly website: http://oreilly.com/javascript/excerpts/javascript-good-parts/syntax-diagrams.html.
Another part that I found useful was how objects are managed. Object literals give us a way to create object values. In addition, prototyping gives us a pseudo-inheritance model that lets us create and update values in a single location that can affect multiple objects.
The biggest thing that I took away from this book is the dynamic nature of the language. Values can be added or updated at any time. If you ask for a value that doesn't exist, then you simply get an "undefined" back -- there are no errors for asking for something that's not there.
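These ideas fit into a short sketch (my own example, not from the book): object literals, prototype sharing, and the forgiving behavior of missing properties.

```javascript
// Object literal: create an object value directly.
var person = { name: "Ada", rating: 7 };

// Prototype: create a new object that delegates to "person".
var employee = Object.create(person);
employee.title = "Engineer";

console.log(employee.name);    // "Ada" -- found on the prototype
console.log(employee.title);   // "Engineer" -- own property

// Updating the prototype affects every object that delegates to it.
person.rating = 9;
console.log(employee.rating);  // 9

// Asking for a missing property is not an error.
console.log(employee.salary);  // undefined
```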
The Bad Parts
Douglas Crockford talks about several bad parts (in fact, there are appendices dedicated to "The Bad Parts" and "The Awful Parts"). These include things like global variables, scope, semicolon insertion, "falsy" values, and several others.
What's good about this is that we get a warning about things to watch out for. These are things that are part of the JavaScript language and perfectly valid. However, if we use them in our code, we will probably find that they are more trouble than they are worth. They often lead us to false assumptions or inconsistent behavior. These warnings are invaluable.
In addition to the warnings, Crockford provides several work-arounds and tips. For example, JavaScript allows you to declare variables anywhere in a scope (including after they are used). Scope is based on the function, so it's best to put all of your variables at the top of a function. This is especially important since we won't get an error if we reference a variable that hasn't been declared.
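A quick illustration of that function scope and hoisting behavior (my own example): the declaration of x is hoisted to the top of the function, but its assignment is not.

```javascript
function demo() {
    // "x" already exists here (its declaration is hoisted),
    // but its value is still undefined -- no error is thrown.
    var before = typeof x;   // "undefined"

    var x = 10;
    return before + ":" + x;
}

console.log(demo());   // "undefined:10"
```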
Wrap Up
JavaScript is a vital language in today's multi-device world. We need to make sure that we take the time to actually learn the language. It is tempting to think that we already know it (since the syntax looks so familiar). But if we study the language, we find that JavaScript is a lot different from what we have assumed.
JavaScript: The Good Parts is a great read for anyone wanting to break their assumptions and start down the road toward really learning the language. With this book as a start, we are on our way to using JavaScript as the effective and elegant language that it can be.
Happy Coding!
Thursday, March 14, 2013
Scummy Behavior: Preying on Windows Users
I got a call from one of those "There's a problem with your computer" places a few days ago. Since I had some free time, I decided to talk to them to see what it was all about. What I found is a group of people who have some of the scummiest behavior that I've seen in a while. The worst part of it is that it would be very easy to fall for, not just for people who are computer illiterate, but for computer literate people as well.
The Scenario
So, I got a call from someone telling me that my computer was reporting errors. The first thing I did was ask how they knew that the errors were coming from my computer. This particular person didn't leave the script, so I kept interrupting him. I asked him what IP address the errors were coming from. He just went back to the script. When he saw that I wasn't going to stop asking questions, he decided to transfer me to his supervisor.
The Hook
The next person I talked to actually left the script for a little bit. When I asked how he knew that the errors were coming from my computer, he said that Windows collects errors and sends them back to Microsoft. They get these errors and help people fix the problem.
Now, I didn't get into "how did you get my phone number" because I knew that wouldn't get me anywhere. So, I asked how he knew that these errors were coming from *my* computer. He said he'd show me.
I won't go through step by step what he asked me to do. But it is very apparent that their script was designed for people who didn't use the computer much. Things along the lines of "next to the Control key there's a key with the Windows logo on it. Do you see that?"
Ultimately, he had me open a command prompt and asked me to type "assoc". This primarily displays a list of file associations -- like that .xls is an Excel spreadsheet file. But these folks take advantage of something else that displays in the list.
The person asked me about the "longest line near the bottom." Now, the entries at the bottom of my list were associations that had to do with Visual Studio, so I figured that he wanted me to find something a little further up. Ultimately, he had me look at this line:
.ZFSendToTarget=CLSID\{888DCA60-FC0A-11CF-8F0F-00C04FD7D062}
He then proceeded to read off the GUID part to "prove" that the errors were coming from my machine. That really pissed me off, because I know that this value is not unique to my machine (more on this below), but the average Windows user would not necessarily know that.
A Little Bit of Humor
The next thing I asked him was if he knew what a CLSID was.
For those of you who have done COM programming (twitch, twitch), you know that class IDs for COM objects *must* be the same on every single machine. That's the whole point. That's how Windows identifies COM objects.
I told the person on the phone that CLSIDs were not unique to the computer, and that they are the same on every single machine. I even told him that I was sitting in front of two computers that had the same value (which was a lie -- oops).
Finally, I asked him if he was sitting in front of a computer. He said yes, so I asked him to look at his ZFSendToTarget value. There was a slight pause, and then an "Oh, my." Then he went to get his supervisor.
The only consolation that I get from this exchange is the hope that this person didn't realize that he was involved in a scam -- that he had just taken a job in a call center. I hope that when he realized that what he was telling the people he called was a lie, he left. That's what I'm really hoping.
The Technical Lead
The last person I talked to described himself as the technical lead for the center. I was pretty much done with my exploration at this point. So, I told him that he didn't know what he was talking about. I asked him to follow the steps on his machine, and he told me that he got a different value (which was obviously a lie).
Eventually he got tired of me and hung up.
The Worst Part of This Scam
The worst part of this scam is that it sounds very reasonable. I'm worried about people like my mom getting this type of call. (Note: I'm not actually worried about my mom getting this call because she would call me to ask about it before doing anything. She also has a tendency to leave dialog boxes on her screen until she has a chance to talk to me. Better safe than sorry. She also reads my blog: "Hi, Mom!").
If you don't know what a CLSID is, then it sounds reasonable that this long, scary-looking GUID is unique to your computer. It certainly looks like a unique value. And it's not just people like my mom I'm worried about. Technically literate people like my brother could also be sucked in by this (again, not specifically my brother because he'd analyze this a bit more). My brother uses Windows computers all the time, rebuilds laptops, and has done some programming in the past. But I doubt that he's ever done COM programming (twitch, twitch), so he probably doesn't know what CLSIDs are for.
This scam is nothing new. Right after the call, I did some quick Bingling and found that there were lots of other people talking about this scam.
BTW, if you're wondering what "ZFSendToTarget" means, it has to do with the right-click option in File Explorer. If you right-click on a file, select "Send To", then you see an option for "Zipped (compressed) folder". This CLSID is for that.
Again, what I hate about this is that it targets everyone who's never done COM programming (twitch, twitch), which is a huge number of Windows users and even a large number of Windows developers. (And you can always recognize someone who's done COM programming by the uncontrolled twitch whenever it's mentioned.)
Happy Coding!
Wednesday, March 13, 2013
More Delegate Exception Handling
I was speaking about Delegates at a user group a couple weeks ago and decided it was time to dive a little bit deeper into exception handling with Multi-Cast Delegates. As a reminder about how multi-cast delegates work (and what happens if one of the methods throws an exception), check out my previous article: Exceptions in Multi-Cast Delegates.
Standard Behavior
As a quick review, all delegates in .NET are multi-cast delegates, meaning that multiple methods can be assigned to a single delegate. When that delegate is invoked, all of the methods get run synchronously in the order that they were assigned.
Here are the delegate assignments that we were looking at before (with the declaration of the delegate variable included for context):
Action<List<Person>> act = null;
act += p => Console.WriteLine(p.Average(r => r.Rating).ToString());
act += p => { throw new Exception("Error Here"); };
act += p => Console.WriteLine(p.Min(s => s.StartDate));
When the delegate is invoked normally, the execution stops with the exception in the second method. The third method never gets run. (Check the previous article for details.)
The Issue
The main issue is that we are rarely on "both sides" of the delegate. Usually we are either creating methods to assign to a delegate (to hook into an existing framework or API), or we are creating an API with a delegate for other people to use.
Think of it from this point of view: Let's assume that we have a publish/subscribe scenario (after all, events are just special kinds of delegates). In that case, we have the publisher (the API providing the delegate to which we can assign methods) and the subscribers (the methods that are assigned).
If we are creating subscriber methods, then the solution is easy. We just need to make sure that we don't let any unhandled exception escape our method. If our code is not working correctly, then we need to trap the error and make sure that it doesn't impact any other subscribers.
But things are a little more complicated if we are the publisher. As we saw in the previous examples, if there is a misbehaving subscriber, then other subscribers may be impacted. We need to avoid this. If we are responsible for the API, then we need to ensure that we notify every subscriber, regardless of whether they are behaving or not.
A More Robust Solution
We've already seen that the code we had previously just won't cut it:
try
{
act(people);
}
catch (Exception ex)
{
Console.WriteLine("Caught Exception: {0}", ex.Message);
}
To fix this, we can dig a little deeper into the delegate itself. It turns out that all delegates have a method called "GetInvocationList()". This will give us a collection of all of the methods that are assigned to the delegate (the return is technically an array of Delegate objects). If there are no methods, then the list will be empty; otherwise, it will have all of the methods assigned. This gives us a chance to handle things a little more manually.
foreach (var actDelegate in act.GetInvocationList())
{
try
{
actDelegate.Method.Invoke(actDelegate.Target, new object[] { people });
}
catch
{
Console.WriteLine("Error in delegate method.");
}
}
Let's quickly walk this code. First we call GetInvocationList() on our delegate and then "foreach" through it. As noted above, each item (actDelegate) will be a System.Delegate object.
Now, inside the loop, we can access each method individually. The Delegate has a Method property that gives us a MethodInfo object. MethodInfo has an Invoke method that allows us to run the delegate method. The syntax here looks a bit odd. The first parameter is the object on which to invoke the method. Here we use actDelegate.Target -- the instance that the subscriber method belongs to (for static methods, including non-capturing lambdas, this value is simply ignored). Using the delegate's own Target (rather than a hard-coded instance) keeps this working no matter which class the subscriber methods live on.
The next parameter is an object array with all of the parameters that will be passed to the method being invoked. In our case, we have a single parameter (List<Person>). So, we just have to create a new object array with that parameter as a member.
This seems like a lot of work, but because we are invoking each method individually, we can wrap each invocation in a try/catch block (which is exactly what we did). Now, if a method misbehaves, we simply catch the exception and move on to the next one in the list.
If we were to use our sample assignments above, we get the following output:
6.42857142857143
Error in delegate method.
10/17/1975 12:00:00 AM
So, we've successfully coded around a misbehaving method and ensured that all of the methods assigned to our delegate are run.
Is This Really Necessary?
The question on the top of your mind is "Is this really necessary?" After all, mutli-cast delegates are designed to allow us to make a single invocation that runs all of the assigned methods. Should we add this much complexity?
The answer is "It depends." (You've probably noticed that "it depends" is my standard answer.) If I were writing a delegate where I control both ends (for example, I'm writing both the publisher and the subscribers), then I probably wouldn't code this way. I would put more effort into ensuring that the subscribers behave appropriately and would not throw an unhandled exception.
However, if I were creating an API for general distribution, I would be much more protective about my delegate invocations. In that scenario, we need to make sure that a single bad input cannot bring down the system.
Wrap Up
Delegates are extremely powerful and useful -- multi-cast delegates even more so. They can help us make our system extensible without requiring modification. They can offer "hooks" for other developers to run their own code. They give us a way to do pseudo-functional programming in our object-oriented C# environment.
We need to be aware of what can go wrong when we add these tools. As we've seen, it is not difficult to handle exceptions (either in the methods or in the delegate invocation). When we make sure that we have our bases covered, then we can minimize unpleasant surprises in our system.
Happy Coding!
Standard Behavior
As a quick review, all delegates in .NET are multi-cast delegates, meaning that multiple methods can be assigned to a single delegate. When that delegate is invoked, all of the methods get run synchronously in the order that they were assigned.
Here are the delegate assignments that we were looking at before:
act += p => Console.WriteLine(p.Average(r => r.Rating).ToString());
act += p => { throw new Exception("Error Here"); };
act += p => Console.WriteLine(p.Min(s => s.StartDate));
When the delegate is invoked normally, the execution stops with the exception in the second method. The third method never gets run. (Check the previous article for details.)
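To see this behavior in isolation, here's a minimal, self-contained sketch. (It logs to a string list instead of using the Person class from the previous article -- that substitution is mine, just to keep the sample runnable on its own.)

```csharp
using System;
using System.Collections.Generic;

var log = new List<string>();

// three subscribers, assigned in order; the second one misbehaves
Action act = () => log.Add("First");
act += () => { throw new Exception("Error Here"); };
act += () => log.Add("Third");  // never runs

try
{
    act();  // methods run synchronously, in assignment order
}
catch (Exception ex)
{
    log.Add("Caught: " + ex.Message);
}

// only the first subscriber ran before the exception stopped the chain
Console.WriteLine(string.Join(" / ", log));
```

The third subscriber never gets a chance to run -- exactly the problem we're about to work around.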
The Issue
The main issue is that we are rarely on "both sides" of the delegate. Usually we are either creating methods to assign to a delegate (to hook into an existing framework or API), or we are creating an API with a delegate for other people to use.
Think of it from this point of view: Let's assume that we have a publish/subscribe scenario (after all, events are just special kinds of delegates). In that case, we have the publisher (the API providing the delegate to which we can assign methods) and the subscribers (the methods that are assigned).
If we are creating subscriber methods, then the solution is easy. We just need to make sure that we don't let any unhandled exception escape our method. If our code is not working correctly, then we need to trap the error and make sure that it doesn't impact any other subscribers.
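From the subscriber side, "don't let an exception escape" is just a try/catch around the handler body. Here's a sketch (DoRealWork is a hypothetical stand-in for whatever the subscriber actually does):

```csharp
using System;
using System.Collections.Generic;

// a stand-in for the subscriber's real work (hypothetical)
void DoRealWork(List<string> items)
{
    if (items.Count == 0)
        throw new InvalidOperationException("no items to process");
    Console.WriteLine("Processed {0} item(s)", items.Count);
}

// a well-behaved subscriber: no unhandled exception escapes this method
void SafeSubscriber(List<string> items)
{
    try
    {
        DoRealWork(items);
    }
    catch (Exception ex)
    {
        // log it locally; do not let it impact the other subscribers
        Console.WriteLine("Subscriber error: {0}", ex.Message);
    }
}

SafeSubscriber(new List<string>());        // error is trapped here
SafeSubscriber(new List<string> { "a" });  // normal path
```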
But things are a little more complicated if we are the publisher. As we saw in the previous examples, if there is a misbehaving subscriber, then other subscribers may be impacted. We need to avoid this. If we are responsible for the API, then we need to ensure that we notify every subscriber, regardless of whether they are behaving or not.
A More Robust Solution
We've already seen that the code we had previously just won't cut it:
try
{
    act(people);
}
catch (Exception ex)
{
    Console.WriteLine("Caught Exception: {0}", ex.Message);
}
To fix this, we can dig a little deeper into the delegate itself. It turns out that all delegates have a method called "GetInvocationList()". This will give us a collection of all of the methods that are assigned to the delegate (the return is technically an array of Delegate objects). If there are no methods, then the list will be empty; otherwise, it will have all of the methods assigned. This gives us a chance to handle things a little more manually.
foreach (var actDelegate in act.GetInvocationList())
{
    try
    {
        actDelegate.Method.Invoke(this, new object[] { people });
    }
    catch
    {
        Console.WriteLine("Error in delegate method.");
    }
}
Let's quickly walk this code. First we call GetInvocationList() on our delegate and then "foreach" through it. As noted above, each item (actDelegate) will be a System.Delegate object.
Now, inside the loop, we can access each method individually. The Delegate has a Method property that gives us a MethodInfo object. MethodInfo has an Invoke method that allows us to run the delegate method. The syntax here looks a bit odd. The first parameter is the object on which to invoke the method. In our case, we use "this" because our delegate exists in the class instance that we are in.
The next parameter is an object array with all of the parameters that will be passed to the method being invoked. In our case, we have a single parameter (List<Person>). So, we just have to create a new object array with that parameter as a member.
This seems like a lot of work, but because we are invoking each method individually, we can wrap each invocation in a try/catch block (which is exactly what we did). Now, if a method misbehaves, we simply catch the exception and move on to the next one in the list.
If we were to use our sample assignments above, we get the following output:
6.42857142857143
Error in delegate method.
10/17/1975 12:00:00 AM
So, we've successfully coded around a misbehaving method and ensured that all of the methods assigned to our delegate are run.
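As an aside, there's a slightly more direct option than going through the Method property: Delegate also has a DynamicInvoke method that takes the parameters directly (and wraps any subscriber exception in a TargetInvocationException, which our catch still handles). Here's the same loop sketched that way, with a simple string list standing in for the article's Person list:

```csharp
using System;
using System.Collections.Generic;

var people = new List<string> { "Alice", "Bob" };
var log = new List<string>();

Action<List<string>> act = p => log.Add("Count: " + p.Count);
act += p => { throw new Exception("Error Here"); };
act += p => log.Add("First: " + p[0]);

foreach (var actDelegate in act.GetInvocationList())
{
    try
    {
        // no target object or object[] gymnastics needed
        actDelegate.DynamicInvoke(people);
    }
    catch
    {
        log.Add("Error in delegate method.");
    }
}

// all three subscribers were attempted, despite the throw in the second
Console.WriteLine(string.Join(" / ", log));
```

Either way, the important part is the same: each subscriber gets its own invocation wrapped in its own try/catch.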
Is This Really Necessary?
The question at the top of your mind is "Is this really necessary?" After all, multi-cast delegates are designed to let us make a single invocation that runs all of the assigned methods. Should we add this much complexity?
The answer is "It depends." (You've probably noticed that "it depends" is my standard answer.) If I were writing a delegate where I control both ends (for example, I'm writing both the publisher and the subscribers), then I probably wouldn't code this way. I would put more effort into ensuring that the subscribers behave appropriately and would not throw an unhandled exception.
However, if I were creating an API for general distribution, I would be much more protective about my delegate invocations. In that scenario, we need to make sure that a single misbehaving subscriber cannot bring down the system.
Wrap Up
Delegates are extremely powerful and useful -- multi-cast delegates even more so. They can help us make our system extensible without requiring modification. They can offer "hooks" for other developers to run their own code. They give us a way to do pseudo-functional programming in our object-oriented C# environment.
We need to be aware of what can go wrong when we add these tools. As we've seen, it is not difficult to handle exceptions (either in the methods or in the delegate invocation). When we make sure that we have our bases covered, then we can minimize unpleasant surprises in our system.
Happy Coding!
Monday, March 11, 2013
March Speaking Engagements
I have a few speaking engagements scheduled for March 2013. Hopefully, I'll see you at an event soon.
Tuesday, March 19, 2013
Disney .NET Developers Group
Disney employees and Cast Members can check TechSpot for details. Webcast available for those not in Burbank.
Burbank, CA
Wednesday, March 20, 2013
Pasadena .NET User Group (formerly San Gabriel Valley .NET Developers Group)
http://sgvdotnet.org/
Pasadena, CA
Saturday, March 23, 2013
Utah Code Camp 2013
http://www.utahgeekevents.com/
Sandy, UT
The topic this month is Dependency Injection: A Practical Introduction. We'll be looking at how Dependency Injection (DI) can make our applications more easily testable, extensible and maintainable. Starting with an application with no DI, we'll add DI manually, and then look at how a DI container can make life easier. When we're done, you should have a good feel for where to start and why Dependency Injection is a great addition to your toolbox.
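To give a flavor of the "add DI manually" step: the core move is handing a component its dependency from the outside instead of letting it construct the dependency itself. Here's a minimal sketch (using a delegate as the abstraction and a factory function in place of a constructor; the names are hypothetical, not from the session materials):

```csharp
using System;

// the "component": it receives its dependency (a greeting service,
// modeled here as a delegate) from the outside rather than newing one up
Func<string, string> MakeGreeter(Func<string, string> greetingService)
{
    return name => "Greeter: " + greetingService(name);
}

// production wiring
var greeter = MakeGreeter(name => "Hello, " + name);
Console.WriteLine(greeter("World"));      // Greeter: Hello, World

// test wiring: swap in a fake without changing the component at all
var testGreeter = MakeGreeter(name => "[fake] " + name);
Console.WriteLine(testGreeter("World"));  // Greeter: [fake] World
```

Being able to swap the dependency like this is what makes the application more testable and extensible; a DI container just automates the wiring.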
Happy Coding!
Sunday, March 10, 2013
Book Review: Test-Driven Development by Example
I've mentioned several times how I'm interested in Test-Driven Development (TDD) but that I'm not quite there in my own coding (lots of unit testing, a few "test-first" scenarios, but mostly verification tests). To help me move toward TDD, I recently read Test-Driven Development by Example by Kent Beck (Amazon link). This is a relatively short book (around 200 pages) but is densely packed with good techniques.
TDD in Three Parts
Test-Driven Development by Example is broken up into 3 parts, each with a different focus. A quick introduction to the TDD process (red, green, refactor) kicks things off. For those unfamiliar with TDD, the idea is that we always write the tests first. So, the first thing we do is write a failing test (red). Then, we write just enough code to get that test to pass (green). Finally, we eliminate duplication and/or rework the design (refactor) to make the code that was "just enough to pass" into code that we don't mind leaving in the system.
Part I: The Money Example
The first part walks through an example of creating a way to manage money with different currencies (in Java). I found this section to be the most useful. Beck starts by creating a written list of things that need to be tested based on the functionality required.
In the money example, this comes down to the following:
$5 + 10 CHF = $10 if rate is 2:1
This is where we want to end up, but it's way too big of a step. So, the test is broken down into smaller and smaller steps that we can actually write tests and code for.
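For example, one of the first slices Beck carves off is plain multiplication: $5 * 2 should be $10. The book's example is in Java; here's a C# sketch of that kind of first small test, with the Dollar object reduced to a local function so the sample stays self-contained:

```csharp
using System;

// just enough "production" code to make the first small test pass
int Times(int dollarAmount, int multiplier) => dollarAmount * multiplier;

// the test is written first; watching it go from red to green
// is the feedback loop that drives the next small step
int product = Times(5, 2);
if (product != 10)
    throw new Exception("red: expected 10 but got " + product);
Console.WriteLine("green: $5 * 2 = $" + product);
```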
I won't go into the details here (that's why you should read the book). Ultimately, to get to this final functionality, we end up with a collection of 30 or so tests (sorry, I didn't actually count them). This may sound like a lot, but it turns out to be worthwhile. Throughout the process, the focus is getting small pieces of functionality into the system (even if it is just verifying equality).
The result of the continuous red/green/refactor process is a design that evolves based on the actual needs of the system rather than what we think the system might need.
Part II: The xUnit Example
The second part is equally interesting; Beck builds a unit testing framework with Python. Python is used because of the dynamic nature of the language. It seems a bit meta to be building a unit testing framework using TDD (and granted, it seems a bit awkward getting started). But ultimately, building a test system based on its ability to test itself is intriguing.
The actual example may not be that applicable (most of us don't build unit testing frameworks). But the process is extremely useful. Again, Beck takes the process and breaks it down into small, easily testable steps. This keeps a constant forward motion -- we don't get "stuck" anywhere along the way.
Part III: Patterns for Test-Driven Development
The third part is more a catalog of different patterns that can be used to help with TDD. This includes both patterns that are used for the tests themselves as well as the patterns used in the tested code.
The testing patterns are those used throughout the book, so it's good to see a separate (sort-of-formal) explanation along with the example from the earlier code. This serves to reinforce the ideas from the earlier parts of the book.
There is a chapter on Refactoring which is both good and not so good. It's good in the way that it shows different refactoring patterns (such as Move Method and Extract Interface) along with steps that we need to take to implement the pattern. But it is a bit anemic. Most of the patterns just have a page or two of description.
This isn't a book on refactoring, so I wouldn't hold the lack of details against it. Instead, treat it as a good jumping off point to study refactoring in more detail. A great choice for refactoring patterns is Working Effectively with Legacy Code by Michael C. Feathers (recent review). Since I recently read this book, the refactoring patterns were fresh in my mind. Another recommendation that comes up is Refactoring: Improving the Design of Existing Code by Martin Fowler and featuring Kent Beck, among others (Amazon link). As a disclaimer, I haven't actually read Refactoring yet (it's in my stack of "to be read"), but it gets mentioned so often (and it's Martin Fowler), that it's a pretty safe recommendation.
What I Took Away
So, what did I take away from Test-Driven Development by Example?
Don't Think "Too Big"
First, I've been thinking "too big". What was interesting about both examples is how Beck continually broke the problem down into smaller and smaller pieces. My tendency is to think of the problem at the macro level and try to test for that. Instead, I need to think much, much smaller.
The idea behind writing small tests is to keep from getting stuck. If you come across a problem, and you're not quite sure how to approach it, break it down into simpler pieces. This way, you are constantly moving forward. With immediate feedback (watching the red turn to green), we are constantly making progress.
In several places in the book, Beck starts taking larger steps (when he's feeling confident). Sometimes those larger steps work; sometimes they don't. If the larger step doesn't work, then there's no problem. Simply break it down and try again.
One really good piece of advice that Beck gave is to leave a "red" test at the end of a coding session. This will serve as a bookmark when we pick up our coding the next day. And it will help us get back into the mindset of where we left off.
Test Until Bored
Another problem is to figure out when we might be thinking too small. Maybe we get into a position where we think our tests are getting too small -- that we're testing too much. Beck recommends that we "Test until bored."
"Test until bored" is a great approach. Again, it helps to make sure that we remain interested in our code and what we're building. I've heard this repeated by a few different testing folks, including Llewellyn Falco in a recent episode of Hanselminutes (Approval Tests with Llewellyn Falco). This is an excellent podcast, and Llewellyn shows how Approval Tests (an open-source testing "enhancer") can sit on top of your current testing framework to make it easier to test things that would normally be hard to test.
Reconciling Design
I do still have one open issue after reading Test-Driven Development by Example: how much design do we do?
If you've seen my previous posts about design patterns (available here: Learn the Lingo: Design Patterns), you know that one of the reasons I encourage people to learn design patterns is so that we can stay in design mode longer (meaning, we think about our design a bit more before writing any code). This is the exact opposite of what TDD tells us. TDD tells us that if we write our tests in small increments and refactor appropriately, then good design will naturally evolve.
I'll have to admit that I'm still stuck on this one. As an experienced developer, I've got a fairly good idea of what needs to be built in certain circumstances. In those cases, I end up with quite a bit of the design already done by the time I start coding. This may be something that I need to unlearn. Or it may be something that I need to reconcile with TDD. We shouldn't give up what we've learned through our experience. So, probably the best approach is for me to learn how to combine TDD with the design that I think I'll end up with. It could be that TDD takes me in a completely different direction. Time will tell.
Wrap Up
A lot of experienced developers recommend TDD as a way of building well-designed, maintainable apps. It's something that's really hard to ignore. It helps us write code in small pieces. As I recently heard, "If you aren't writing incremental code, then you're writing excremental code." Test-Driven Development by Example gave me the information that I need to jump in and do this in my own coding. Seeing the examples play out in a step-by-step fashion helped me understand the TDD mindset of moving in very small pieces and only building things as the tests demand.
Happy Coding!
Sunday, March 3, 2013
Application Guidance Update: Kona and Prism
Back in December, I mentioned that there were updates coming for application guidance on WPF/Silverlight (Prism) and for Windows Store Apps (Kona). Well, here's a few more interesting things to look at.
Kona: Guidance for Building Windows Store LOB Apps
The biggest news is that the Kona guidance for building Windows Store Apps is moving forward. Brian Noyes was on .NET Rocks last week (http://www.dotnetrocks.com/default.aspx?showNum=847) talking about the work that Microsoft Patterns & Practices is putting into their latest guidance package. If you're interested in carrying over a Prism-like development style to Windows Store Apps, then this episode is definitely worth the listen.
Kona isn't released quite yet; it's still in beta. But Brian mentioned that they are doing (near) weekly drops on the Codeplex pages; check out the latest here: http://konaguidance.codeplex.com/. At the time of this writing, the latest version is from February 22, 2013. It looks like the release will happen sometime in March, but keep your eyes on the site.
Why Not Prism for Windows Store Apps?
One question that came up during the .NET Rocks episode is why Kona is a separate guidance package rather than an extension of the current Prism framework. There are a couple of primary reasons.
First, Windows Store Apps are different enough from WPF and Silverlight apps that the approaches to development may not translate well. So, rather than trying to fit a square peg into a round hole, the team decided to start from scratch. The approach was to first build a Line of Business application (the type that a corporate developer might build) and then work backwards to see what common items can be made easier by extracting them into the Kona framework.
Next, there are certain things in Prism that just don't make sense in the Windows Store App world. For example, Prism includes extensive support for Regions -- a way to put different views into different parts of the shell. In the full-screen world of Windows Store Apps (it's killing me not to just say "Metro" here...), the idea of regions just doesn't fit.
Additionally, Prism has good support for dynamically loading modules. This gives you the ability to drop assemblies into a folder that can then get loaded into the application at runtime. In the Windows Store App world, all of the assemblies need to be included in the package that is submitted to the store. So, there's no way to just "drop in a new assembly". All of the functionality needs to be included up front.
So, it makes sense that the Kona guidance is separate from Prism. I haven't taken a close look at Kona yet (I still haven't jumped into the Windows Store App world), so I don't have any opinions at this point. But, if I do need to build line of business applications for Windows Store Apps, I'll definitely start here.
Prism 4.5
One other interesting thing that Brian mentioned is that Prism is basically "done". That doesn't mean that it has hit end-of-life or anything like that (so don't panic). It just means that the framework has gone as far as it needs to, so we shouldn't expect new features at this point.
Prism has been compiled to work against the .NET 4.5 framework (you can get the latest from Codeplex: http://compositewpf.codeplex.com/releases/view/95815). But the features and functionality are the same as Prism 4.1. Brian did mention that we can expect an updated version when Patterns & Practices releases the next version of Unity, but there won't be any new functionality.
Wrap Up
I'm not necessarily an advocate of using either Prism or Kona. As with any framework, you need to take a close look at the problems that the framework addresses. If you don't have those problems, then it's probably not the right framework for you.
As mentioned previously (Five Months In: Working with Prism and WPF), Prism worked very well on the last major project I was involved with. But we also had many of the concerns that the framework set out to solve (such as Regions, Navigation, Modularity, and MVVM helpers). Your mileage may vary.
It's always good to know what's available. Then we are more likely to pick the right tool for the job when it's time to build a particular type of application. Keep expanding the toolbox.
Happy Coding!