The end of the year is always a good time to take a look back and evaluate progress. It seems like 2012 has been a busy year for me, so let's take a look back at Things I've Done, Things I've Learned, Cool Stuff that Happened, and some Things I Didn't Get To.
Things I've Done
Speaking
It seems like I did a lot of speaking this year. Anyone who's heard me speak has probably picked up on how much I love it, so I try to get out to as many events as I can. This year I presented 33 sessions at 19 events. The events included 5 Code Camps: the 3 So Cal Code Camps, Desert Code Camp in Arizona, and the Silicon Valley Code Camp (my first time there -- wow, it's big).
The rest of the presentations were at various User Groups in So Cal as well as Arizona and Nevada. A big thank you to all of the user groups who hosted me; I always have a great time. (And not that I'm hinting or anything, but I'm available again this year -- just use the INETA request link on the right.)
Blogging
I wrote 54 blog articles this year (55 if you count this one). That's the most so far. The articles are a mixture of technical articles, book reviews, upcoming events, and answers to questions from session attendees. I even had a couple of 5-part article series. It was nice to have some free time earlier in the year. I need to make sure that I set time aside specifically for technical articles.
New Sessions
I came up with 2 new sessions this year complete with code samples and walkthroughs: T, Earl Grey, Hot: Generics in .NET and Dependency Injection: A Practical Introduction. My goal was to come up with two sessions, and I'm glad that I had a chance to present the new topics. Dependency Injection has been a very popular topic, and I'm sure that I'll be speaking on that throughout the coming year.
Things I've Learned
It's been a big year for learning new things. As usual, most of my learning is just-in-time learning for whatever projects that I'm working on. I had a chance to work on a great WPF project (I really love XAML) that uses Prism (XAML application guidance from the Microsoft Patterns & Practices team). If you've been following my blog, you've seen my thoughts on the framework. And if you haven't been following my blog, you can catch up here and here (short version: Prism has worked very well for this particular project).
I've also learned quite a bit about Dependency Injection both through Mark Seemann's excellent book and from experience using it myself. I also had to do a bit of learning outside of my project requirements so that I could write the new session mentioned above.
And one of the big reasons I'm a fan of Dependency Injection is Unit Testing. I really dug into Unit Testing this year, and I'm surprised at how quickly I took to it. I've used mocking and dependency injection to isolate units of code for testing. The thing I like most about unit testing is that I can refactor with confidence. If there's a bit of clumsy code that needs to be reworked, I first make sure that I have good tests of the functionality. Then when I extract methods or update algorithms, I can be sure that the code is still working as intended. As I mentioned in an article earlier this year, I'm still not at TDD yet, but I find myself doing small bits of TDD during my coding process.
Honorable mention goes to MVVM and Tasks. I've been buried in MVVM and doing a bit of fine-tuning to make sure that I have good separation of concerns in the presentation of my applications. I've also dug into Tasks a bit deeper and have been learning about asynchronous calls, UI interaction, exception handling, and unit testing. Lots of good stuff.
And 2 of the most impactful books that I read this year were Clean Code and The Clean Coder. Clean Code has made me rethink some of the ways that I've been writing methods. I'm starting to think smaller with more layers -- something I was reluctant to do before. The Clean Coder has made me think more about myself as a professional -- making sure that my conduct is appropriate, that I'm clear with my communication, and that I live up to my commitments. This is definitely a process (both the code and me), so I'll keep moving forward in both areas in the coming year.
Cool Stuff that Happened
Probably the coolest thing that happened this year was being presented with a Microsoft MVP Award. It was a great honor to be recognized for my development community contributions, and I'm looking forward to attending the MVP summit with the goal of making lots of new MVP friends (in addition to the MVP friends that I already have).
I've been getting a bit of recognition at Code Camps as well. At the So Cal Code Camp in Los Angeles, I had someone ask if I was giving the Design Patterns talk again. And at the Desert Code Camp in Arizona, I had someone ask if I was talking about Lambda Expressions. These people attended my sessions the year before and still remembered not just me, but the topics as well. That makes me feel like I'm really making an impact with the topics that I present.
Another cool thing is that my blog is getting over 1,000 hits per month (I even had close to 2,000 in September). That might not sound like much (and it really isn't), but it's a big increase over the 250 to 350 hits per month that I used to get.
One thing that has directed some traffic to my blog is a mention on StackOverflow (thanks for the referral, Peter Ritchie). For some reason my articles on the BackgroundWorker component get a lot of hits. I guess it's because I have a soft spot for that component. I know that it will probably go away soon (it didn't get included in WinRT), but I'll hang onto it as long as I can. It's great at what it does.
Also for some strange reason, if you Google "MVVM Design Pattern", one of my articles comes up on the first page. That's driven a lot of traffic to my blog (which is really cool), but I'm not quite sure how I got ranked so high. I guess I shouldn't complain because it probably won't last too much longer.
Things I Didn't Get To
There were a couple of things that I didn't get to this year, but there's always next year. First, I was planning on getting my CTT+ certification (Certified Technical Trainer). I'm still interested in pursuing it, but I didn't get to it this year.
I also didn't produce any technical videos this year. I was planning on putting together a handful of quick video tutorials on various topics (like lambda expressions and delegates) but never got around to it. A friend recently asked me for some feedback on a video that he's producing, and I think that might have given me the kick in the butt that I need to start making my own.
A Great Year Gone -- Another One Coming
Overall, it's been a great year. I'm learning and growing as a developer -- that lets me know that I'm still interested and haven't stagnated. I'm still having a great time speaking and mentoring, and I love to help other people. And it's been really great to get some recognition from the developer community.
There's a great year ahead. I've got some exciting things lined up. And I also have no idea what's going to happen with a lot of other things.
Take a few minutes to reflect on what you've learned this year and how you've grown as a developer. If you're not constantly learning, then you're probably falling behind. If you don't already attend a local User Group, find one. It's a great way to keep in contact with other developers, to find out what's going on outside of your company, and to learn about lots of great technologies.
Looking forward to another great year!
Happy Coding!
Monday, December 31, 2012
Monday, December 17, 2012
Update on Windows 8 Guidance
Last week (Five Months In: Working with Prism in WPF), I mentioned that application guidance for Windows 8 is still pending. As an update, Blaine Wastell published an article a few days ago with the roadmap for Windows 8 guidance: Prism on .NET 4.5 and the road to Windows 8 apps. So, it looks like we have something to look forward to from the Patterns & Practices team in the next few months.
Happy Coding!
Sunday, December 9, 2012
Five Months In: Working with Prism in WPF
Several months ago, I put down some initial thoughts regarding Prism: First Impressions: Working with Prism in WPF. Now that I've been working with Prism for five months, I thought it was time for a follow-up.
It turns out that my opinions haven't changed much from those first impressions. Prism is working out very well for the project I'm working on. Taking advantage of the modularity (and the independence of each module) has allowed us to quickly modify and add to our task workflows. We've managed to keep good separation and isolation which has made unit testing much easier. The navigation objects are working as expected. And the helper objects make the code much more compact, readable, and maintainable.
Prerequisites
I have confirmed my opinion that Prism is not for beginning programmers; there are a number of intermediate topics that are necessary to understand. What is interesting is that I have articles or presentations centered around these prerequisites. I didn't plan for this; it just turns out that the topics I've written about happen to revolve around the foundational knowledge required for Prism.
Here's the list from the previous article along with links to my materials:
- Dependency Injection / Inversion of Control -- Dependency Injection: A Practical Introduction
- Interfaces -- IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces
- Delegates / Func<T> / Action<T> -- Get Func<>-y: Delegates in .NET
- Lambda Expressions -- Learn to Love Lambdas
- Events and Event Handlers -- User Controls and Events
- Model-View-ViewModel (MVVM) -- Overview of the MVVM Design Pattern
- Various other Design Patterns -- Learn the Lingo: Design Patterns
Useful Features
In the previous article, I mentioned a number of features that I liked. That list is still accurate. I'll go into a little more detail on 2 of these features.
DelegateCommand
The DelegateCommand allows you to create an ICommand object with much less code. Commanding is used to data bind to buttons in the UI. Let's compare some code.
To implement ICommand yourself is not very complicated. But there is a lot of boiler-plate code in a custom object. Here's a sample command implementation (this is taken from the Dependency Injection session samples):
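A reconstruction of that kind of implementation (the original sample was shown as a screenshot; supporting types like MainViewModel, IPersonRepository, and Person are assumed here, so the exact session sample may differ):

    using System;
    using System.Windows.Input;

    public class RefreshPeopleCommand : ICommand
    {
        private readonly MainViewModel viewModel;
        private readonly IPersonRepository repository;

        public RefreshPeopleCommand(MainViewModel viewModel,
                                    IPersonRepository repository)
        {
            this.viewModel = viewModel;
            this.repository = repository;
        }

        // Plumbing required by the ICommand interface
        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            return true;
        }

        public void Execute(object parameter)
        {
            // The only line we actually care about
            viewModel.People = repository.GetPeople();
        }
    }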
As we can see, there is quite a bit of code here, but we really only care about the last line (that assigns to the People property of the ViewModel). The rest is plumbing.
Compare this to using a DelegateCommand from the Prism framework:
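Something like this (a sketch assuming the same People property and repository as above):

    // One line of intent in the ViewModel constructor --
    // no custom command class needed
    public MainViewModel(IPersonRepository repository)
    {
        this.repository = repository;
        RefreshPeopleCommand = new DelegateCommand(
            () => People = repository.GetPeople());
    }

    public DelegateCommand RefreshPeopleCommand { get; private set; }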
This allows us to do the same thing (assign a value to the People property), but we can do this with just one line of code (the DelegateCommand constructor) instead of needing to create a custom class. The parameter of the DelegateCommand constructor is the "Execute" we want to perform ("Execute" is part of the ICommand interface).
ICommand has another method: "CanExecute". This returns true or false depending on whether the command can be executed. In our example, this always returns "true". But sometimes this is more complicated -- such as if we need to check a property or run some code. The DelegateCommand constructor takes a second parameter which is a delegate for "CanExecute". If you do not provide one, then "CanExecute" is always true.
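For example, with a hypothetical IsBusy guard property:

    // The second constructor parameter supplies "CanExecute"
    RefreshPeopleCommand = new DelegateCommand(
        () => People = repository.GetPeople(),
        () => !IsBusy);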
As you can see, DelegateCommand makes our code much more compact. And if we don't want to use a lambda expression (or if our Execute is several lines of code), then we can use a standard delegate method instead. The result is that it is easier to see the intent of our code. Instead of looking through a custom command object for the Execute method, we have the code right here in the constructor (or a pointer to it in the case of a named delegate).
When you have a number of commands in the application, this makes a huge difference in the amount of code that needs to be written as well as the readability of that code.
NotificationObject
NotificationObject is another class that is provided by the Prism framework. This class implements the INotifyPropertyChanged interface. This interface needs to be implemented in order for data binding to work as expected in XAML. The interface itself is not hard to implement (just a few lines of boiler-plate code). The difference between NotificationObject and manual implementation has to do with the RaisePropertyChanged methods that are available.
When implementing INotifyPropertyChanged, you generally make a call to RaisePropertyChanged in the setter for a Property, such as this:
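Something like this (LastName is just an illustrative property):

    private string lastName;

    // Manual implementation: the property name is a magic string
    public string LastName
    {
        get { return lastName; }
        set
        {
            lastName = value;
            RaisePropertyChanged("LastName");
        }
    }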
The RaisePropertyChanged method will alert the UI that the underlying property has been updated and that the UI should refresh the binding. Notice that this version of RaisePropertyChanged takes a string parameter.
In contrast, NotificationObject provides a RaisePropertyChanged method that takes an Expression. This lets us write code like this:
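Using the same illustrative property:

    private string lastName;

    // NotificationObject's overload takes an expression
    // instead of a string
    public string LastName
    {
        get { return lastName; }
        set
        {
            lastName = value;
            RaisePropertyChanged(() => LastName);
        }
    }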
Your first thought might be, "I don't see much of a difference." But there is. Instead of using a string, we use an expression with the actual Property. This means that we get IntelliSense when we write the call. It also means we will get a compiler error if we type the property name incorrectly. And most importantly, if we rename our property, our refactoring tools will automatically update the RaisePropertyChanged call as well. Using properties over strings is definitely the way to go.
Nothing You Can't Do Yourself
I had an interesting conversation with someone about Prism. He asked me what I liked about it, and I told him that I really liked the DelegateCommand. His response was that you could create a DelegateCommand object yourself -- it's only about 15 lines of code. And he is absolutely correct.
Prism doesn't perform any magic. It consists of a set of helper classes and methods that you could implement yourself. But we don't have to. The Patterns & Practices team has implemented these for us and put them together in a package that is easy to use.
If you don't like any of the implementations, you are free to change them. Prism comes with all of the source code (including unit tests). So you can enhance or re-implement any of the features. My experience is that the features are well-implemented and have met my needs.
What About Windows 8?
My one disappointment around Prism is that it looks like there won't be an implementation for Windows Store Applications. Prism covers a lot of platforms: WPF, Silverlight 5, and Windows Phone 7.5. The latest rumors are that there will be a different guidance package from the Patterns & Practices team for Windows Store Applications, and I have not been able to hunt down a beta or a release date at this point.
If you are developing Windows 8 desktop applications with WPF, then Prism will work just fine. But for Windows Store Applications, we will need to wait for another solution.
[UPDATE: There has been an announcement regarding guidance for Windows Store Apps. More info: Update on Windows 8 Guidance]
Wrap Up
As I mentioned, my opinion of Prism has not changed much from my initial impressions of working with it. It has been a very good framework for my current project, and I will keep it in mind for future projects. If you are working on fairly straight-forward applications or applications with just a few screens, then Prism may be more than you need. But if you are working on modular or enterprise-level applications, it may be just what the doctor ordered.
Happy Coding!
Book Review: Professional .NET Framework 2.0
So, I'm a bit behind on commenting on the tech books that I've read recently. The first is Professional .NET Framework 2.0 by Joe Duffy (Amazon link). Note: this book is from 2006 and is currently out-of-print; fortunately, there are several sellers in the Amazon marketplace who have new or used copies.
Now, it may seem strange to be reading a .NET 2.0 book from 2006 in 2012, and I would normally agree with that thought. But this book has a few key advantages over other books. It was recommended to me as a way to "fill in some gaps", and it has done just that. I'm not really going to review the entire book here, just provide some comments and samples on what made this valuable to me.
Big Plus - Joe Duffy
One of the big advantages to this book is its author. Joe Duffy was a program manager on the CLR (Common Language Runtime) team at Microsoft. This means that he knows the internals of the CLR and not just the external APIs. And this book provides plenty of insight into the internals, as we'll soon see.
Looking into IL
The biggest learning that I took away from this book is a better understanding of IL (Intermediate Language). This is what the compiler generates from your C# or VB.NET code (or any other .NET language). It is not machine code, but it has more in common with assembly language than with the third-generation and fourth-generation (3GL/4GL) languages that business programmers generally use. The IL will get Just-In-Time compiled (jitted) to the instructions that the machine will actually run.
Now, I've known about IL for a long time, and I've seen people who look at it. Historically, I haven't been one to dig into it myself. But after having Joe Duffy explain IL and how language components get transformed into IL, I've been more interested in digging into the IL to see how things are generated.
Setting up ILDASM
The .NET SDK ships with a way to look at the IL -- this is ildasm.exe (for IL Disassembler). Here's where it's found on my 64-bit machine:
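The path was originally shown as a screenshot; on a typical 64-bit machine with the VS2012-era SDK it is something like this (the version segments may differ on your machine):

    C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\ildasm.exe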
At this location, you'll find a file called "ildasm.exe". You can run this directly, and then use the File -> Open dialog to locate a .NET assembly. But what's even better is to add this as an External Tool in Visual Studio. Then you can just run it against whatever project you're currently working on.
To configure this in Visual Studio (I'm using VS2012, but it's the same in earlier versions), just go to the "Tools" menu and select "External Tools...". From here, you can Add a new tool. When you're done, it should look something like this:
For the Title, you can use whatever you'd like; this is just what will appear on the Tools menu. The command should be the fully-qualified location of the "ildasm.exe" file.
The "Arguments" and "Initial directory" settings are a little more interesting. Notice the arrow to the right of the text box. This gives you a number of variables that you can choose from. For the "Arguments" property, I chose 2 variables. The first "$(TargetName)" is the name of the assembly that is currently active. Note that this does not include the file extension, just the name. Because of this, we also need "$(TargetExt)". This will give us ".exe" or ".dll" depending on what type of assembly we are building.
The "Initial directory" setting tells us where the assembly (from the argument) is located. In this case, "$(TargetDir)" means the output location of the file. So, if we are in the Debug build, it will be "<project path>\bin\Debug\".
After clicking "OK", we now have a new item on our tools menu.
Using ILDASM
So, now that we have Visual Studio configured so that it's easy to get to ildasm, let's see what this looks like. I created a very simple console application that just outputs "Hello, World!" (using Console.WriteLine). I don't think I need to reproduce the code here ;-)
With this project open, I just need to go to Tools -> ILDasm, and it opens up the compiled assembly. Drilling down a bit, we get to the Main method:
The name of my application is "ILDasmTest.exe", and we traverse the tree to get to the Main method. If we double-click on the Main method, we get to the disassembled IL:
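For a Debug build, the disassembly looks like this (a reconstruction matching the walkthrough below; exact offsets can vary):

    .method private hidebysig static void Main(string[] args) cil managed
    {
      .entrypoint
      // Code size       13 (0xd)
      .maxstack  8
      IL_0000:  nop
      IL_0001:  ldstr      "Hello, World!"
      IL_0006:  call       void [mscorlib]System.Console::WriteLine(string)
      IL_000b:  nop
      IL_000c:  ret
    } // end of method Program::Main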
Here, we see the IL for the most basic console application. Going through the instructions, we have the following. First, "nop" is a "no op", meaning no operation is performed. The compiler emits this so that the debugger can set a breakpoint on a part of the code that doesn't perform an operation (for example, a breakpoint on the opening curly-brace of the method).
Next, we have "ldstr" which loads the string "Hello, World!" onto the stack. Next, we have a "call" that runs the Console.WriteLine method. Note that the IL has the fully-qualified name for this, including the assembly (mscorlib) and the namespace (System). We can see that this method takes a string argument, and it gets this from the top of the stack. Since we just pushed "Hello, World!" onto the stack, this string will be used for the parameter.
Next we have another "nop", for the closing curly-brace, and finally the "ret" that returns from the method.
Pretty simple. Now let's look at the same thing compiled in "Release" mode:
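A reconstruction of the Release-mode output:

    .method private hidebysig static void Main(string[] args) cil managed
    {
      .entrypoint
      // Code size       11 (0xb)
      .maxstack  8
      IL_0000:  ldstr      "Hello, World!"
      IL_0005:  call       void [mscorlib]System.Console::WriteLine(string)
      IL_000a:  ret
    } // end of method Program::Main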
Notice that we've gone from 5 instructions down to 3; the "nop" instructions are gone. The compiler removed them since we can't breakpoint into release code.
More Fun with IL
So, after I started looking just a little bit at IL, I've become much more curious about how the compiler generates IL. For example, I like to see what code is generated by the "shortcuts" that the C# language and compiler give us.
Here are 2 equivalent blocks of code:
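Something like the following (the original samples were screenshots; the data here is hypothetical):

    // Block 1: using the enumerator directly
    List<string> names = new List<string> { "John", "Mary" };
    IEnumerator<string> enumerator = names.GetEnumerator();
    while (enumerator.MoveNext())
    {
        Console.WriteLine(enumerator.Current);
    }

    // Block 2: the "foreach" shortcut
    foreach (string name in names)
    {
        Console.WriteLine(name);
    }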
Now, the "foreach" loop will work on any object that implements the IEnumerable interface. That interface has a single method ("GetEnumerator"). (Note: if you want more information on IEnumerable, you can reference the series of articles at Next, Please! A Closer Look at IEnumerable.)
So, the first block of code shows how to use the Enumerator directly (with MoveNext and Current), and the second uses the "foreach" construct. We would expect that these two blocks would compile to the same IL.
But if we look at the IL, we find that they are not compiled to the same code. They are close, but when "foreach" is compiled, it is automatically wrapped in a try/finally block. Try this yourself to see.
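In C# terms, the compiler expands the "foreach" into roughly this (simplified; the expansion differs slightly for non-generic enumerators):

    // Approximately what "foreach" compiles to
    IEnumerator<string> e = names.GetEnumerator();
    try
    {
        while (e.MoveNext())
        {
            Console.WriteLine(e.Current);
        }
    }
    finally
    {
        if (e != null)
        {
            e.Dispose();
        }
    }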
And so started my journey into IL and the fun behind it. Some other interesting places to look include anonymous types (to see that the compiler does create a type with an internal name), delegates (to see what a delegate is behind the scenes), lambda expressions, and many others. You will probably come across some compiler optimizations that surprise you as well.
Note: I won't include any links to IL references because I haven't found any good ones. If you want the specification, you can search for "ECMA 335", with a caveat that this is the 500+ page specification document (not overly readable). Professional .NET Framework 2.0 provides a compact reference for IL (along with a number of examples).
[Update: I found an online reference. It's not very descriptive, but it has the CIL Instructions: http://www.en.csharp-online.net/CIL_Instruction_Set.]
Wrap Up
So, even though Professional .NET Framework 2.0 is 6 years old, I found many useful things in it. There are plenty of good topics (including a look at the Garbage Collector and also multi-threading). As you can probably tell, I've become comfortable looking at the IL -- something that I never would have done before. An appendix is dedicated to the IL operations so that you can parse the output of ildasm yourself. Once you start looking at the IL, you come across all sorts of interesting things that will give you a better understanding of how the code you write gets transformed into something closer to what the machine will actually run.
This is not a resource for a beginner or someone new to C#, but it is a great way to take a deeper dive into some of the fundamentals of the .NET Framework.
Happy Coding!
Sunday, November 25, 2012
Drawbacks to Abstraction
Programming to abstractions is an important principle to keep in mind. I have shown in a number of demos (including IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces, Dependency Injection: A Practical Introduction, and T, Earl Grey, Hot: Generics in .NET) how programming to abstractions can make our code more extensible, maintainable, and testable.
Add Abstraction As You Need It
I am a big fan of using abstractions (specifically Interfaces). But I'm also a proponent of adding abstractions only as you need them.
As an example, over the past couple of weeks, I've been working on a project that contained a confirmation dialog. The dialog itself was pretty simple -- just a modal dialog that let you add a message, text for the OK and Cancel buttons, and a callback for when the dialog was closed. To make things more interesting, since the project uses MVVM, we end up making the call to this dialog from the View Model through an Interaction Trigger. What this creates is a couple of layers of code in order to maintain good separation between the UI (the View) and the Presentation Logic (in the View Model).
This confirmation dialog code has been working for a while, but it had a drawback -- it wasn't easily unit testable due to the fact that the interaction trigger called into the View layer. The result is that as I added confirmation messages to various parts of the code, the unit tests would stop working.
I was able to fix this problem by adding an interface (IConfirmationDialog) that had the methods to show the dialog and fire the callback. I used Property Injection for the dialog so that I would have the "real" dialog as the default behavior. But in my unit tests, I injected a mock dialog based on the interface. This let me keep my unit tests intact without changing the default run-time behavior of the dialog. If you'd like to see a specific example of using Property Injection with mocks and unit tests, see the samples in Dependency Injection: A Practical Introduction.
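A sketch of the shape of that solution (the interface members and class names are assumed for illustration):

    public interface IConfirmationDialog
    {
        // Show the dialog and fire the callback when it closes
        void Show(string message, string okText, string cancelText,
                  Action<bool> closedCallback);
    }

    public class MainViewModel
    {
        private IConfirmationDialog confirmationDialog;

        // Property Injection: the "real" dialog is the default,
        // but unit tests can assign a mock based on the interface
        public IConfirmationDialog ConfirmationDialog
        {
            get
            {
                if (confirmationDialog == null)
                    confirmationDialog = new ConfirmationDialog();
                return confirmationDialog;
            }
            set { confirmationDialog = value; }
        }
    }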
The Drawbacks
When selecting the right tool for the job, we need to know the strengths and the weaknesses of each tool. This lets us make intelligent choices to confirm that the benefits outweigh the drawbacks for whatever tool we choose. When working with abstractions and interfaces, this is no different.
We've talked about the advantages of using abstraction, but what about the drawbacks? The most obvious is complexity. Whenever we add another layer to our code, we make it a bit more complex to navigate. Let's compare two examples (taken from the Dependency Injection samples).
Navigation With a Concrete Type
We'll start by looking at a class that uses a concrete type (this is found in WithoutDependencyInjection.sln in the NoDI.Presentation project).
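The relevant part of that class looks roughly like this (a reconstruction; member names follow the description below):

    // ViewModel command code coupled to a concrete repository type
    public PersonServiceRepository Repository { get; set; }

    public void Execute(object parameter)
    {
        People = Repository.GetPeople();
    }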
Here, we have a property (Repository) that is using a concrete type: PersonServiceRepository. Then down in the Execute method, we call the GetPeople method of this object. If we want to navigate to this method, we just put the cursor in GetPeople and then hit F12 (or right-click and select "Go To Definition"). This takes us to the implementation code in the PersonServiceRepository class:
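Something like this (the WCF client proxy type name is assumed):

    public class PersonServiceRepository
    {
        private PersonServiceClient proxy = new PersonServiceClient();

        public IEnumerable<Person> GetPeople()
        {
            // Simple pass-through to the WCF service
            return proxy.GetPeople();
        }
    }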
This implementation code is fairly simple since it passes the call through to a WCF service, but we can see the actual implementation of the concrete type.
Navigation With An Interface
If we use an interface rather than a concrete type, then navigation gets a little trickier. The following code is in DependencyInjection.sln in the DI.Presentation project:
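Again a reconstruction:

    // Same command code, but now programmed to an abstraction
    public IPersonRepository Repository { get; set; }

    public void Execute(object parameter)
    {
        People = Repository.GetPeople();
    }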
This code is very similar to the previous sample. The main difference is that the Repository variable now refers to an interface (IPersonRepository) rather than the concrete type. You can see that the code in the Execute method is exactly the same.
But if we try to navigate to GetPeople using F12 (as we did above), we get quite a different outcome:
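F12 drops us into something like this -- the declaration, not an implementation (a sketch of the interface):

    public interface IPersonRepository
    {
        IEnumerable<Person> GetPeople();
    }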
Instead of being navigated to an implementation, we are taken to the method declaration in the interface. In most situations, this is not what we are looking for. Instead, we want to see the implementation. Since we have decoupled our code, Visual Studio cannot easily determine the concrete type, so we are left with only the abstraction.
Mitigating the Navigation Issue
We can mitigate the navigation issue by just doing a search rather than "Go To Definition". For this, we just need to double-click on "GetPeople" (so that it is highlighted), and then use Ctrl+Shift+F (which is the equivalent of "Find in Files"). This brings up the search box already populated:
If we click "Find All", then we can see the various declarations, implementations, and calls for this method:
This snippet shows the interface declaration (IPersonRepository) as well as 2 implementations (PersonCSVRepository and PersonServiceRepository). If we double-click on the PersonCSVRepository, then we are taken to that implementation code:
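Roughly like this (the file name, parsing, and Person properties are all illustrative; the real sample may differ):

    public class PersonCSVRepository : IPersonRepository
    {
        public IEnumerable<Person> GetPeople()
        {
            var people = new List<Person>();
            foreach (string line in File.ReadAllLines("People.csv"))
            {
                // Split each CSV record into a Person
                string[] fields = line.Split(',');
                people.Add(new Person
                           {
                               FirstName = fields[0],
                               LastName = fields[1]
                           });
            }
            return people;
        }
    }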
This code shows the GetPeople implementation for reading from a CSV file. So, there are a few extra steps to finding the implementation we were looking for, but if we know our tools (namely, Visual Studio) and how to use them, then we can still get the information we need.
There are other options as well. Instead of using "Find In Files", we can use "Find All References" (Ctrl+K, R) which will give us similar (but not exactly the same) results as a search. Also, if you have a refactoring tool installed (like RefactorPro or Resharper), then you most likely have an even faster way to search for a particular method. It always pays to have a good understanding of your development environment.
Wrap Up
Everything has a cost. In the case of programming to abstractions, that cost is added complexity and indirection when moving through source files. The benefits include extensibility, maintainability, and testability.
As mentioned, I'm a big fan of adding abstractions as you need them. Once you become familiar with your environment (the types of problems you are trying to solve and the parts that are likely to change), then you can make better up-front decisions about where to add these types of abstractions. Until then, start with the concrete types and add the abstractions when needed. Note: many times you may find yourself needing abstractions immediately in order to create good unit tests, but this will also vary depending on the extent of your testing.
For more information on finding the right level of abstraction, refer to Abstraction: The Goldilocks Principle. In many environments, the benefits will outweigh the costs, but we need to make a conscious decision about where abstractions are appropriate in each particular situation.
Happy Coding!
Monday, November 5, 2012
Coming Soon: Desert Code Camp
Desert Code Camp is coming up on November 17th, 2012: http://nov2012.desertcodecamp.com/. If you are in the Phoenix area, be sure to stop by if you can (even if just for part of the day). Code Camps are a great place to meet other developers and talk about what's working and what's not. Plus, it's a full day of free training from folks in the developer community.
I've got 3 sessions scheduled:
T, Earl Grey, Hot: Generics in .NET
Dependency Injection: A Practical Introduction
IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces
Hope to see you there.
Happy Coding!
Sunday, October 21, 2012
Unit Testing: A Journey
In the last article (Abstraction: The Goldilocks Principle), we took a look at some steps for determining the right level of abstraction for an application. We skipped a big reason why we might want to add abstractions to our code: testability. Some developers say that unit testing is optional (I used to be in this camp). But most developers agree that in order to be a true professional, we must unit test our code (and this is where I am now).
But this leads us to a question: Should we modify our perfectly good code in order to make it easily testable? I'll approach this from another direction: one of the qualities of "good code" is code that is easily testable.
We'll loop back to why I take this approach in just a moment. First, I want to describe my personal journey with unit testing.
A Rocky Start
Several years back, I didn't think that I needed unit testing. I was working on a very agile team of developers (not formally "big A" Agile, but we self-organized and followed most of the Agile principles). We had a very good relationship with our end users, we had a tight development cycle (which would naturally vary depending on the size of the application), we had a good deployment system, and we had a quick turnaround on fixes after an application was deployed. With a small team, we supported a large number of applications (12 developers with 100 applications) that covered every line of business in the company. In short, we were [expletive deleted] good.
We weren't using unit testing. And quite honestly, we didn't really feel like we were missing anything by not having it. But companies reorganize (like they often do), and our team became part of a larger organization of five combined teams. Some of these other teams did not have the same level of productivity that we had. Someone had the idea that if we all did unit testing, then our code would get instantly better.
So, I had a knee-jerk reaction (I had a lot of those in the past; I've mellowed a lot since then). The problem is that unit testing does not make bad code into good code. Unit testing was being proposed as a "silver bullet". So, I fought against it -- not because unit testing is a bad idea, but because it was presented as a cure-all.
Warming Up
After I have a knee-jerk reaction, my next step is to research the topic to see if I can support my position. And my position was not that unit testing was bad (I knew that unit testing was a good idea); my position was that unit testing does not instantly make bad code into good code. So, I started reading. One of the books I read was The Art of Unit Testing by Roy Osherove. This is an interesting read, but the book is not designed to sell you on the topic of unit testing; it is designed to show you different techniques and assumes that you have already bought into the benefits of unit testing.
Not a Silver Bullet
I pulled passages out of the book that confirmed my position: that if you have developers who are writing bad code, then unit testing will not fix that. Even worse, it is very easy to write bad unit tests (that pass) that give you the false impression that the code is working as intended.
But Good Practice
On a personal basis, I was looking into unit testing to see if it was a tool that I wanted to add to my tool box. I had heard quite a few really smart developers talk about unit testing and Test Driven Development (TDD), so I knew that it was something that I wanted to explore further.
TDD is a topic that really intrigued me. It sounded really good, but I was skeptical about how practical it was to follow TDD in a complex application -- especially one that is using a business object framework or application framework that supplies a lot of the plumbing code.
Actual Unit Testing
I've since moved on and have used unit testing on several projects. Now that I am out of the environment that caused the knee-jerk reaction, I fully embrace unit testing as part of my development activities. And I have to say that I see the benefits frequently.
Unit Tests have helped me to manage a code base with complex business rules. I have several hundred tests that I run frequently. As I refactor code (break up methods and classes; rearrange abstractions; inject new dependencies), I can make sure that I didn't alter the core functionality of my code. A quick "Run Tests" shows that my refactoring did not break anything. When I add new functionality, I can quickly make sure that I don't break any existing functionality.
And there are times when I do break existing functionality. But when I do, I get immediate feedback; I'm not waiting for a tester to find a bug in a feature that I've already checked off. Sometimes, the tests themselves are broken and need to be fixed (for example, a new dependency was introduced in the production code that the test code is not aware of).
I'm not quite at TDD yet, but I can feel myself getting very close. There are a few times that I've been writing unit tests for a function that I just added. As I'm writing out the tests, I go through several test cases and sometimes come across a scenario that I did not code for. In that case, I'll go ahead and write the test (knowing that it will fail), and then go fix the code. So, I've got little bits of TDD poking through my unit testing process. But since I'm writing the code first, I do end up with some areas that are difficult to unit test. I usually end up refactoring these areas into something more testable. With a test-first approach, I might have gotten it "right" the first time.
Over the next couple weeks, I'll be diving into a personal project where I'm going to use TDD to see how it will work in the types of apps that I normally build.
One thing to note: my original position has not changed. Unit Testing is a wonderful tool, and I believe that all developers should be unit testing at some level (hopefully, a level that covers the majority of the code). But unit testing will not suddenly fix broken code nor will it fix a broken development process.
TDD
Now that I've seen the benefits of unit testing firsthand, I've become a big proponent of it. And even though I didn't see the need for it several years back, I look back now and see how it could have been useful in those projects. This would include everything from verifying that automated messages are sent out at the right times to ensuring that business rule parameters are applied properly to verifying that a "cleaning" procedure on user data was running properly. With these types of tests in place, I could have made feature changes with greater confidence.
Advantages
When a developer is using TDD, he/she takes the Red-Green-Refactor approach. There are plenty of books and articles that describe TDD, so I won't go into the details. Here's the short version: The first step (Red) is to write a failing test based on a function that is required in the application. The next step (Green) is to write just enough code for the test to pass. The last step (Refactor) comes once you're a bit further into the process, but it is the time to re-arrange and/or abstract the code so that the pieces fit together.
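Here's one turn of that cycle as a minimal sketch (the PasswordPolicy class is a hypothetical example, not from a real project):

using Microsoft.VisualStudio.TestTools.UnitTesting;

// RED: write this test first; it fails (or won't even compile) until
// PasswordPolicy gets a Validate method.
[TestClass]
public class PasswordPolicyTests
{
    [TestMethod]
    public void Validate_RejectsPasswordsShorterThanEightCharacters()
    {
        var policy = new PasswordPolicy();
        Assert.IsFalse(policy.Validate("short"));
    }
}

// GREEN: just enough production code to make the test pass.
public class PasswordPolicy
{
    public bool Validate(string password)
    {
        return password.Length >= 8;
    }
}

// REFACTOR: once more rules pile up, extract the magic number into a
// named constant or an injected setting -- with the tests as a safety net.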
The idea is that we only write enough code to meet the functionality of our application (and no more). This gets us used to writing small units of code. And when we start composing these small units of code into larger components, we need to maintain the testability (to make sure our tests keep working). This encourages us to stay away from tight coupling and to put in abstraction layers as we need them (and not before). In addition, we'll be more likely to write small classes that work in isolation. And when we do need to compose the classes, we can more easily adhere to the S.O.L.I.D. principles.
Back to the Question
So this brings us back to the question we asked at the beginning: Should we modify our perfectly good code in order to make it easily testable? And the answer is that if we write our tests first, then well-abstracted testable code will be the natural result.
I've done quite a bit of reading and other study of Unit Testing. One of the things that I didn't like about some of the approaches is when someone "cracked open" a class in order to test it. Because the unit they wanted to test was inaccessible (perhaps due to a "protected" access modifier), they wrote a test class that wrapped the code in order to make those members visible. I never liked the idea of writing test classes that wrapped the production classes -- this means that you aren't really testing your production code.
However, if we take the approach of test-first development (with TDD or another methodology), then the classes we build will be inherently testable. And if a member needs to be injected or exposed for testing, that need grows organically along with the corresponding tests.
Abstractions
And back to the last article about abstractions: if we approach our code from the test side, we will add abstractions naturally as we need them. If we need to compose two objects, we can use interfaces to ensure loose-coupling and make it easier to mock dependencies in our tests. If we find that we need to inject a method from another class, we can use delegates (and maybe the Strategy pattern) to keep the separation that we need to maintain our tests.
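As a hedged sketch of what that looks like in practice (all of these names are hypothetical), an interface lets the test substitute a fake with no database and no network:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class Person { public string Name { get; set; } }

// The abstraction: the view model depends on this interface,
// not on a concrete data access class.
public interface IPersonRepository
{
    List<Person> GetPeople();
}

public class PeopleViewModel
{
    private readonly IPersonRepository _repository;
    public List<Person> People { get; private set; }

    // The repository is injected, so tests can pass in a fake.
    public PeopleViewModel(IPersonRepository repository)
    {
        _repository = repository;
    }

    public void Load() { People = _repository.GetPeople(); }
}

// A hand-rolled fake: predictable data, instant execution.
public class FakePersonRepository : IPersonRepository
{
    public List<Person> GetPeople()
    {
        return new List<Person> { new Person { Name = "Test User" } };
    }
}

[TestClass]
public class PeopleViewModelTests
{
    [TestMethod]
    public void Load_PopulatesPeopleFromRepository()
    {
        var viewModel = new PeopleViewModel(new FakePersonRepository());

        viewModel.Load();

        Assert.AreEqual(1, viewModel.People.Count);
    }
}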
Wrap Up
Whole books have been written on unit testing (by folks with much more experience than I have). The goal here was to describe my personal journey toward making unit tests a part of my regular coding activities. I fought hard against it in one environment; this made me study hard so that I understood the domain; this led to a better understanding of the techniques and pitfalls involved. There are three or four key areas that I have changed my thinking on over the years (one of which is the necessity of unit testing). When I change my mind on a topic, it's usually due to a deep-dive investigation and much thought. Unit testing was no different.
You may be working in a group that is not currently unit testing. You may also think that you are doing okay without it. You may even think that it will take more time to implement. I've been there. Start looking at the cost/benefit studies that have been done on unit testing. The reality is that overall development time is reduced -- remember, development is not simply slapping together new code; it is also maintenance and bug-hunting. I wish that I had done this much sooner; I know that my past code could have benefited from it.
My journey is not yet complete; it will continue. And I expect that I will be making the transition to TDD for my own projects very soon.
Maybe the other developers on your team don't see the benefits of unit testing, and maybe your manager sees it as an extension to the timeline. It's easy to just go along with the group on these things. But at some point, if we truly want to excel as developers, we need to stop being followers and start being leaders.
Happy Coding!
Saturday, October 20, 2012
Abstraction: The Goldilocks Principle
Abstractions are an important part of object-oriented programming. One of the primary principles is to program to an abstraction rather than a concrete type. But that leads to a question: What is the right level of abstraction for our applications? If we have too much abstraction, then our applications can become difficult to understand and maintain. If we have too little abstraction, then our applications can become difficult to understand and maintain. This means that there is some "sweet spot" right in the middle. We'll call this the Goldilocks level of abstraction: "Just Right."
So, what is the right level? Unfortunately, there are no hard-and-fast rules that are appropriate for all development scenarios. Like so many other decisions that we have to make in software development, the correct answer is "it depends." No one really likes this answer (especially when you're just getting started and are looking for guidance), but it's the reality of our field.
Here are a few steps that we can use to get started.
Step 1: Know Your Tools
The first step to figuring out what abstractions to use is to understand what types of abstractions are available. This is what I focus on in my technical presentations. I speak about delegates, design patterns, interfaces, generics, and dependency injection (among other things). You can get information on these from my website: http://www.jeremybytes.com/Demos.aspx. The goal of these presentations is to provide an overview of the technologies and how they are used. This includes examples such as how to use delegates to implement the Strategy pattern or how to use interfaces with the Repository and Decorator patterns.
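For example, the delegate-based Strategy pattern can be sketched in a few lines (the names here are hypothetical, not from the session materials):

using System;
using System.Collections.Generic;

public class Person { public string Name { get; set; } }

// The sort strategy is a delegate injected by the caller, so the
// report class never changes when a new ordering is needed.
public class PeopleReport
{
    private readonly Comparison<Person> _sortStrategy;

    public PeopleReport(Comparison<Person> sortStrategy)
    {
        _sortStrategy = sortStrategy;
    }

    public void Print(List<Person> people)
    {
        people.Sort(_sortStrategy);
        people.ForEach(p => Console.WriteLine(p.Name));
    }
}

// Usage: swap strategies without touching PeopleReport.
// var byName = new PeopleReport((a, b) => a.Name.CompareTo(b.Name));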
We need to understand our tools before we can use them. I try to show these technologies in such a way that if we run into them in someone else's code, they don't just look like magic incantations. If we can start looking at other people's code, we can get a better feel for how these tools are used in the real world.
A while back, I wrote an article about this: Design Patterns: Understand Your Tools. Although that pertains specifically to design patterns, the principle extends to all of the tools we use. We need to know the benefits and limitations to everything in our toolbox. Only then can we make an informed choice.
I remember seeing a woodworking show on television where the host used only a router (a carpentry router, not a network router). He showed interesting and unintended uses for the router as he used it as a cutting tool and a shaping tool and a sanding tool. He pushed the tool to its limits. But at the same time, he was limiting himself. He could use it as a cutting tool, but not as effectively as a band saw or a jigsaw or a table saw. I understand why this may be appealing: power tools are expensive. But in the software world, many of the "tools" that we use are simply ideas (such as design patterns or delegates or interfaces). There isn't a monetary investment required, but we do need to make a time investment to learn them.
Step 2: Know Your Environment
The next step is to understand your environment. Whether we are working for a company writing business applications, doing custom builds for customers, or writing shrink-wrap software, this means that we need to understand our users and the system requirements. We need to know what things are likely to change and which are not.
As an example, I worked for many years building line-of-business applications for a company that used Microsoft SQL Server. At that company, we always used SQL Server; all 20 or so of the applications that I worked on used it as the data store. Because of this, we did not spend time abstracting the data access code so that it could easily use a different data store. Note: we did have proper layering and isolation of the data access code (meaning, all of our database calls were isolated to specific data access methods in a specific layer of the application).
On the other hand, I worked on several applications that used business rules for processing data. These rules were volatile and would change frequently. Because of this, we created rule interfaces that made it very easy to plug-in new rule types.
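The shape of that abstraction was something like this (a simplified, hypothetical version; the real interfaces were specific to those applications):

// Each volatile business rule is its own small class behind a common
// interface, so new rule types plug in without changing the pipeline.
public class DataRecord { public string Value { get; set; } }

public interface IProcessingRule
{
    bool AppliesTo(DataRecord record);
    void Apply(DataRecord record);
}

public class TrimWhitespaceRule : IProcessingRule
{
    public bool AppliesTo(DataRecord record) { return record.Value != null; }
    public void Apply(DataRecord record) { record.Value = record.Value.Trim(); }
}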
I should mention that these applications could have benefited from abstraction of the database layer to facilitate unit testing. We were not doing unit testing. In my next article, I will talk more about unit testing (and my particular history with it), and why it really is something we should all be doing.
Step 3: Learn the Smells
A common term that we hear in the developer community is "code smells". This basically comes with experience. As a developer, you look at a bit of code and something doesn't "smell" right -- it just feels off. Sometimes you can't put your finger on something specific; there's just something that makes you uncomfortable.
There are a couple of ways to learn code smells. The preferred way is through mentorship. Find a developer with more experience than you and learn from him/her. As a young developer, I had access to some really smart people on my development team. And by listening to them, I saved myself a lot of pain over the years. If you can learn from someone else, then be sure to take advantage of it.
The less preferred (but very effective) way of learning code smells is through trial and error. I had plenty of this as a young developer as well. I took approaches to applications that I later regretted. And in that environment, I got to live with that regret -- whenever we released a piece of software, we also became primary support for that software. This is a great way to encourage developers to produce highly stable code that is really "done" before release. While these applications were fully functional from a user standpoint, they were more difficult to maintain and extend than I would have liked. But that's another reality of software development: constant learning. If we don't look at code that we wrote six months ago and ask "What was I thinking?", then we probably haven't learned anything in the meantime.
Step 4: Abstract as You Need It
I've been burned by poorly designed applications in the past -- abstractions that added complexity to the application without very much (if any) benefit. As a result, my initial reaction is to lean toward low-abstraction as an initial state. I was happy to come across the advice to "add abstraction as you need it". This is an extremely good approach if you don't (yet) know the environment or what things are likely to change.
As an example, let's go back to database abstractions. It turns out that while working at the company that used SQL Server, I had an application that needed to convert from SQL Server to Oracle. The Oracle database was part of a third-party vendor product. For this application, I added a fully-abstracted repository that was able to talk to either SQL Server or the Oracle database. But I just did this for one application when I needed it.
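In skeleton form (hypothetical names; provider details omitted), the abstraction looked something like this:

public class Customer { public int Id { get; set; } public string Name { get; set; } }

// Application code depends only on this interface; choosing SQL Server
// or Oracle becomes a composition decision, not a code change.
public interface ICustomerRepository
{
    Customer GetCustomer(int id);
}

public class SqlServerCustomerRepository : ICustomerRepository
{
    public Customer GetCustomer(int id)
    {
        // System.Data.SqlClient code would live here.
        throw new System.NotImplementedException();
    }
}

public class OracleCustomerRepository : ICustomerRepository
{
    public Customer GetCustomer(int id)
    {
        // The Oracle ADO.NET provider code would live here.
        throw new System.NotImplementedException();
    }
}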
Too Much Abstraction
As mentioned above, too much abstraction can make an application difficult to understand and maintain. I encountered an application that had a very high level of abstraction. There were new objects at each layer (even if the object had exactly the same properties as one in another layer). The result of this abstraction meant that if someone wanted to add a new field to the UI (and have it stored in the database), the developer would need to modify 17 different code files. In addition, much of the application was wired-up at runtime (rather than compile time), meaning that if you missed a change to a file, you didn't find out about it until you got a runtime error. And since the files were extremely decoupled, it was very difficult to hook up the debugger to the appropriate assemblies.
Too Little Abstraction
At the other end of the spectrum, too little abstraction can make an application difficult to understand and maintain. Another application that I encountered had 2600 lines of code in a single method. It was almost impossible to follow the logic (lots of nested if/else conditions in big blocks of code). And figuring out the proper place to make a change was nearly impossible.
"Just Right" Abstraction
My biggest concern as a developer is finding the right balance -- the Goldilocks Principle: not too much, not too little, but "just right". I've been programming professionally for 12 years now, so I've had the benefit of seeing some really good code and some really bad code (as well as writing some really good code and some really bad code).
Depending on what kind of development work we do, we can end up spending a lot of time supporting and maintaining someone else's code. When I'm writing code, I try to think of the person who will be coming after me. And I ask myself a few key questions. Will this abstraction make sense to someone else? Will this abstraction make the code easier or harder to maintain? How does this fit in with the approach used in the rest of this application? If you are working in a team environment, don't be afraid to grab another developer and talk through a couple of different options. Having another perspective can make the decision a lot easier.
The best piece of advice I've heard that helps me write maintainable code: Always assume that the person who has to maintain your code is a homicidal maniac who knows where you live.
One other area that will impact how much abstraction you add to your code is unit testing. Abstraction often helps us isolate code so that we can make more practical tests. I'll be putting down my experiences and thoughts regarding unit testing in the next article. Until then...
Happy Coding!
Thursday, October 11, 2012
Session Spotlight - Dependency Injection: A Practical Introduction
I'll be speaking at SoCal Code Camp (Los Angeles) on October 13 & 14. If you're not signed up yet, head over to the website and let them know that you're coming: http://www.socalcodecamp.com/.
Also, I've got a brand new session: Dependency Injection: A Practical Introduction
How Do We Get Started With Dependency Injection?
One of the big problems with getting started with Dependency Injection (DI) is that there are a lot of different opinions on exactly what DI is and the best ways to use it. At its core, DI is just a set of patterns for adding good abstraction to our code. The objects we create should be focused on doing one thing and doing it well. If there is functionality that is not core to the object, then we "outsource" it to another class. This external class is a dependency (our primary object depends on this other class to provide a functional whole in our application).
Dependency Injection gives us a way to keep these external dependencies separate from our objects. Rather than the object being responsible for creating/managing the dependency, we "inject" it from the outside. This keeps our classes loosely coupled, which gives us good maintainability, extensibility, and testability.
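A minimal constructor-injection sketch (with hypothetical types) looks like this:

using System;

public interface ILogger
{
    void Log(string message);
}

public class ConsoleLogger : ILogger
{
    public void Log(string message) { Console.WriteLine(message); }
}

// OrderService does its one job and "outsources" logging; it never
// news up its own logger, so the coupling stays loose.
public class OrderService
{
    private readonly ILogger _logger;

    public OrderService(ILogger logger)
    {
        if (logger == null) throw new ArgumentNullException("logger");
        _logger = logger;
    }

    public void PlaceOrder(int orderId)
    {
        _logger.Log("Placing order " + orderId);
        // core order logic goes here
    }
}

// Composition root: the one place a concrete logger is chosen.
// var service = new OrderService(new ConsoleLogger());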
In this session, I combine my personal experience using DI with the excellent information provided by Mark Seemann in Dependency Injection in .NET (I posted a review on this book a few weeks back).
DI is an enormous topic. I have picked out a few key areas (like examining tight-coupling and loose-coupling) and design patterns that will give us a good starting point for the world of DI. (As a side note, the sample code also gives a quick example of using the Model-View-ViewModel (MVVM) design pattern.)
If you can't make it out to the SoCal Code Camp, you have another chance to see this session at the Desert Code Camp (in the Phoenix, AZ area) in November. And as always, the code samples and walkthrough are posted on my website.
Hope to see you at an upcoming event.
Happy Coding!
Sunday, September 30, 2012
Session Spotlight - IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces
I'll be speaking at So Cal Code Camp (Los Angeles) on October 13 & 14. If you're not signed up yet, head over to the website to let them know you're coming: http://www.socalcodecamp.com.
Back again: IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces.
Abstraction Through Interfaces
When people talk about interfaces, they often refer to the User Interface -- the screens and controls that allow the user to interact with the application. But "interface" also refers to a very powerful abstraction that lets us add extensibility and loose coupling to our code.
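The .NET framework itself provides the clearest example. A method written against IEnumerable<T> works with any collection, present or future:

using System;
using System.Collections.Generic;

public static class Printer
{
    // This method doesn't care whether it receives an array, a List<T>,
    // or a lazy iterator -- only that the argument is IEnumerable<string>.
    public static void PrintAll(IEnumerable<string> items)
    {
        foreach (var item in items)
        {
            Console.WriteLine(item);
        }
    }
}

// Printer.PrintAll(new[] { "alpha", "beta" });
// Printer.PrintAll(new List<string> { "gamma", "delta" });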
My first encounter with interfaces was in Delphi (Object Pascal) as a junior developer. I understood what they were from a technical standpoint, but I didn't understand why I would want to use them. We went to the Borland conference every year; and as a new developer, I took the opportunity to absorb as much information as I could, even if I didn't understand most of it (this is also a great way to use Code Camp -- to experience technologies you haven't taken the time to look into yet). I was very excited because there was a session on Interfaces at the conference.
"Great," I thought, "I'll go and get some practical examples of how to use interfaces and find out why I would want to use them." So, I get to the session, sit down, and grab my notepad -- ready to spend the next hour getting a practical introduction.
The speaker gets up and starts the presentation. "So, let's say that we have a Foo class. And we also have an IBar interface."
"Noooooooooooooooo!" I screamed (well, screamed inwardly anyway). "I need practical examples. You can't use Foo / Bar / Baz examples." But that's the way it was, and I didn't get anything new out of the session. (I also talked to some of my senior developer coworkers who attended, and they didn't get anything out of it either.)
It was several more years before I had a good grounding in object oriented design and the hows and whys of abstraction. The goal of "IEnumerable, ISaveable, IDontGetIt" is to give you a jump-start to understanding interfaces. We use real examples from the .NET framework and from actual application abstractions that I have coded in my professional career.
In addition to the So Cal Code Camp on Oct 13/14, I'll also be presenting this session at the Desert Code Camp (Phoenix) on Nov 17. And if you're quick, you can also see me present this information this week at a couple of user groups.
Hope to see you at an upcoming event.
Happy Coding!
Tuesday, September 25, 2012
Upcoming Speaking Engagements - October 2012
October is shaping up to be a busy month. I have several speaking engagements which will be a lot of fun. Code Camps and User Groups are a great place to get out and talk to other developers -- find out what works and what doesn't work, and learn a lot of great new techniques.
Tuesday, October 2, 2012
LA C# User Group - Manhattan Beach, CA
Topic: IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces
Wednesday, October 3, 2012
So Cal .NET - Buena Park, CA
Topic: IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces
Saturday / Sunday, October 6 / 7, 2012
Silicon Valley Code Camp - Los Altos Hills, CA
Topics:
Get Func<>-y: Delegates in .NET
Learn to Love Lambdas
Saturday / Sunday, October 13 / 14, 2012
SoCal Code Camp - Los Angeles, CA
Topics:
Dependency Injection: A Practical Introduction
IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces
Learn to Love Lambdas
T, Earl Grey, Hot: Generics in .NET
Stop by if you can; the more, the merrier.
Happy Coding!
Tuesday, September 18, 2012
Session Spotlight - Learn to Love Lambdas
I'll be speaking at SoCal Code Camp (Los Angeles) on October 13 & 14. If you're not signed up yet, head over to the website and let them know that you're coming: http://www.socalcodecamp.com/.
Back by popular demand: Learn to Love Lambdas
Lambda Expressions
Lambda expressions are a very powerful tool that can add elegance to our code. The problem is that they look like a secret code the first time you see them. You've probably come across something that looks like this:
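// Representative reconstruction of the original sample -- the service
// client, event, and control names are hypothetical stand-ins:
client.GetPersonCompleted += (s, e) => PersonListBox.Items.Add(e.Result);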
It looks daunting, but it's fairly easy to learn. The lambda expression that we have here is just an anonymous delegate. If we were to use a more verbose syntax, we get something like this:
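// The same handler spelled out as an anonymous delegate (same
// hypothetical names as above):
client.GetPersonCompleted += delegate(object s, GetPersonCompletedEventArgs e)
{
    PersonListBox.Items.Add(e.Result);
};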
We are probably a bit more comfortable with this syntax. Here, we can tell that we're hooking up some sort of event handler, and it has standard event handler arguments: object and some type of EventArgs. Inside the method body, we are assigning the Result property of the EventArgs to display in a list box.
This anonymous delegate is equivalent to the lambda expression above. To create the lambda expression, we just replace the "delegate" keyword (before the parameters) with the "goes to" operator (=>) after the parameters. The parameter types are optional, so we can get rid of those. Also, since the method body only has one statement, we can remove the curly braces. This leaves us with a compact and readable syntax.
Delegates abound in the .NET framework (and in add-on frameworks like the Prism library). Whenever we have LINQ extension methods, event handlers, callbacks, or tasks, we can use lambda expressions to add compact blocks of code right where they are most useful. And in some instances (like when working with the Prism variants of RaisePropertyChanged) we can make our code more conducive to refactoring by replacing strings with compiler-checked pieces of code.
Once you get used to lambda expressions, they are extremely readable. In this session, we'll learn the lambda syntax, use lambdas for asynchronous callbacks, see a really cool feature that makes code safer, and see how lambdas were designed to be used extensively in LINQ. If you'd like to Learn to Love Lambdas, be sure to stop by my session at a code camp near you.
In addition to the SoCal Code Camp on Oct 13/14, I'll also be presenting this session at the Silicon Valley Code Camp on Oct 6/7 (this session is scheduled for 2:45 p.m. Sunday afternoon -- last session of the day).
Also be sure to check out the walkthrough and code samples here: Jeremy Bytes.
Hope to see you at an upcoming event.
Happy Coding!
Wednesday, September 12, 2012
Session Spotlight - T, Earl Grey, Hot: Generics in .NET
I'll be speaking at SoCal Code Camp (Los Angeles) on October 13 & 14. If you're not signed up yet, head over to the website and let them know that you're coming: http://www.socalcodecamp.com/.
Also, I've got a brand new session: T, Earl Grey, Hot: Generics in .NET
Generics in .NET
Most C# developers have worked with generics to a certain extent (often by using classes from the base class library like List<T>). But we don't have to stop at merely consuming generic classes; we can also create our own classes and methods that take advantage of this great framework feature.
We got generics way back in .NET 2.0, and they really were transformational at the time (now we're hopefully used to seeing them). Using generics, we can add type-safety to our code while still maintaining extensibility. By making our code type-safe, we are more likely to catch errors at compile time and also avoid strange behavior that might come from casting objects to different types. Our code also becomes more extensible -- it can work with types that it may not have been originally intended to work with.
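As a quick sketch of writing our own generics (a hypothetical example, not from the session materials), a single constrained method covers every comparable type:

using System;

public static class Compare
{
    // One type-safe implementation for any T that can compare itself;
    // the constraint is checked at compile time, and value types avoid
    // the boxing that an object-based version would require.
    public static T Max<T>(T first, T second) where T : IComparable<T>
    {
        return first.CompareTo(second) >= 0 ? first : second;
    }
}

// int biggest = Compare.Max(3, 7);             // T inferred as int
// string later = Compare.Max("apple", "pear"); // same code, new type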
Generics also offer us some performance enhancements (which, granted, seem fairly minor when talking about the processing resources of modern devices) by reducing casting calls and boxing/unboxing of value types.
In the session "T, Earl Grey, Hot: Generics in .NET", we'll take a look at the basic features of generics (by comparing non-generic and generic versions of classes from the base class library), see how we can incorporate generics into our own classes and methods in a practical way, and explore the various options to take full advantage of generics in our code (like "default" and generic constraints).
If you can't make it out to the SoCal Code Camp, you have another chance to see this session at the Desert Code Camp (in the Phoenix, AZ area) in November.
Hope to see you at an upcoming event.
Happy Coding!
Monday, September 3, 2012
Book Review: Dependency Injection in .NET
I just finished reading Dependency Injection in .NET by Mark Seemann (Amazon link). This is an excellent description of Dependency Injection (DI), the associated patterns, and several of the main DI frameworks. Seemann has pulled together a wide range of resources (books, magazine articles, blog posts) and created a comprehensive work. It is apparent that this is based on a ton of research and personal experience in the DI world.
This turned out to be a good and timely read for me. A couple months ago, I started working on a WPF project using Prism, and Dependency Injection plays a big role in that. A colleague of mine (and former co-worker) highly recommended this book, and so I picked up a copy. (Disclaimer: I am a "book person"; that is how I learn best. Other people have different learning methods, so feel free to take this from a "book person" perspective.)
Part 1
Part 1 is an overview of Dependency Injection. Seemann describes what DI is, and also dispels some misconceptions about DI. He covers the benefits of DI including late binding, extensibility, parallel development, maintainability, and testability. He also lays a foundation by showing good and bad examples of implementing DI and how each impacts achieving the desired benefits. Finally, he introduces the topics that will be used in the rest of the book, including the design patterns, DI features, and the DI containers that are available.
Part 2
Part 2 covers the patterns and anti-patterns in DI. The patterns include Constructor Injection, Property Injection, Method Injection, and Ambient Context. Seemann does a good job of covering these patterns including both the benefits and the drawbacks of each. It becomes apparent very quickly that he favors Constructor Injection wherever possible and that the other patterns should be used only when Constructor Injection is not possible. (As a side note: I came to this same opinion in my fairly-short time working with DI.)
The anti-patterns include Control Freak, Bastard Injection, Constrained Construction, and Service Locator. Of these, the Service Locator is the most controversial. Many people consider Service Locator to be a valid DI pattern (rather than an anti-pattern). Seemann himself admits that he promoted the use of Service Locator for many years. He eventually came to the point where the shortcomings of the pattern outweighed the benefits. Prism (the Microsoft Patterns and Practices application guidance) has a service locator baked in (to reference either Unity or MEF for DI out-of-the-box). In my experience with Prism, we have used the service locator pattern, and I can see the benefits and shortcomings. At this point in the project, the scale leans towards the benefits, and we are willing to work with the drawbacks.
Part 2 also has a chapter on "DI Refactorings". This is a very practical review of the types of DI situations you'll run across in the real world -- like dealing with cyclic dependencies and short-lived dependencies. These are great topics to cover. Because the DI containers that we normally work with add a layer of abstraction, we sometimes forget about these potential issues (by assuming that the container is dealing with them).
Part 3
Part 3 covers some big DI topics: Object Composition, Object Lifetime, and Interception. Object Composition has to do with how we use DI to compose our objects. Seemann shows examples of how to compose objects with various technologies -- from easy implementations with console and WPF applications, to difficult implementations with ASP.NET applications and PowerShell cmdlets.
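The console application really is the easy case: all of the composition happens in one place at the entry point (sometimes called the composition root). Here's a sketch that wires up the Greeter and ConsoleMessageWriter types from the Constructor Injection example above:

    public static class Program
    {
        public static void Main()
        {
            // Wire up the entire object graph in one place; the rest of
            // the application just asks for its dependencies.
            IMessageWriter writer = new ConsoleMessageWriter();
            Greeter greeter = new Greeter(writer);
            greeter.Greet("World");
        }
    }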
Object Lifetime has to do with how the dependencies themselves are managed. Do you always return the same instance of a dependency no matter who asks for it (Singleton)? Do you return a new instance of a dependency each time (Transient)? Or do you do something in between? Seemann covers the various lifetimes and the pros and cons of each. As with everything, we need to consider our particular situation and select the right answer for the task at hand.
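To make this concrete, here's roughly how the two most common lifetimes look in Unity (one of the containers covered later in the book). The lifetime manager names differ from container to container, but the concepts carry over; this sketch reuses the types from the Constructor Injection example above:

    using Microsoft.Practices.Unity;

    public static class LifetimeDemo
    {
        public static void Main()
        {
            UnityContainer container = new UnityContainer();

            // Transient (Unity's default): a new instance for every Resolve.
            container.RegisterType<IMessageWriter, ConsoleMessageWriter>();

            IMessageWriter first = container.Resolve<IMessageWriter>();
            IMessageWriter second = container.Resolve<IMessageWriter>();
            // first and second are different instances

            // Singleton: the container hands back the same instance each time.
            container.RegisterType<IMessageWriter, ConsoleMessageWriter>(
                new ContainerControlledLifetimeManager());

            IMessageWriter third = container.Resolve<IMessageWriter>();
            IMessageWriter fourth = container.Resolve<IMessageWriter>();
            // third and fourth are the same instance
        }
    }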
Finally, Seemann covers Interception. This is the idea of using the DI container to inject cross-cutting concerns into our application. For example, we could have the container inject logging or error handling into each of our dependencies. This was a very interesting topic to read about. He also covers Aspect Oriented Programming (AOP) and compares/contrasts it with using DI to cover the same concerns.
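Even without container support, a hand-written decorator shows the basic idea (again, my sketch rather than the book's code): wrap an existing implementation and put the cross-cutting behavior around the call. A container with interception support effectively generates this wrapper for you:

    public class LoggingMessageWriter : IMessageWriter
    {
        private readonly IMessageWriter _inner;

        public LoggingMessageWriter(IMessageWriter inner)
        {
            if (inner == null)
                throw new System.ArgumentNullException("inner");
            _inner = inner;
        }

        public void Write(string message)
        {
            System.Console.WriteLine("[LOG] Writing message...");
            _inner.Write(message);   // the real work is delegated
            System.Console.WriteLine("[LOG] Message written.");
        }
    }

Since the decorator implements the same interface, consumers like the Greeter above never know the logging is there.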
Part 4
Part 4 covers several DI containers. What I really liked about this section is that Seemann uses the same examples in each chapter. This means that instead of focusing on the example itself, we can focus on the differences in the container implementations. The coverage is not exhaustive (5 containers), but these are some of the most widely used ones.
The containers include Castle Windsor, StructureMap, Spring.NET, Autofac, and Unity. For each container, the examples cover configuration, managing lifetime, interception, and working with difficult APIs (where "difficult APIs" refers to the dependencies that we are registering/resolving, not the container APIs themselves). Not all containers support all features out-of-the-box, and Seemann shows some possible work-arounds for the features that are not implemented directly by the container. The end result is that most of the containers work pretty much the same (which is good), but there are slight variations. The container you select can be based on personal preference or particular needs (for example, Spring.NET is actually a much larger framework that includes DI functionality; you may want to use Spring.NET for those other features or because you also use Spring in the Java world).
Lastly, Part 4 covers MEF (Managed Extensibility Framework). While MEF is not a DI container, it is often used for DI purposes. (This is true of the Prism framework -- it supports using MEF as a DI container.) Seemann shows that while MEF can be used for DI, it probably should not be. Again, this is a controversial topic: I have seen blog articles where people moved from Unity to MEF for DI and see no reason why anyone would want to start a new project with Unity. Seemann lays out some very good points regarding how DI is implemented using MEF and how it varies greatly from the other containers.
Resources
Where Dependency Injection in .NET really shines is in its use of external references. Many sections point to books, magazine articles, or blog posts where you can research topics further. Seemann has put in a lot of work to collect these resources in one place. I will be spending a lot of time going through the links collected at the back of the book. On a good note (personally), I found that several of the books mentioned are already in my collection.
Wrap Up
Dependency Injection in .NET is an excellent resource. There are not very many DI books on the market, and it is great to see that this particular book is so well executed. I would recommend this to any developer who is interested in improving his/her use of Dependency Injection.
Happy Coding!
Monday, August 27, 2012
Steal My Windows 8 Idea: Share With Grandma
A couple weeks ago, I attended a user group where Danny Warren from InterKnowlogy presented on Windows 8 (Danny is @dannydwarren on Twitter and blogs here). He showed some really cool Windows 8 contracts including Search and Sharing. (For more info on Sharing, Danny points to a blog article about sharing here: Windows 8 and the future of XAML: Part 5: More contracts in WinRT/Windows 8.)
My brain puts things together slowly sometimes. I could tell during the presentation that Sharing was a very important feature. And over the last week or so, the idea has really settled in, and I'm really looking forward to having that feature available in my day-to-day activities.
Before Windows 8 Sharing
We love to get data from one application to another: in a photo album, we can send a photo to Facebook; in a browser, we can email a story to a friend. And historically, it has been up to each application to support the sharing. This meant that we were locked into whatever systems the application supported. I don't know about you, but after I find an application I like, I tend to hang onto it for a while. And even though it's really cool that my photo album supports sending to MySpace, it's not very relevant anymore.
This leaves me with a hard decision: do I look for another photo album and learn a new interface? Or do I settle for doing manual uploads to my favorite sharing site? Neither option is very appealing. Wouldn't it be cool if I could keep my photo album application and add whatever sharing I want?
Share Target: I Love This Idea
In Windows 8, Microsoft has separated the responsibilities of this process. The sharing application (a "Share Source") just has to expose some piece of data (whether it's text, an image, a URL, or something else). Then any application that knows how to consume that data (a "Share Target") can get access to it -- with the user's permission, of course.
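The code on the source side is pleasantly small. Here's a rough sketch of a Share Source in a Windows 8 XAML page (the page name and the shared content are made up; the DataTransferManager API is the actual Windows 8 mechanism):

    using Windows.ApplicationModel.DataTransfer;

    public sealed partial class PhotoPage : Windows.UI.Xaml.Controls.Page
    {
        public PhotoPage()
        {
            // Hook the Share charm for this view.
            DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
        }

        private void OnDataRequested(DataTransferManager sender,
                                     DataRequestedEventArgs args)
        {
            // Expose some data; any registered Share Target can consume it.
            args.Request.Data.Properties.Title = "Photo from my album";
            args.Request.Data.Properties.Description = "Shared from the photo album app";
            args.Request.Data.SetText("Here's a photo I wanted to share!");
        }
    }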
I think of this as the "Send To" menu that we had in Windows XP. You could open up the "Send To" folder (under the user profile) and add shortcuts to applications. I would always add "Notepad" to my Send To options. That way, I could always right-click on a file, and, regardless of type, I could "Send To" Notepad. This was great for bypassing the default editor for a particular file. (As a side note: I've been weaned off of this functionality in Windows 7. I'm sure there's a way of customizing the Send To menu, but it's not as obvious as it was in XP).
The Share Target takes this a step further. Now, I can have any number of applications that are designated as Share Targets for photos. From inside my photo album, I just have to "Share" a photo, and then I get to select from all of the applications that can accept that photo. This means that 2 years from now when everyone is using a new photo sharing site / social network, I just need to get the latest Share Target application, and my original photo album gets to stay exactly the same.
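The target side is just as approachable: a Share Target declares the contract in its application manifest and then handles the activation event. Here's a sketch (error handling omitted):

    using Windows.ApplicationModel.Activation;
    using Windows.ApplicationModel.DataTransfer;

    sealed partial class App : Windows.UI.Xaml.Application
    {
        protected override async void OnShareTargetActivated(
            ShareTargetActivatedEventArgs args)
        {
            var shareOperation = args.ShareOperation;

            if (shareOperation.Data.Contains(StandardDataFormats.Text))
            {
                string sharedText = await shareOperation.Data.GetTextAsync();
                // ... do something useful with the shared data ...
            }

            // Tell Windows (and the source app) that the share is done.
            shareOperation.ReportCompleted();
        }
    }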
To me, this is a brilliant idea. Several of the phone OSes have integrated Facebook, Twitter, or G+ to make it easy to post photos or send updates. But those are still bound to the OS. With Windows 8, we can share with whatever we want (assuming someone has written an application for it), and we're not locked down to whatever was popular at the time a particular piece of software was written.
Steal My Idea: Share With Grandma
So, I thought of a good idea for this feature: Share With Grandma. This would be a Share Target application where you could configure how tech-savvy Grandma is -- from Pony Express to Gadget Granny. The application would decide how to get the shared information to Grandma based on these settings.
For example, let's say that I want to share a picture with Grandma. If she has limited technical skills, then the application could send an email attachment. If she is a little bit more comfortable, then maybe it sends her a link to a Facebook update. If she's a Gadget Granny, then maybe it posts to a shared photo stream that automatically shows up on her tablet. The same type of thing could be done for text or URLs.
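The heart of the app might be nothing more than a switch on that configured setting. Here's a hypothetical sketch -- every name and helper method here is made up, since the app doesn't exist:

    public enum GrandmaTechLevel
    {
        PonyExpress,    // print it and mail it
        EmailOnly,
        SocialButterfly,
        GadgetGranny
    }

    public class GrandmaSharer
    {
        public void SharePhoto(GrandmaTechLevel level, byte[] photo)
        {
            switch (level)
            {
                case GrandmaTechLevel.EmailOnly:
                    SendAsEmailAttachment(photo);
                    break;
                case GrandmaTechLevel.SocialButterfly:
                    PostLinkToFacebook(photo);
                    break;
                case GrandmaTechLevel.GadgetGranny:
                    PushToSharedPhotoStream(photo);
                    break;
                default:
                    QueueForPrintAndMail(photo);
                    break;
            }
        }

        // Hypothetical delivery methods -- stubbed out here.
        private void SendAsEmailAttachment(byte[] photo) { }
        private void PostLinkToFacebook(byte[] photo) { }
        private void PushToSharedPhotoStream(byte[] photo) { }
        private void QueueForPrintAndMail(byte[] photo) { }
    }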
In all honesty, I probably won't get around to programming this. So, feel free to steal my idea (but at least give me a mention in the credits).
I see Windows 8 Sharing as a huge opportunity to come up with some really creative and useful ways of using the data that we've already got. It's time to start busting out some code.
Happy Coding!
Sunday, August 19, 2012
Dependency Injection: How Do You Find the Balance?
As a developer, I am constantly trying to find the right balance -- to figure out the right level of abstraction for the current project. If we add too much (unneeded) abstraction, we end up with code that may be more difficult to debug and maintain. If we add too little (needed) abstraction, we end up with code that is difficult to extend and maintain. Somewhere in-between, we have a good balance that leads to the optimum level of maintainability for the current environment.
Technique 1: Add It When You Need It
I'm a big fan of not adding abstraction until you need it. This is a technique that Robert C. Martin recommends in Agile Principles, Patterns, and Practices in C# (Amazon Link). This is my initial reaction to abstractions in code -- primarily because I've been burned by some really badly implemented abstractions in the past. After dealing with abstractions that didn't add benefit to the application (and only complicated maintenance), my kneejerk reaction is to not add abstractions until really necessary.
This is not to say that abstractions are bad. We just need to make sure that they are relevant to the code that we are building. The bad implementations that I've run across have generally been the result of what I call "white paper architecture". This happens when the application designer reads a white paper on how to architect an application and decides to implement it without considering the specific implications in the environment. I'll give 2 examples.
Example 1: I ended up as primary support on an application that made use of base classes for the forms. In itself, this isn't a bad thing. The problem was in the implementation. If you did not use the base class, then the form would not work at all. This led to much gnashing of teeth. In a more useful scenario, a base class would add specific functionality. But if the base class was not used, the form would still work (just without the extra features).
Example 2: I helped someone out on another project (fortunately, I didn't end up supporting this application myself). This application was abstracted out too far. In order to add a new field (meaning, from the data store to the screen), it was necessary to modify 17 files (from data storage, through the ORM, objects on the server side, DTOs on the server side, through the service, DTOs on the client side, objects on the client side, to the presentation layer). And unfortunately, if you missed a file, it did not result in a compile-time error; it showed up as a run-time error.
After coming across several applications like these, I've adopted the YAGNI principle (You Aren't Gonna Need It). If you do need it later, then you can add it.
Technique 2: Know Your Environment
Unfortunately, Technique 1 isn't always feasible. It is often time-consuming to go back into an application and add abstractions as you need them, and when we're asked as developers to maintain a specific delivery velocity, we're rarely given time to go back and refactor. So, a more practical option comes with experience: know the environment that you're working in.
As an example, for many years I worked in an environment that used Microsoft SQL Server. That was our database platform, and every application that we built used SQL Server. Because of this, I didn't spend time doing a full abstraction of the data layer. This doesn't mean that I had database calls sprinkled through the code. What it means is that I had a logical separation of the database calls (meaning that DB calls were only made in specific parts of the library) but didn't have a physical separation (for example, with a repository interface that stood in front of the database).
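Here's a simplified sketch of what I mean (illustrative names, not actual project code): all of the SQL Server calls are concentrated in a data access class, but callers reference the concrete class directly -- there's no interface standing in front of it:

    using System.Data.SqlClient;

    // Logical separation: every SQL Server call lives in classes like this
    // one. Physical separation would add an interface in front, such as
    // ICustomerRepository -- which these applications didn't need.
    public class CustomerData
    {
        private readonly string _connectionString;

        public CustomerData(string connectionString)
        {
            _connectionString = connectionString;
        }

        public string GetCustomerName(int customerId)
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(
                "SELECT Name FROM Customer WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", customerId);
                connection.Open();
                return (string)command.ExecuteScalar();
            }
        }
    }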
Was this bad design? I know several folks who would immediately say "Yes, that's terrible design." But I would argue that it was good design for the application environment.
Out of the 20 or so applications that I built while at that company, a grand total of one application needed to support a different database (an application that pulled data from a vendor product that used an Oracle backend). For that one application, I added a database abstraction layer (this was actually a conversion -- the vendor product was originally using SQL Server and was changed to Oracle during an upgrade). So what makes more sense? To add an unused abstraction to 20 applications? Or to add the necessary abstraction to the one application that actually needed it?
Now, if I was building an application for a different environment that needed to support different data stores (such as software that would be delivered to different customer sites), I would design things much differently. You can see a simple example of how I would design this here: IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces.
Unfortunately, this type of decision can only be made if you know your environment well. It usually takes years of experience in that environment to know which things are likely to change and which things are likely to stay the same. When you walk into a new environment, it can be very difficult to figure out how to make these distinctions.
Dependency Injection: Getting Things "Just Right"
My current project is a WPF project using Prism. Prism is a collection of libraries and guidance around building XAML applications, and a big part of that guidance is around Dependency Injection (DI). I've been doing quite a bit of programming (and thinking) around dependency injection over the last couple months, and I'm still trying to find the balance -- the Goldilocks "Just Right" between "Too Loosely Coupled" and "Too Tightly Coupled".
Did I just say "Too Loosely Coupled"? Is that even possible? We're taught that loose coupling is a good thing -- something we should always be striving for. And I would venture to guess that there are many developers out there who would say that there's no such thing as "too loosely coupled."
But the reason that loose coupling is promoted so highly is that our problem is usually the opposite -- the default state of application developers is to have tight coupling. Loose coupling is encouraged because it's not our instinctual reaction.
I'm currently reading Mark Seemann's Dependency Injection in .NET (Amazon Link). This is an excellent book (disclaimer: I've only read half of it so far, but I don't expect that my evaluation will change). Seemann describes many of the patterns and anti-patterns in Dependency Injection along with the benefits and costs (which helps us decide when/where to use specific patterns).
An important note: Seemann specifically says that the sample application that he shows will be more complicated than most DI samples he's seen. He does this because DI doesn't make sense in a "simple" application; the value really shines in complex applications that have many functions that should be broken out. With the functions broken out into separate classes, it makes sense to make sure that the classes are loosely coupled so that we can add/remove/change/decorate implementations without needing to modify all of our code. This means that not all applications benefit from DI; the benefits come once we hit a certain level of complexity.
So, now we have to decide how much dependency injection is "Just Right". As an example, Seemann describes the Service Locator as an anti-pattern. But Prism has a built-in Service Locator. So, should we use the Prism Service Locator or not? And that's where we come back to the balance of "it depends."
In the application I'm working on, we are using the Service Locator pattern, and it seems to be working well for those parts of the library. I have run into a few interesting issues (specifically when writing unit tests for these classes), and it turns out that Seemann points out exactly the issues that I've been thinking about.
I don't really have time to go into the details here. But as an example: when using the Service Locator, it is difficult to see the specific dependencies of a class. As we have been modifying modules during our build, sometimes the unit tests break because a new dependency was added (one that the Service Locator resolves at run time) -- the missing dependency doesn't stop the code from compiling. We then need to update our unit tests by adding/mocking the new dependency.
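To illustrate the problem (a simplified sketch, not our actual project code -- Prism exposes its locator through the Common Service Locator interface):

    using Microsoft.Practices.ServiceLocation;

    public interface IMessageWriter
    {
        void Write(string message);
    }

    public class OrderViewModel
    {
        public void Save()
        {
            // Nothing in this class's constructor advertises the dependency;
            // it is resolved on demand inside the method. A unit test that
            // forgets to register IMessageWriter still compiles -- and then
            // fails at run time.
            IMessageWriter writer = ServiceLocator.Current.GetInstance<IMessageWriter>();
            writer.Write("Order saved.");
        }
    }

Compare that to Constructor Injection, where the new dependency would show up as a compile-time error in every test that constructs the class.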
[Editor's Note: I've published an article talking more about the pros and cons of the Service Locator pattern: Dependency Injection: The Service Locator Pattern.]
As with everything, there are pros and cons. For the time being, I'm content with using the Service Locator for our application. There are some "gotchas" that I need to look out for (but that's true with whatever patterns I'm using). Seemann also notes that he was once a proponent of Service Locator and moved away from it after he discovered better approaches that would eliminate the disadvantages that he was running across. It may be that I come to that same conclusion after working with Service Locator for a while. Time will tell.
How Do You Do Dependency Injection?
Now it's time to start a conversation. How do you use Dependency Injection? What has worked well for you in different types of applications and environments? Do you have any favorite DI references / articles that have pointed you in a direction that works well for you?
As an aside, Mark Seemann's book has tons of reference articles -- most pages have some sort of footnote referring to a book or article on the topic. It is very evident that Seemann has researched the topic very thoroughly. I'm going to try to read through as many of these references as I can find time for.
Drop your experiences in the comments, and we can all learn from each other.
Happy Coding!