
Friday, July 9, 2021

A Collection of 2020 Recorded Presentations

2020 was "interesting". One good thing that came out of it is that I had the chance to speak remotely for some user groups and conferences that I would not normally get to attend in person. Many of those presentations were recorded. Here's a list for anyone who is interested.

Abstract Art: Getting Abstraction "Just Right" (Apr 2020)

Modern Devs Charlotte

Abstraction is awesome. And abstraction is awful. Too little, and our applications are difficult to extend and maintain. Too much, and our applications are difficult to extend and maintain. Finding the balance is the key to success. The first step is to identify your natural tendency as an under-abstractor or an over-abstractor. Once we know that, we can work on real-world techniques to dial in the level of abstraction that is "just right" for our applications.


I'll Get Back to You: Task, Await, and Asynchronous Methods in C# (Apr 2020)

Enterprise Developers Guild

There's a lot of confusion about async/await, Task/TPL, and asynchronous and parallel programming in general. So let's start with the basics and look at how we can consume asynchronous methods using Task and then see how the "await" operator can make things easier for us. Along the way, we’ll look at continuations, cancellation, and exception handling.


A Banjo Made Me a Better Developer (May 2020)

Dev Around the Sun

What does a banjo have to do with software development? They both require learning. And picking up a banjo later in life showed me 3 things that I've brought into my developer life. (1) You can learn; a growth mindset removes blockages. (2) You don't have to be afraid to ask for help; experienced banjoists/developers can point you in the right direction. (3) You don't have to be perfect before you share what you've learned; it's okay to show what you have "in progress". In combination, these have made me a better banjo player and a better developer.


I'll Get Back to You: Task, Await, and Asynchronous Methods in C# (Jun 2020)

Houston .NET User Group

There's a lot of confusion about async/await, Task/TPL, and asynchronous and parallel programming in general. So let's start with the basics and look at how we can consume asynchronous methods using Task and then see how the "await" operator can make things easier for us. Along the way, we’ll look at continuations, cancellation, and exception handling.


What's New in C# 8 Interfaces (and how to use them effectively) (Jun 2020)

Southeast Valley .NET User Group
North West Valley .NET User Group

C# 8 brings new features to interfaces, including default implementation, access modifiers, and static members. We'll look at these new features and see where they are useful and where they should be avoided. With some practical tips, "gotchas", and plenty of examples, we'll see how to use these features effectively in our code.


What's New in C# 8 Interfaces (and how to use them effectively) (Jul 2020)

Tulsa Developers Association

C# 8 brings new features to interfaces, including default implementation, access modifiers, and static members. We'll look at these new features and see where they are useful and where they should be avoided. With some practical tips, "gotchas", and plenty of examples, we'll see how to use these features effectively in our code.


I'll Get Back to You: Task, Await, and Asynchronous Methods in C# (Aug 2020)

Code PaLOUsa

There's a lot of confusion about async/await, Task/TPL, and asynchronous and parallel programming in general. So let's start with the basics and look at how we can consume asynchronous methods using Task and then see how the "await" operator can make things easier for us. Along the way, we’ll look at continuations, cancellation, and exception handling.


Get Func-y: Understanding Delegates in C# (Oct 2020)

St Pete .NET

Delegates are the gateway to functional programming. So let's understand delegates and how we can change the way we program by using functions as parameters, return types, variables, and properties. In addition, we'll see how the built-in delegate types, Func and Action, are waiting to make our lives easier. By the time we're done, we'll see how delegates can add elegance, extensibility, and safety to our programming. And as a bonus, you'll have a few functional programming concepts to take with you.


More to Come!

I have a few more recordings from 2021 presentations, and I'll post those soon. As always, if you'd like me to visit your user group, drop me a line. I'll see what I can do about being there online or in person (once in-person stuff is a bit more back to normal).

Happy Coding!

Wednesday, February 11, 2015

YAGNI - Reaping the Benefits

People take different approaches to software, and we find that some of these work better for us than others. I'm a big fan of the YAGNI principle ("You Ain't Gonna Need It"), though historically, following it hasn't been much of a problem for me. But I recently came across an example of how it helped me -- even though I wasn't conscious of it at the time.

YAGNI: You Ain't Gonna Need It
The basic idea behind YAGNI is that we shouldn't code based on speculation of what we might need in the future. Business processes change, user needs change, and our software needs to change with them. If we build what we think we *might* need, odds are that we'll be wrong and that part of the software will remain unused. And that leaves us with code that is more difficult to navigate and debug without giving any benefit.

Historically, this has not been a big problem for me. By nature, I am an under-abstractor, which means that I tend to lean away from abstraction. I add abstractions as I need them (and generally not before) because I've been burned by badly implemented abstractions that were put into an application "just in case".

So when we think about YAGNI, we think about the requirements that we have *now*. This doesn't mean that we don't think about the future; it just means that we don't build it yet. We should be thinking about the possibilities of our applications, and we should make sure that we don't code ourselves into a corner when we're writing them. But rather than adding the full abstraction (such as a plug-in architecture), we think about where we would put it if we needed it. Then we make sure that we leave a spot where we can add it later.

Making Things Easier
Old Application
One thing that I've been writing about recently is an application rewrite of my home automation software. Articles are collected here: Rewriting a Legacy Application. This is software that I've wanted to rewrite for a really long time, but I was too wrapped up in the complexities of the old application. There were a lot of features that had grown over the years, and it seemed like a daunting task to rewrite the whole thing.

So, I took a more practical approach: distilling things down to just the features that I actually needed -- the features that I used on a daily basis. This helped me create the requirements for a minimum viable product (MVP).

This process let me distill the entire application down to 3 features that I actually used. The other things (like UI, network access, and schedule editor) were all nice to have, but the more I thought about it, the more I figured that I didn't need them. For example, there's not much need for a UI. This application is always running on a machine in my house that's plugged into my TV, but I rarely look at the screen on that machine (it primarily just plays music and runs the lights while my TV does other things).

Rather than a huge, daunting task, this became fairly simple. In fact, I implemented the entire replacement in just a few days. Since then I've been refining it and adding new features, but that base replacement functionality went very quickly.

Leaving Room for Changing Priorities
So, I really took the YAGNI approach to most of the features in the old application. And that approach has turned out to be the right one. I've been running the software in production for a little over a month, and I haven't missed the features that I left out.

More importantly, I've been free to go in a different direction. The biggest example of that was when I added the ability to create schedule items relative to the current sunset or sunrise times. That original ugly code is in a much better state now (and it keeps getting better).

This code has been in production for a few weeks now. I currently have 2 items that are scheduled relative to sunset. A light in the living room goes on 30 minutes prior to sunset (when it's just starting to get dark), and a light in the bedroom goes on 1 hour after sunset. These are both ambient lights, and I've enjoyed watching them come on at different times as the days get longer. When I put the code in, the first light came on around 4:45 p.m., and now it comes on a little after 5:00 p.m.

But since the application is fairly "raw" as far as the architecture is concerned, I haven't committed to going in a particular direction. This has made it easy to implement this new feature (which had nothing to do with the old application). And I look forward to adding more features as I need them.

Wrap Up
So rather than getting bogged down in things I *might* need or I think I *should* need, I'm able to focus on the features that I *actually* need.

I do have some examples of where YAGNI has helped me in the past, but it's nice to see that the principle still helps me in my current coding activities -- even when I'm not consciously thinking about it.

For more information on YAGNI and other principles to help us with abstraction, check out the materials I have for Abstract Art: Getting Things "Just Right", including a Pluralsight course, slides, and a variety of blog articles.

Happy Coding!

Friday, December 5, 2014

Are You an Over-Abstractor or an Under-Abstractor?

I really like to teach people about different techniques of abstraction. If you don't believe me, just check my website: Interfaces, Dependency Injection, Design Patterns, and even the BackgroundWorker Component (and also my courses on Pluralsight).

One of the reasons that I talk about these particular topics is that they are things that took me a really long time to understand myself, so I like to help people take a short-cut to understanding. But that leads to a problem. Once we learn about them, we might want to use them everywhere, and we end up building a complex mess that's hard to maintain.

But over-abstraction is not our only problem. If we have too little abstraction, then our applications can be rigid and hard to maintain. So as developers, our goal is to find the sweet spot right in the middle. This is why I came up with "Abstract Art: Getting Things 'Just Right'" (which I give as a live presentation and as a course on Pluralsight). We're really looking to follow the Goldilocks Principle when it comes to abstraction: not too much, not too little, but "just right".

The Goldilocks Principle (from the presentation slides)

Observing Developers
So why do we, as developers, have a problem getting abstraction right? There are a number of factors, but the biggest one has to do with our natural tendencies. I'm not sure if anyone has done an official study on this, but I formed this opinion based on my observations of developers that I've worked with (and also by reflecting on myself).

The premise is that every developer has a default state as either an over-abstractor or an under-abstractor. Over-abstractors like to build stuff and think about all the possible uses of an application in the future. This may lead to overly complex implementations for features that will probably never get used. Under-abstractors like to keep things simple. This may lead to things like global variables, tightly-coupled components, and "god classes" with all sorts of functionality.

This is just our "default state" -- meaning, without any external influences, we lean one way or the other. But as we gain experience as developers, we run into difficulties with our default state and so we go in the other direction. This often results in us swinging too far, and we end up correcting back the other way.

The Pendulum Effect (from the Pluralsight course)

Eventually, we figure out the dangers of both over-abstraction and under-abstraction through trial-and-error. The main reason I wrote my presentation and Pluralsight course is to give people some best practices so they can skip some of these bad experiences.

One Group is Not Better Than the Other
The first time I gave this presentation, I ran into a bit of a problem: one of the attendees thought that I was promoting one group over the other -- for example saying that under-abstraction is better than over-abstraction. And that is definitely *not* the case.

Both groups have problems. With too much abstraction, we end up with code that is overly-complex, difficult to debug, and can cause frustration in people who are not intimately familiar with the code (which may include the original developer 6 months later). Ultimately, the code is difficult to maintain.

With too little abstraction, we end up with code that is rigid. It is hard to find bugs and very difficult to test. Ultimately, the code is difficult to maintain.

So we have a common result: code that is difficult to maintain. What we need to do is figure out how to recognize our natural tendency so that we can watch for the warning signs and hopefully end up with abstractions that are "just right" for our application.

Do You Meet More Over-Abstractors or Under-Abstractors?
In my presentations, I would love to do an informal survey to see if the developers identify themselves as over-abstractors or under-abstractors. But because I want to be careful not to create "teams" in the room, I don't do that.

I've been curious if there are more over-abstractors or under-abstractors in our field (or if there's an even split). I've been able to make the case for both groups.

The Case for Over-Abstractors
The main reason I would think that there are more over-abstractors in our field is that what we do is all about problem solving. We are developers because we like to come up with creative solutions and then implement those solutions by building things.

And a big part of that is the building. We don't want to use code that someone else wrote; we want to write it ourselves. This is often referred to as the "not invented here" syndrome.

Another part of this is that we like to plan for the future. Sure, we could build a robot that fetches a soda from the refrigerator. But wouldn't it be cooler if the robot would also vacuum as it goes? (Yeah, I know, we already have a vacuum cleaner that works well, but it would still be cool.) This is a fairly common mindset that we see in developers -- we are creative people.

Based on this, I might think that there are more over-abstractors in our industry.

The Case for Under-Abstractors
The main reason that I would think there are more under-abstractors in our field is that there are a lot of developers who do not understand abstraction or what it is good for. As a young developer, I had a lot of trouble understanding why I would want to use interfaces in my code. I could tell they were important, but I couldn't find the right person to explain it to me. (This is one of the main reasons that I like to talk about Interfaces -- I don't want developers to go through the same struggle that I did.)

I come across a lot of developers who are like I was: they do not understand the basics of abstraction. I meet some of these developers when I'm helping them with their code; I meet some of these developers at my presentations. And my goal is to give them a better understanding of the topic while I have a chance to talk to them.

Based on this, I might think that there are more under-abstractors in our industry.

Any Ideas?
So, what are your thoughts? How do you identify yourself? Do you have more over-abstractors or under-abstractors where you work? What about the developers you meet at conferences or other events?

If you're not sure where you fall, you can check out the "Default State Quiz" that's part of my Pluralsight course.

By nature, I am an under-abstractor, but I've also swung into the over-abstraction world for a while. After my many years of experience, I like to think that I'm pretty well balanced, but I still have a tendency to lean toward my "default state" from time to time.

The first step to overcoming my default state was to recognize it. Since I've done that, I can now catch the times when I'm leaning too far toward under-abstraction. This gives me a chance to review some best practices to steer myself to a more balanced state.

Feel free to leave a comment with your thoughts (or drop me a note privately). I'm interested to hear what you think about the topic.

Happy Coding!

Wednesday, October 29, 2014

Pluralsight Learning Path: Getting to Great with C#

Pluralsight has a lot of courses (like hundreds and hundreds). So it can be daunting to figure out where you need to start. To help get people pointed in a direction that works for them, Pluralsight has a collection of Learning Paths that highlight a set of courses to help you meet a goal -- whether you're working toward certification or want to learn to build a particular type of application.

A new learning path was published today: Getting to Great with C#.

Getting to Great with C#
  • Object-Oriented Programming Fundamentals in C#
  • Defensive Coding in C#
  • Clean Code: Writing Code for Humans
  • C# Interfaces*
  • Abstract Art: Getting Things "Just Right"*
  • Dependency Injection On-Ramp*
  • SOLID Principles of Object Oriented Design
  • Design Patterns On-Ramp*
  • Design Patterns Library
Be sure to follow the link (here it is again) to get the details on the goals of the learning path and descriptions of all the courses.

The really cool part: I authored 4 of these courses (marked with *). I've had courses included in other learning paths, but I'm excited and honored to have so many of my courses included in a single collection (plus, I know the other authors and courses, and I'm in very good company).

Happy Coding!

Monday, October 6, 2014

New Pluralsight Course: Abstract Art: Getting Things "Just Right"

I'm happy to announce that my latest Pluralsight course has just been published: Abstract Art: Getting Things "Just Right" (Pluralsight link).

I've been talking and writing about this topic for quite a while, so I was glad for the chance to put everything down in one place. I'm a big fan of the proper use of abstraction. When we use it correctly, it can make our applications easier to extend, maintain, and test.

And many of the techniques of abstraction took me a long time to grasp when I was a young programmer. To save other developers from having to go through that struggle, I often talk about Interfaces, Design Patterns, and Dependency Injection.

And used properly, these things are really awesome. But we can also take things too far. If we have too much abstraction in our code, then things get pretty awful. Our code can become overly complex, confusing to navigate, and difficult to debug.

Our goal is to find the balance. Part of this is to look at ourselves and see what our natural tendencies are. Does your nature lean toward over-abstraction? Does your nature lean toward under-abstraction? Once we recognize this, we can use it to our advantage to avoid the danger areas.

And the course is full of practical advice. There is no one right answer -- no particular abstraction technique that applies to every situation. So, we'll look at several principles that we can apply to our code. If we keep these in mind, then we'll be headed in the right direction.

So, let's build on the experience of Goldilocks:


And get things "Just Right".

Happy Coding!

Wednesday, September 24, 2014

October 2014 Speaking Engagements

I'll be traveling a little bit in October. If you're in Silicon Valley or the Phoenix area, be sure to stop by. These events should be a lot of fun.

Thursday, October 9, 2014
South Bay .NET User Group
Mountain View, CA
Meetup Event
o Abstract Art: Getting Things "Just Right"

Saturday & Sunday, October 11 & 12, 2014
Silicon Valley Code Camp
Los Altos Hills, CA
My Sessions
o Clean Code: Homicidal Maniacs Read Code, Too!
o Learn to Love Lambdas
o Community, Career, and You: A Microsoft MVP Panel

This will be my third time going to the Silicon Valley Code Camp. It is an amazing experience. Tons of sessions on all different topics (about 25 to choose from in each time slot). I've met a lot of great people over the last two years, and I'm looking forward to being there again this year.

Saturday, October 18, 2014
Desert Code Camp
Chandler, AZ
Code Camp Site
o Abstract Art: Getting Things "Just Right"
o Dependency Injection: A Practical Introduction
o Learn to Love Lambdas

Desert Code Camp is one of my favorite events (I know, I'm not supposed to have favorites). This will be my 9th time at this event. The Phoenix area has a great developer community, and I always have a lot of fun when I go out there. I'm looking forward to talking to some old friends and also meeting a lot of new people.

Abstract Art: Getting Things "Just Right" (the movie)
You may be wondering about my latest Pluralsight course. It's currently going through the final production processes and should be released the week of October 6th.

[Update 10/06/2014: It's published! http://www.pluralsight.com/courses/abstract-art-getting-things-just-right]

In the course, we look at how abstraction is awesome and how abstraction is awful. If we have too much abstraction, then our applications are complex and hard to maintain. If we have too little abstraction, then our applications are rigid and hard to maintain. Our goal is to figure out how to get just the right amount of abstraction in our applications.

We'll do this by looking at ourselves and how our own nature as developers can get in the way of achieving this goal. And the course is chock full of practical advice -- real practices that we can use to help us get things "just right".

And of course, you can also see the live version (which is a bit condensed) at upcoming events. Hope to see you in person soon!

Happy Coding!

Thursday, June 12, 2014

Upcoming Pluralsight Course -- Abstract Art: Getting Things "Just Right"

I'm happy to announce that I've just started working on a new course for Pluralsight -- Abstract Art: Getting Things "Just Right". Most of the topics that I talk about are things that took me a long time to really understand well. This is no different.

Abstraction is awesome: it can help make our applications easier to maintain, test, and extend. Abstraction is also awful: it can make our code difficult to understand, navigate, and debug. And there's the problem. If we have too little, our applications can be rigid and difficult to maintain. If we have too much, our applications can be confusing and difficult to maintain. We need to find just the right amount of abstraction for a particular application and environment.

Why is it so hard to get abstraction "just right"? Because it goes against our nature as developers. We have natural tendencies that put us into one of two groups: under-abstractors and over-abstractors. Under-abstractors tend to shy away from abstractions: "We'll just hard-code this and copy/paste that, and it will be fine." Over-abstractors tend to add more abstraction than is actually useful: "We'd better put in these 3 layers and add a plug-in architecture because we might need them later." And even though no one likes to identify with either of these categories, we need to admit to who we are deep down.

The good news is that we can overcome our natural tendencies to find the right balance for our applications. And that's what this course is all about. We'll take a look at some of the mistakes and missteps I've made in building applications over the years, how there's a tendency to over-correct when adjusting our approach, and how I worked with another developer to come up with really good techniques to find the right balance.

There will be lots of practical advice and guidance to help us along the path to getting abstraction "just right".

So, stay tuned for progress reports, and let me know if you have any suggestions on things you'd like to see.

Happy Coding!

Sunday, August 18, 2013

Do I Really Need a Repository?

I use the Repository pattern in a number of my presentations, whether talking about Interfaces, Generics, or Dependency Injection. The primary reason I use this is because it's a very easy problem to describe: we need to access data from different data sources (Web Service, SQL, Oracle, NoSQL, XML, CSV), and we want to keep the complexity out of our core application code.

But the question often comes up: Do I really need a Repository in my application?

And I'll give my standard answer: It depends.

Let's take a closer look at the Repository pattern and how it fits (or doesn't fit) with some scenarios that I've dealt with.

What is a Repository?
We'll start with a definition of the Repository pattern:
Mediates between the domain and data mapping layers using a collection-like interface for accessing domain objects. 
[Fowler, et al. Patterns of Enterprise Application Architecture. Addison-Wesley, 2003.]
The idea behind having a repository is to insulate the application from the data access code. Here's a fairly typical interface that shows a CRUD (Create, Read, Update, Delete) repository:
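
A sketch of what such an interface might look like (the member names here are illustrative; Person is the domain type from the sample):

    public interface IPersonRepository
    {
        IEnumerable<Person> GetPeople();                   // Read (all)
        Person GetPerson(string lastName);                 // Read (one)
        void AddPerson(Person newPerson);                  // Create
        void UpdatePerson(string lastName, Person person); // Update
        void DeletePerson(string lastName);                // Delete
    }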


This means that from our application code, we simply call "Repository.GetPeople()" to get a collection of Person objects from the data store. Our application code does not need to know what's on the other side of that "GetPeople" call. It could be a SQL query; it could be a service call; it could be reading from a local file. The application is insulated from that knowledge.

[Editor's Note: For more exploration of Repositories, see "Applying the Interface Segregation Principle to a Repository".]

But do we really need this in our application?

Add Abstraction as You Need It
Developers generally fall into two groups: over-abstractors and under-abstractors. Neither group is better than the other; they are just natural tendencies. But the challenge for both groups is to find just the right level of abstraction, which is generally somewhere in between.  (For a closer look, see Abstraction: The Goldilocks Principle.)

By nature, I am an under-abstractor. This is because early in my career, I supported an application that was severely over-abstracted. The application was extremely difficult to maintain, and even the senior devs had trouble getting expected behavior out of the code base. So, my first inclination is to limit abstraction as much as possible.

But I've learned some good lessons over the years, which included creating an under-abstracted application that was difficult to maintain. So now I'm always looking to find the balance. Here is the best advice that I've come across so far:
Add abstraction as you need it.
And this literally means "as you need it", not "because you think you might need it at some point in the future." This has helped keep me out of a lot of trouble, and it's also helped me when working with teammates who are over-abstractors by nature.

How does this apply to repositories?

Add a Repository When You Need It
A Repository is a layer of abstraction. So, let's look at some times when we might or might not need an explicit repository.

Scenario 1: No Repository
I worked for many years in corporate development in a fixed environment. All of our custom-built applications used Microsoft SQL Server for data. That was it. And having that consistency made it very easy to support our application portfolio.

But what this also meant is that we did not usually need an explicit Repository. It was very unlikely that we would swap out the data storage layer; it was always SQL Server. So, we did not have a repository.

Now, even though we didn't have a physical repository layer, we still kept all of our data access code nicely corralled. We did not have SQL queries scattered across the code base; we kept them in well-defined parts of our business layer.

Scenario 2: Repository
But there was a time when I needed to add a Repository. One of my applications imported data from a third-party application. That application was going through an upgrade and some changes. One of those changes was switching from Microsoft SQL Server to an Oracle Database.

So, for this application, I added a Repository layer. This allowed me to prepare the application for the switch-over once that third-party application went into production. Based on a configuration change, I could switch the application from using a SQL Server repository to an Oracle repository. This meant that I could develop, test, and deploy my updated code before the switch-over happened. And the upgrade process went very smoothly.

Scenario 3: Maybe Repository
Another reason we may want to add a repository is to facilitate automated testing. With a repository layer in place, it's easy to swap out a fake or mock repository that we can use for testing. But whether we need this will be based on how our objects are laid out.

For example, we may be working with smart-entities; a smart-entity is basically a collection of properties with retrieval and update capabilities. In this case, there really isn't anything to test. The data access methods populate the properties and update the data store. Since we don't really have any logic to test in these entities, we would not get much advantage by adding a repository layer.

On the other hand, we may be working with robust business objects; a robust business object is one which has business logic (business rules and validation) built in. In this case, we do have logic that we want to check in automated tests. A repository layer would make those tests easier, so we would get an advantage from that.
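
Here's a rough sketch of the difference (the types and the rule are illustrative):

    // Smart-entity: just properties plus retrieval/update -- no logic to test.
    public class PersonEntity
    {
        public string Name { get; set; }
        public DateTime StartDate { get; set; }
    }

    // Robust business object: built-in rules that are worth unit testing.
    public class Person
    {
        public string Name { get; set; }
        public DateTime StartDate { get; set; }

        public bool IsValid()
        {
            // Illustrative rule: a name is required, and the
            // start date cannot be in the future.
            return !string.IsNullOrEmpty(Name) && StartDate <= DateTime.Now;
        }
    }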

Wrap Up
Do we really need to have a repository in our application? It depends. A repository is a layer of abstraction. If we add abstraction where we don't need it, then we add unnecessary complexity to our application. So, let's add abstraction as we need it.

We've seen two main reasons why we may want to add a repository layer: (1) to swap-out data access code for different data stores, and (2) to facilitate unit testing. If either of these apply, then we should consider adding a repository.

I use the Repository pattern in my presentations because it is an easily-understandable abstraction. But whether we actually need it in our own applications depends on our environment. In most of the applications I've written, I have not needed it. But in the few where I have needed it, it has been a great asset.

Happy Coding!

Tuesday, July 2, 2013

New Pluralsight Course: C# Interfaces

As mentioned about six weeks ago, I've been working on a course for Pluralsight.  I'm happy to announce that the course is now available.

C# Interfaces
Do you want code that's maintainable, extensible, and easily testable?  If so, then C# interfaces are here to help. In this course, we’ll take a look at how we can use interfaces effectively in our code. We'll start at the beginning ("What are interfaces?") and then explore why we want to use them. Along the way we'll create and implement our own interfaces, see how to explicitly implement interfaces, and take a look at dynamic loading, unit testing, and dependency injection. All of which is made possible with interfaces.
Check out the course for yourself at Pluralsight: C# Interfaces.

Happy Coding!

Wednesday, March 20, 2013

Dependency Injection Composition Root: Are We Just Moving the Coupling?

I spoke on Dependency Injection at a user group yesterday, and an excellent question came up regarding the composition root and the coupling in our application.  I wanted to explore this in a bit more detail.

The Scenario
In the sample code, we take a look at an application that is split into layers (each layer is a separate project):
  • View
  • ViewModel
  • Repository
  • Service
The issue with our initial application is that the layers are all tightly coupled.  The View is responsible for instantiating the ViewModel; the ViewModel is responsible for instantiating the Repository; the Repository is responsible for instantiating the Service.

The code from the ViewModel constructor looks like this:
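
Here's a sketch (the property name is an assumption; the concrete repository type is from the sample):

    public MainWindowViewModel()
    {
        // The ViewModel news-up its own repository, so it is
        // tightly coupled to this concrete type.
        Repository = new PersonServiceRepository();
    }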


There are several drawbacks to this (as is mentioned in the presentation).  One is that due to the tight coupling between the ViewModel and the Repository, it would be difficult to add a different repository type (like one that uses a SQL or CSV data store).  We would end up modifying our ViewModel with some sort of conditional to decide what sort of repository to use.

But the real question is whether the ViewModel should even care what kind of repository it's using.  The answer is no.  To fix this, we added a layer of abstraction (a repository interface) and used Constructor Injection to pass the responsibility of deciding what concrete type is used to someone else.  We end up with the following code in our ViewModel:
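
Again as a sketch:

    public MainWindowViewModel(IPersonRepository repository)
    {
        // Someone else decides which concrete repository gets passed in;
        // the ViewModel only knows about the interface.
        Repository = repository;
    }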


This is good for our ViewModel: it is no longer tightly coupled to a particular repository; it only knows about an abstraction (the interface).  We can remove the project reference to the PersonServiceRepository assembly.  If we want to use a CSV repository (that implements IPersonRepository), then the ViewModel does not need to change (and this is what the presentation code actually shows).  Success!

Moving Responsibility
But someone has to be responsible for deciding which concrete Repository we're using.  The sample code moves this into the UI application (we'll have a bit more explanation of why this was chosen for this sample in just a bit).

The App.xaml.cs has the startup code for our application.  Here's where we put the objects together:
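
A sketch of that startup code (the constructor signatures are assumptions):

    protected override void OnStartup(StartupEventArgs e)
    {
        base.OnStartup(e);

        // Snap the blocks together: Repository -> ViewModel -> View.
        var repository = new PersonServiceRepository();
        var viewModel = new MainWindowViewModel(repository);
        var mainWindow = new MainWindow(viewModel);
        mainWindow.Show();
    }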


What we have here is commonly referred to as the "Composition Root".  This is the spot in our application where our objects are assembled -- we're taking our construction blocks and snapping them together.  This is where we instantiate our Repository and pass it to the ViewModel.  Then we take our instantiated ViewModel and pass it to our View (MainWindow.xaml).

If we want to change to a different Repository, we just update our composition root.  We don't need to change our ViewModel or any other code in our application.

The Question: Are We Just Moving the Tight Coupling?
This is where the question came up.  We've moved where we are instantiating our objects into the core application project.  But now the application needs to have references to the PersonServiceRepository as well as the MainWindowViewModel and everything else in our application.  Are we just moving the tight coupling?

The answer for this example is technically "Yes".  But that's not necessarily a bad thing (and if we don't like it, there are other solutions as well).  Let's take a closer look.

I put together the sample code in order to show the basics of Dependency Injection.  This means that I simplified code in several places so that we can better understand the basic concepts.  In the "real" applications that I've worked on, we did not have a hard-coded composition root; it was more dynamic (we'll look at this concept in a bit).

But even with the tight coupling in our composition root, we have loose coupling between our components (View, ViewModel, Repository).  This means we still get the advantages that we were looking for.  If we want to add a new Repository type, we don't need to modify our ViewModel.  We do need to modify our composition root, but that's okay because our composition root is actually responsible for snapping the blocks together.

In the full sample, we also see that even with the tight coupling in the composition root, we still get the advantages of extensibility (by easily adding different repositories) and testability (by having "seams" in the code that make it easier to isolate code for unit tests).

But, we have some other options.

Bootstrapping
When using dependency injection and/or creating modular applications, there's a concept of "bootstrapping" the application.  The bootstrapper is where we put the code that the application needs to find everything that it needs.

We could easily add another layer to this application:
  • Bootstrapper
  • View
  • ViewModel
  • Repository
  • Service
The bootstrapper would be a separate project from our View.  This project would know about all of the construction blocks (or at least where to find them).

This gives us a good place to start looking at the dynamic loading features of our tools.  The sample code shows how we can use Unity to explicitly register our dependencies (through code or configuration).  But we can also write a little bit of code to scan through a set of assemblies and automatically register the types that it finds there.  If you want to pursue this route, you can take a look at Mark Seemann's book (Dependency Injection in .NET) or perhaps look at a larger application framework (such as Prism) that helps with modularization and dynamic loading.

Lazy Instantiation
Our sample shows "eager loading" -- that is, we're creating all of our objects up-front (whether we use them immediately or not).  This is probably not the best approach for an application of any significant size.  Again, the sample is purposefully simple so that we can concentrate on the core DI topics.

But we don't have to create our objects at the beginning.  For example, when using Unity, we just need to put our registrations somewhere before we use the objects.  The most convenient place is to register the objects that we need in the composition root / bootstrapper.

Registration does not instantiate any of these objects -- it simply puts them in a catalog so that the Unity container knows where to find the concrete types when it needs them.  The objects are created when we start to resolve the objects from the container.  Our sample container code shows a "Resolve" of the MainWindow right in the composition root.  This has the effect of instantiating all of the required objects (the Repository, ViewModel, and View).
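
As a minimal sketch of that flow (using Unity's container API; the type names are from the sample):

    // using Microsoft.Practices.Unity;
    var container = new UnityContainer();
    container.RegisterType<IPersonRepository, PersonServiceRepository>();

    // Resolving the View instantiates the whole chain:
    // Repository -> ViewModel -> View.
    var mainWindow = container.Resolve<MainWindow>();
    mainWindow.Show();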

But we could resolve the View only when we are about to use it.  In that case, none of the objects are instantiated up front.  They get loaded as we need them.

This is a great approach, especially if you are doing modular programming.  And this is exactly what we did in a recent project that uses Prism.  In that project, we had a number of isolated modules (each module contained the Views and ViewModels for a particular function).  When a module is created, it registers all of its types with the Unity container.  Then when a particular View is requested (through the Navigation features of Prism), that View is resolved from the container.  At that point, the View is instantiated (along with any dependencies).

Lots of Options
There are a lot of options for us to choose from.  Ultimately, which direction we go depends on the requirements of our application.  If we don't need the flexibility of swapping out components at runtime, then we can use compile-time configuration (which still gives us the advantages of extensibility and testability).  If we have a large number of independent functions, then we can look at creating isolated modules that only create objects as they are needed.

Remember that Dependency Injection is really a set of patterns that allows us to write loosely-coupled code.  The specific implementation is up to us.  There are plenty of examples as well as tools and frameworks to choose from.  Take a quick look to see which ones will fit in best with your environment.

Happy Coding!

Sunday, November 25, 2012

Drawbacks to Abstraction

Programming to abstractions is an important principle to keep in mind.  I have shown in a number of demos (including IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces, Dependency Injection: A Practical Introduction, and T, Earl Grey, Hot: Generics in .NET) how programming to abstractions can make our code more extensible, maintainable, and testable.

Add Abstraction As You Need It
I am a big fan of using abstractions (specifically Interfaces).  But I'm also a proponent of adding abstractions only as you need them.

As an example, over the past couple of weeks, I've been working on a project that contained a confirmation dialog.  The dialog itself was pretty simple -- just a modal dialog that let you add a message, text for the OK and Cancel buttons, and a callback for when the dialog was closed.  To make things more interesting, since the project uses MVVM, we end up making the call to this dialog from the View Model through an Interaction Trigger.  What this creates is a couple of layers of code in order to maintain good separation between the UI (the View) and the Presentation Logic (in the View Model).

This confirmation dialog code has been working for a while, but it had a drawback -- it wasn't easily unit testable due to the fact that the interaction trigger called into the View layer.  The result is that as I added confirmation messages to various parts of the code, the unit tests would stop working.

I was able to fix this problem by adding an interface (IConfirmationDialog) that had the methods to show the dialog and fire the callback.  I used Property Injection for the dialog so that I would have the "real" dialog as the default behavior.  But in my unit tests, I injected a mock dialog based on the interface.  This let me keep my unit tests intact without changing the default run-time behavior of the dialog.  If you'd like to see a specific example of using Property Injection with mocks and unit tests, see the samples in Dependency Injection: A Practical Introduction.
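
The general shape of that solution looks like this (a sketch; the member names are illustrative):

    public interface IConfirmationDialog
    {
        void Show(string message, string okText, string cancelText,
                  Action<bool> closedCallback);
    }

    // In the ViewModel -- Property Injection with a default:
    private IConfirmationDialog _confirmationDialog;
    public IConfirmationDialog ConfirmationDialog
    {
        get
        {
            // Default to the "real" dialog at run time...
            if (_confirmationDialog == null)
                _confirmationDialog = new ConfirmationDialog();
            return _confirmationDialog;
        }
        // ...but let unit tests inject a mock.
        set { _confirmationDialog = value; }
    }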

The Drawbacks
When selecting the right tool for the job, we need to know the strengths and the weaknesses of each tool.  This lets us make intelligent choices to confirm that the benefits outweigh the drawbacks for whatever tool we choose.  When working with abstractions and interfaces, this is no different.

We've talked about the advantages of using abstraction, but what about the drawbacks?  The most obvious is complexity.  Whenever we add another layer to our code, we make it a bit more complex to navigate.  Let's compare two examples (taken from the Dependency Injection samples).

Navigation With a Concrete Type
We'll start by looking at a class that uses a concrete type (this is found in WithoutDependencyInjection.sln in the NoDI.Presentation project).
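
In sketch form (the class name follows the sample; the Execute body is condensed):

    public class MainWindowViewModel
    {
        public PersonServiceRepository Repository { get; set; }

        public void Execute()
        {
            // F12 on GetPeople takes us straight to the concrete implementation.
            var people = Repository.GetPeople();
            // ... populate the UI-bound collection ...
        }
    }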


Here, we have a property (Repository) that is using a concrete type: PersonServiceRepository.  Then down in the Execute method, we call the GetPeople method of this object.  If we want to navigate to this method, we just put the cursor in GetPeople and then hit F12 (or right-click and select "Go To Definition").  This takes us to the implementation code in the PersonServiceRepository class:
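
In sketch form (the service client field name is an assumption):

    public IEnumerable<Person> GetPeople()
    {
        // Pass the call through to the WCF service.
        return _serviceClient.GetPeople();
    }
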
This implementation code is fairly simple since it passes the call through to a WCF service, but we can see the actual implementation of the concrete type.

Navigation With An Interface
If we use an interface rather than a concrete type, then navigation gets a little trickier.  The following code is in DependencyInjection.sln in the DI.Presentation project:
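
Again in sketch form -- only the property type changes:

    public class MainWindowViewModel
    {
        public IPersonRepository Repository { get; set; }

        public void Execute()
        {
            // Exactly the same call as before.
            var people = Repository.GetPeople();
        }
    }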


This code is very similar to the previous sample.  The main difference is that the Repository variable now refers to an interface (IPersonRepository) rather than the concrete type.  You can see that the code in the Execute method is exactly the same.

But if we try to navigate to GetPeople using F12 (as we did above), we get quite a different outcome:

Instead of being navigated to an implementation, we are taken to the method declaration in the interface.  In most situations, this is not what we are looking for.  Instead, we want to see the implementation.  Since we have decoupled our code, Visual Studio cannot easily determine the concrete type, so we are left with only the abstraction.

Mitigating the Navigation Issue
We can mitigate the navigation issue by just doing a search rather than "Go To Definition".  For this, we just need to double-click on "GetPeople" (so that it is highlighted), and then use Ctrl+Shift+F (which is the equivalent of "Find in Files").  This brings up the search box already populated:


If we click "Find All", then we can see the various declarations, implementations, and calls for this method:


This snippet shows the interface declaration (IPersonRepository) as well as 2 implementations (PersonCSVRepository and PersonServiceRepository).  If we double-click on the PersonCSVRepository, then we are taken to that implementation code:
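
A sketch of what that CSV implementation might look like (the file name and format are assumptions):

    // using System.IO;
    public IEnumerable<Person> GetPeople()
    {
        // Each line of the file holds one person: "FirstName,LastName".
        foreach (var line in File.ReadAllLines("People.txt"))
        {
            var fields = line.Split(',');
            yield return new Person { FirstName = fields[0], LastName = fields[1] };
        }
    }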


This code shows the GetPeople implementation for reading from a CSV file.  So, there are a few extra steps to finding the implementation we were looking for, but if we know our tools (namely, Visual Studio) and how to use them, then we can still get the information we need.

There are other options as well.  Instead of using "Find In Files", we can use "Find All References" (Ctrl+K, R) which will give us similar (but not exactly the same) results as a search.  Also, if you have a refactoring tool installed (like RefactorPro or Resharper), then you most likely have an even faster way to search for a particular method.  It always pays to have a good understanding of your development environment.

Wrap Up
Everything has a cost.  In the case of programming to abstractions, that cost is added complexity and indirection when moving through source files.  The benefits include extensibility, maintainability, and testability.

As mentioned, I'm a big fan of adding abstractions as you need them.  Once you become familiar with your environment (the types of problems you are trying to solve and the parts that are likely to change), then you can make better up-front decisions about where to add these types of abstractions.  Until then, start with the concrete types and add the abstractions when needed.  Note: many times you may find yourself needing abstractions immediately in order to create good unit tests, but this will also vary depending on the extent of your testing.

For more information on finding the right level of abstraction, refer to Abstraction: The Goldilocks Principle. In many environments, the benefits will outweigh the costs, but we need to make a conscious decision about where abstractions are appropriate in each particular situation.

Happy Coding!

Sunday, October 21, 2012

Unit Testing: A Journey

In the last article (Abstraction: The Goldilocks Principle), we took a look at some steps for determining the right level of abstraction for an application.  We skipped a big reason why we might want to add abstractions to our code: testability.  Some developers say that unit testing is optional (I used to be in this camp).  But most developers agree that in order to be a true professional, we must unit test our code (and this is where I am now).

But this leads us to a question: Should we modify our perfectly good code in order to make it easily testable?  I'll approach this from another direction: one of the qualities of "good code" is code that is easily testable.

We'll loop back to why I take this approach in just a moment.  First, I want to describe my personal journey with unit testing.

A Rocky Start
Several years back, I didn't think that I needed unit testing.  I was working on a very agile team of developers (not formally "big A" Agile, but we self-organized and followed most of the Agile principles).  We had a very good relationship with our end users, we had a tight development cycle (which would naturally vary depending on the size of the application), we had a good deployment system, and we had a quick turnaround on fixes after an application was deployed.  With a small team, we supported a large number of applications (12 developers with 100 applications) that covered every line of business in the company.  In short, we were [expletive deleted] good.

We weren't using unit testing.  And quite honestly, we didn't really feel like we were missing anything by not having it.  But companies reorganize (like they often do), and our team became part of a larger organization of five combined teams.  Some of these other teams did not have the same level of productivity that we had. Someone had the idea that if we all did unit testing, then our code would get instantly better.

So, I had a knee-jerk reaction (I had a lot of those in the past; I've mellowed a lot since then).  The problem is that unit testing does not make bad code into good code.  Unit testing was being proposed as a "silver bullet".  So, I fought against it -- not because unit testing is a bad idea, but because it was presented as a cure-all.

Warming Up
After I have a knee-jerk reaction, my next step is to research the topic to see if I can support my position.  And my position was not that unit testing was bad (I knew that unit testing was a good idea); my position was that unit testing does not instantly make bad code into good code.  So, I started reading.  One of the books I read was The Art of Unit Testing by Roy Osherove.  This is an interesting read, but the book is not designed to sell you on the topic of unit testing; it is designed to show you different techniques and assumes that you have already bought into the benefits of unit testing.

Not a Silver Bullet
I pulled passages out of the book that confirmed my position: that if you have developers who are writing bad code, then unit testing will not fix that.  Even worse, it is very easy to write bad unit tests (that pass) that give you the false impression that the code is working as intended.

But Good Practice
On a personal basis, I was looking into unit testing to see if it was a tool that I wanted to add to my tool box.  I had heard quite a few really smart developers talk about unit testing and Test Driven Development (TDD), so I knew that it was something that I wanted to explore further.

TDD is a topic that really intrigued me.  It sounded really good, but I was skeptical about how practical it was to follow TDD in a complex application -- especially one that is using a business object framework or application framework that supplies a lot of the plumbing code.

Actual Unit Testing
I've since moved on and have used unit testing on several projects.  Now that I am out of the environment that caused the knee-jerk reaction, I fully embrace unit testing as part of my development activities.  And I have to say that I see the benefits frequently.

Unit Tests have helped me to manage a code base with complex business rules.  I have several hundred tests that I run frequently.  As I refactor code (break up methods and classes; rearrange abstractions; inject new dependencies), I can make sure that I didn't alter the core functionality of my code.  A quick "Run Tests" shows that my refactoring did not break anything.  When I add new functionality, I can quickly make sure that I don't break any existing functionality.

And there are times when I do break existing functionality.  But when I do, I get immediate feedback; I'm not waiting for a tester to find a bug in a feature that I've already checked off.  Sometimes, the tests themselves are broken and need to be fixed (for example, a new dependency was introduced in the production code that the test code is not aware of).

I'm not quite at TDD yet, but I can feel myself getting very close.  There are a few times that I've been writing unit tests for a function that I just added.  As I'm writing out the tests, I go through several test cases and sometimes come across a scenario that I did not code for.  In that case, I'll go ahead and write the test (knowing that it will fail), and then go fix the code.  So, I've got little bits of TDD poking through my unit testing process.  But since I'm writing the code first, I do end up with some areas that are difficult to unit test.  I usually end up refactoring these areas into something more testable.  With a test-first approach, I might have gotten it "right" the first time.

Over the next couple weeks, I'll be diving into a personal project where I'm going to use TDD to see how it will work in the types of apps that I normally build.

One thing to note: my original position has not changed.  Unit Testing is a wonderful tool, and I believe that all developers should be unit testing at some level (hopefully, a level that covers the majority of the code).  But unit testing will not suddenly fix broken code nor will it fix a broken development process.

TDD
Now that I've seen the benefits of unit testing firsthand, I've become a big proponent of it.  And even though I didn't see the need for it several years back, I look back now and see how it could have been useful in those projects.  This would include everything from verifying that automated messages are sent out at the right times to ensuring that business rule parameters are applied properly to verifying that a "cleaning" procedure on user data was running properly.  With these types of tests in place, I could have made feature changes with greater confidence.

Advantages
When a developer is using TDD, he/she takes the Red-Green-Refactor approach.  There are plenty of books and articles that describe TDD, so I won't go into the details.  Here's the short version: The first step (Red) is to write a failing test based on a function that is required in the application.  The next step (Green) is to write just enough code for the test to pass.  The last step (Refactor) comes once you're a bit further into the process, but it is the time to re-arrange and/or abstract the code so that the pieces fit together.

The idea is that we only write enough code to meet the functionality of our application (and no more).  This gets us used to writing small units of code.  And when we start composing these small units of code into larger components, we need to maintain the testability (to make sure our tests keep working).  This encourages us to stay away from tight coupling and putting in abstraction layers as we need them (and not before).  In addition, we'll be more likely to write small classes that work in isolation.  And when we do need to compose the classes, we can more easily adhere to the S.O.L.I.D. principles.
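
As a tiny illustration of a single Red-Green cycle (using NUnit; the class under test is made up for this example):

    using System.Collections.Generic;
    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class OrderTests
    {
        [Test]
        public void Total_WithTwoItems_ReturnsSumOfPrices()
        {
            // Red: this test fails until Order.Total exists and works.
            var order = new Order();
            order.AddItem(2.00m);
            order.AddItem(3.00m);
            Assert.AreEqual(5.00m, order.Total);
        }
    }

    // Green: just enough code to make the test pass.
    public class Order
    {
        private readonly List<decimal> _prices = new List<decimal>();
        public void AddItem(decimal price) { _prices.Add(price); }
        public decimal Total { get { return _prices.Sum(); } }
    }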

Back to the Question
So this brings us back to the question we asked at the beginning: Should we modify our perfectly good code in order to make it easily testable?  And the answer is that if we write our tests first, then well-abstracted testable code will be the natural result.

I've done quite a bit of reading and other study of Unit Testing.  One of the things that I didn't like about some of the approaches is when someone "cracked open" a class in order to test it.  Because the unit they wanted to test was inaccessible (perhaps due to a "protected" access modifier), they wrote a test class that wrapped the code in order to make those members visible.  I never liked the idea of writing test classes that wrapped the production classes -- this means that you aren't really testing your production code.

However, if we take the approach of test-first development (with TDD or another methodology), then the classes we build will be inherently testable.  In addition, if a member needs to be injected or exposed for testing, it grows more organically with the corresponding tests.

Abstractions
And back to the last article about abstractions: if we approach our code from the test side, we will add abstractions naturally as we need them.  If we need to compose two objects, we can use interfaces to ensure loose-coupling and make it easier to mock dependencies in our tests.  If we find that we need to inject a method from another class, we can use delegates (and maybe the Strategy pattern) to keep the separation that we need to maintain our tests.

Wrap Up
Whole books have been written on unit testing (by folks with much more experience than I have).  The goal here was to describe my personal journey toward making unit tests a part of my regular coding activities.  I fought hard against it in one environment; this made me study hard so that I understood the domain; this led to a better understanding of the techniques and pitfalls involved.  There are three or four key areas that I have changed my thinking on over the years (one of which is the necessity of unit testing).  When I change my mind on a topic, it's usually due to a deep-dive investigation and much thought.  Unit testing was no different.

You may be working in a group that is not currently unit testing.  You may also think that you are doing okay without it.  You may even think that it will take more time to implement.  I've been there.  Start looking at the cost/benefit studies that have been done on unit testing.  The reality is that overall development time is reduced -- remember, development is not simply slapping together new code; it is also maintenance and bug-hunting.  I wish that I had done this much sooner; I know that my past code could have benefited from it.

My journey is not yet complete; it will continue.  And I expect that I will be making the transition to TDD for my own projects very soon. 

Maybe the other developers on your team don't see the benefits of unit testing, and maybe your manager sees it as an extension to the timeline.  It's easy to just go along with the group on these things.  But at some point, if we truly want to excel as developers, we need to stop being followers and start being leaders.

Happy Coding!

Saturday, October 20, 2012

Abstraction: The Goldilocks Principle

Abstractions are an important part of object-oriented programming.  One of the primary principles is to program to an abstraction rather than a concrete type.  But that leads to a question: What is the right level of abstraction for our applications?  If we have too much abstraction, then our applications can become difficult to understand and maintain.  If we have too little abstraction, then our applications can become difficult to understand and maintain.  This means that there is some "sweet spot" right in the middle.  We'll call this the Goldilocks level of abstraction: "Just Right."

So, what is the right level?  Unfortunately, there are no hard-and-fast rules that are appropriate for all development scenarios.  Like so many other decisions that we have to make in software development, the correct answer is "it depends."  No one really likes this answer (especially when you're just getting started and are looking for guidance), but it's the reality of our field.

Here are a few steps that we can use to get started.

Step 1: Know Your Tools
The first step to figuring out what abstractions to use is to understand what types of abstractions are available.  This is what I focus on in my technical presentations.  I speak about delegates, design patterns, interfaces, generics, and dependency injection (among other things).  You can get information on these from my website: http://www.jeremybytes.com/Demos.aspx.  The goal of these presentations is to provide an overview of the technologies and how they are used.  This includes examples such as how to use delegates to implement the Strategy pattern or how to use interfaces with the Repository and Decorator patterns.

We need to understand our tools before we can use them.  I try to show these technologies in such a way that if we run into them in someone else's code, they don't just look like magic incantations.  If we can start looking at other people's code, we can get a better feel for how these tools are used in the real world.

A while back, I wrote an article about this: Design Patterns: Understand Your Tools.  Although that article pertains specifically to design patterns, the principle extends to all of the tools we use.  We need to know the benefits and limitations of everything in our toolbox.  Only then can we make an informed choice.

I remember seeing a woodworking show on television where the host used only a router (a carpentry router, not a network router).  He showed interesting and unintended uses for the router as he used it as a cutting tool and a shaping tool and a sanding tool.  He pushed the tool to its limits.  But at the same time, he was limiting himself.  He could use it as a cutting tool, but not as effectively as a band saw or a jigsaw or a table saw.  I understand why this may be appealing: power tools are expensive.  But in the software world, many of the "tools" that we use are simply ideas (such as design patterns or delegates or interfaces).  There isn't a monetary investment required, but we do need to make a time investment to learn them.

Step 2: Know Your Environment
The next step is to understand your environment.  Whether we are working for a company writing business applications, doing custom builds for customers, or writing shrink-wrap software, we need to understand our users and the system requirements.  We need to know which things are likely to change and which are not.

As an example, I worked for many years building line-of-business applications for a company that used Microsoft SQL Server.  At that company, we always used SQL Server.  For all 20 or so of the applications that I worked on, the data store was SQL Server.  Because of this, we did not spend time abstracting the data access code so that it could easily use a different data store.  Note: we did have proper layering and isolation of the data access code (meaning that all of our database calls were isolated to specific data access methods in a specific layer of the application).

On the other hand, I worked on several applications that used business rules for processing data.  These rules were volatile and would change frequently.  Because of this, we created rule interfaces that made it very easy to plug in new rule types.
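
The shape of those rule interfaces was roughly as follows (a sketch with hypothetical names, not the actual production code):

    using System.Collections.Generic;

    // Hypothetical sketch of a plug-in rule design.
    public class DataRecord
    {
        public decimal Amount { get; set; }
    }

    // Each volatile business rule lives behind a common interface.
    public interface IProcessingRule
    {
        bool AppliesTo(DataRecord record);
        void Apply(DataRecord record);
    }

    // Adding a new rule type means adding a class; the pipeline doesn't change.
    public class DiscountRule : IProcessingRule
    {
        public bool AppliesTo(DataRecord record) { return record.Amount > 100m; }
        public void Apply(DataRecord record) { record.Amount *= 0.9m; }
    }

    public class RuleProcessor
    {
        private readonly IEnumerable<IProcessingRule> _rules;

        public RuleProcessor(IEnumerable<IProcessingRule> rules)
        {
            _rules = rules;
        }

        public void Process(DataRecord record)
        {
            foreach (var rule in _rules)
            {
                if (rule.AppliesTo(record))
                {
                    rule.Apply(record);
                }
            }
        }
    }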

I should mention that these applications could have benefited from abstraction of the database layer to facilitate unit testing.  We were not doing unit testing.  In my next article, I will talk more about unit testing (and my particular history with it), and why it really is something we should all be doing.

Step 3: Learn the Smells
A common term that we hear in the developer community is "code smells".  Recognizing them comes mostly with experience.  As a developer, you look at a bit of code and something doesn't "smell" right -- it just feels off.  Sometimes you can't put your finger on anything specific; there's just something that makes you uncomfortable.

There are a couple of ways to learn code smells.  The preferred way is through mentorship.  Find a developer with more experience than you and learn from him/her.  As a young developer, I had access to some really smart people on my development team.  And by listening to them, I saved myself a lot of pain over the years.  If you can learn from someone else, then be sure to take advantage of it.

The less preferred (but very effective) way of learning code smells is through trial and error.  I had plenty of this as a young developer as well.  I took approaches to applications that I later regretted.  And in that environment, I got to live with that regret -- whenever we released a piece of software, we also became primary support for that software.  This is a great way to encourage developers to produce highly stable code that is really "done" before release.  While these applications were fully functional from a user standpoint, they were more difficult to maintain and extend than I would have liked.  But that's another reality of software development: constant learning.  If we don't look at code that we wrote six months ago and say "What was I thinking?", then we probably haven't learned anything in the meantime.

Step 4: Abstract as You Need It
I've been burned by poorly designed applications in the past -- abstractions that added complexity to the application without much (if any) benefit.  As a result, my instinct is to lean toward low abstraction as a starting point.  So I was happy to come across the advice to "add abstraction as you need it".  This is an extremely good approach if you don't (yet) know the environment or what things are likely to change.

As an example, let's go back to database abstractions.  It turns out that while working at the company that used SQL Server, I had one application that needed to move from SQL Server to Oracle.  The Oracle database was part of a third-party vendor product.  For this application, I added a fully-abstracted repository that could talk to either SQL Server or the Oracle database.  But I did this only for the one application that needed it.
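
The shape of that abstraction was roughly as follows (a sketch with hypothetical names, not the original code):

    using System;

    // Hypothetical sketch of the repository abstraction.
    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
    }

    // The application is written against the abstraction...
    public interface IOrderRepository
    {
        Order GetOrder(int orderId);
        void Save(Order order);
    }

    // ...and each data store supplies its own implementation.
    public class SqlServerOrderRepository : IOrderRepository
    {
        // The ADO.NET calls against SQL Server go here.
        public Order GetOrder(int orderId) { throw new NotImplementedException(); }
        public void Save(Order order) { throw new NotImplementedException(); }
    }

    public class OracleOrderRepository : IOrderRepository
    {
        // The same operations against the vendor's Oracle database.
        public Order GetOrder(int orderId) { throw new NotImplementedException(); }
        public void Save(Order order) { throw new NotImplementedException(); }
    }

The rest of the application picks an implementation at startup and never knows (or cares) which database it is talking to.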

Too Much Abstraction
As mentioned above, too much abstraction can make an application difficult to understand and maintain.  I encountered an application that had a very high level of abstraction.  There were new objects at each layer (even if an object had exactly the same properties as one in another layer).  The result was that if someone wanted to add a new field to the UI (and have it stored in the database), the developer needed to modify 17 different code files.  In addition, much of the application was wired up at runtime (rather than at compile time), meaning that if you missed a change to a file, you didn't find out about it until you got a runtime error.  And since the files were extremely decoupled, it was very difficult to hook up the debugger to the appropriate assemblies.

Too Little Abstraction
At the other end of the spectrum, too little abstraction can make an application difficult to understand and maintain.  Another application that I encountered had 2600 lines of code in a single method.  It was almost impossible to follow the logic (lots of nested if/else conditions in big blocks of code), and figuring out the proper place to make a change was extremely difficult.

"Just Right" Abstraction
My biggest concern as a developer is finding the right balance -- the Goldilocks Principle: not too much, not too little, but "just right".  I've been programming professionally for 12 years now, so I've had the benefit of seeing some really good code and some really bad code (as well as writing some really good code and some really bad code).

Depending on what kind of development work we do, we can end up spending a lot of time supporting and maintaining someone else's code.  When I'm writing code, I try to think of the person who will be coming after me.  And I ask myself a few key questions.  Will this abstraction make sense to someone else?  Will this abstraction make the code easier or harder to maintain?  How does this fit in with the approach used in the rest of this application?  If you are working in a team environment, don't be afraid to grab another developer and talk through a couple of different options.  Having another perspective can make the decision a lot easier.

The best piece of advice I've heard that helps me write maintainable code: Always assume that the person who has to maintain your code is a homicidal maniac who knows where you live.

One other area that will impact how much abstraction you add to your code is unit testing.  Abstraction often helps us isolate code so that we can make more practical tests.  I'll be putting down my experiences and thoughts regarding unit testing in the next article.  Until then...

Happy Coding!

Sunday, September 30, 2012

Session Spotlight - IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces

I'll be speaking at So Cal Code Camp (Los Angeles) on October 13 & 14. If you're not signed up yet, head over to the website to let them know you're coming: http://www.socalcodecamp.com.

Back again: IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces.

Abstraction Through Interfaces
When people talk about interfaces, they often refer to the User Interface -- the screens and controls that allow the user to interact with the application.  But "interface" also refers to a very powerful abstraction that lets us add extensibility and loose coupling to our code.
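
The classic .NET example is IEnumerable: foreach is written against the interface, not against any concrete collection type.  Here's a minimal sketch:

    using System;
    using System.Collections.Generic;

    public static class Report
    {
        // Print depends only on the abstraction (IEnumerable<string>)...
        public static void Print(IEnumerable<string> lines)
        {
            foreach (var line in lines)
            {
                Console.WriteLine(line);
            }
        }
    }

    // ...so any implementation works: arrays, lists, iterator methods.
    //   Report.Print(new[] { "alpha", "beta" });
    //   Report.Print(new List<string> { "gamma" });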

My first encounter with interfaces was in Delphi (Object Pascal) as a junior developer.  I understood what they were from a technical standpoint, but I didn't understand why I would want to use them.  We went to the Borland conference every year, and as a new developer, I took the opportunity to absorb as much information as I could, even if I didn't understand most of it (this is also a great way to use Code Camp -- to experience technologies you haven't taken the time to look into yet).  I was very excited because there was a session on interfaces at the conference.

"Great," I thought, "I'll go and get some practical examples of how to use interfaces and find out why I would want to use them."  So, I get to the session, sit down, and grab my notepad -- ready to spend the next hour getting a practical introduction.

The speaker gets up and starts the presentation.  "So, let's say that we have a Foo class.  And we also have an IBar interface."

"Noooooooooooooooo!" I screamed (well, screamed inwardly anyway).  "I need practical examples.  You can't use Foo / Bar / Baz examples."  But that's the way it was, and I didn't get anything new out of the session.  (I also talked to some of my senior developer coworkers who attended, and they didn't get anything out of it either.)

It was several more years before I had a good grounding in object-oriented design and the hows and whys of abstraction.  The goal of "IEnumerable, ISaveable, IDontGetIt" is to give you a jump-start on understanding interfaces.  We use real examples from the .NET Framework and from actual application abstractions that I have coded in my professional career.

In addition to the So Cal Code Camp on Oct 13/14, I'll also be presenting this session at the Desert Code Camp (Phoenix) on Nov 17.  And if you're quick, you can also see me present this information this week at a couple of user groups.

Hope to see you at an upcoming event.

Happy Coding!

Sunday, August 19, 2012

Dependency Injection: How Do You Find the Balance?

As a developer, I am constantly trying to find the right balance -- to figure out the right level of abstraction for the current project.  If we add too much (unneeded) abstraction, we end up with code that may be more difficult to debug and maintain.  If we add too little (needed) abstraction, we end up with code that is difficult to extend and maintain.  Somewhere in-between, we have a good balance that leads to the optimum level of maintainability for the current environment.

Technique 1: Add It When You Need It
I'm a big fan of not adding abstraction until you need it.  This is a technique that Robert C. Martin recommends in Agile Principles, Patterns, and Practices in C# (Amazon Link).  It is also my default reaction to abstractions in code -- primarily because I've been burned by some really badly implemented abstractions in the past.  After dealing with abstractions that didn't add benefit to the application (and only complicated maintenance), my kneejerk reaction is to not add abstractions until they are really necessary.

This is not to say that abstractions are bad.  We just need to make sure that they are relevant to the code that we are building.  The bad implementations that I've run across have generally been the result of what I call "white paper architecture".  This happens when the application designer reads a white paper on how to architect an application and decides to implement it without considering the specific implications in the environment.  I'll give 2 examples.

Example 1: I ended up as primary support on an application that made use of base classes for the forms.  In itself, this isn't a bad thing.  The problem was in the implementation.  If you did not use the base class, then the form would not work at all.  This led to much gnashing of teeth.  In a more useful scenario, a base class would add specific functionality.  But if the base class was not used, the form would still work (just without the extra features).
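
A sketch of the more useful scenario (hypothetical names; the point is that the base class adds features without being required):

    using System.Diagnostics;
    using System.Windows.Forms;

    // Hypothetical sketch: the base class layers optional functionality
    // on top of a standard Form.
    public class AuditedForm : Form
    {
        protected void LogAction(string action)
        {
            Trace.WriteLine(Name + ": " + action);
        }
    }

    // Forms that derive from it pick up the extra features...
    public class CustomerForm : AuditedForm { }

    // ...but a plain Form still works; it just lacks the logging.
    public class QuickDialog : Form { }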

Example 2: I helped someone out on another project (fortunately, I didn't end up supporting this application myself).  This application was abstracted out too far.  In order to add a new field (meaning, from the data store to the screen), it was necessary to modify 17 files (from data storage, through ORM, objects on the server side, DTOs on the server side, through the service, DTOs on the client side, objects on the client side, to the presentation layer).  And unfortunately, if you missed a file, it did not result in a compile-time error; it would show up as a run-time error.

After coming across several applications like these, I've adopted the YAGNI principle (You Aren't Gonna Need It).  If you do need it later, then you can add it.

Technique 2: Know Your Environment
Unfortunately, Technique 1 isn't always feasible.  It is often time-consuming to go back into an application and add the abstractions as you need them.  When we're asked as developers to keep a specific delivery velocity, we're not often given time to go back and refactor things later.  So, a more practical option comes with experience: know the environment that you're working in.

As an example, for many years I worked in an environment that used Microsoft SQL Server.  That was our database platform, and every application that we built used SQL Server.  Because of this, I didn't spend time doing a full abstraction of the data layer.  This doesn't mean that I had database calls sprinkled through the code.  What it means is that I had a logical separation of the database calls (meaning that DB calls were only made in specific parts of the library) but didn't have a physical separation (for example, with a repository interface that stood in front of the database).
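
Roughly, the difference looks like this (a hypothetical sketch, not code from those applications):

    using System;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Logical separation: every database call lives in a dedicated class;
    // nothing else in the application touches the database directly.
    public class CustomerData
    {
        public Customer GetCustomer(int id)
        {
            // All of the SqlConnection / SqlCommand code is isolated here.
            throw new NotImplementedException();
        }
    }

    // Physical separation would put an interface in front of it:
    //   public interface ICustomerData { Customer GetCustomer(int id); }
    // We only added that layer to the one application that needed it.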

Was this bad design?  I know several folks who would immediately say "Yes, that's terrible design."  But I would argue that it was good design for the application environment.

Out of the 20 or so applications that I built while at that company, a grand total of one application needed to support a different database (an application that pulled data from a vendor product that used an Oracle backend).  For that one application, I added a database abstraction layer (this was actually a conversion -- the vendor product was originally using SQL Server and was changed to Oracle during an upgrade).  So what makes more sense?  To add an unused abstraction to 20 applications?  Or to add the necessary abstraction to the one application that actually needed it?

Now, if I was building an application for a different environment that needed to support different data stores (such as software that would be delivered to different customer sites), I would design things much differently.  You can see a simple example of how I would design this here: IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces.

Unfortunately, this type of decision can only be made if you know your environment well.  It usually takes years of experience in that environment to know which things are likely to change and which things are likely to stay the same.  When you walk into a new environment, it can be very difficult to figure out how to make these distinctions.

Dependency Injection: Getting Things "Just Right"
My current project is a WPF project using Prism.  Prism is a collection of libraries and guidance around building XAML applications, and a big part of that guidance is around Dependency Injection (DI).  I've been doing quite a bit of programming (and thinking) around dependency injection over the last couple months, and I'm still trying to find the balance -- the Goldilocks "Just Right" between "Too Loosely Coupled" and "Too Tightly Coupled".

Did I just say "Too Loosely Coupled"?  Is that even possible? We're taught that loose coupling is a good thing -- something we should always be striving for.  And I would venture to guess that there are many developers out there who would say that there's no such thing as "too loosely coupled."

But the reason that loose coupling is promoted so highly is that our problem is usually the opposite -- the default state of application developers is to have tight coupling.  Loose coupling is encouraged because it's not our instinctual reaction.

I'm currently reading Mark Seemann's Dependency Injection in .NET (Amazon Link).  This is an excellent book (disclaimer: I've only read half of it so far, but I don't expect that my evaluation will change).  Seemann describes many of the patterns and anti-patterns in Dependency Injection along with the benefits and costs (which helps us decide when/where to use specific patterns).

An important note: Seemann specifically says that the sample application that he shows will be more complicated than most DI samples he's seen.  He does this because DI doesn't make sense in a "simple" application; the value really shines in complex applications that have many functions that should be broken out.  With the functions broken out into separate classes, it makes sense to make sure that the classes are loosely coupled so that we can add/remove/change/decorate implementations without needing to modify all of our code.  This means that not all applications benefit from DI; the benefits come once we hit a certain level of complexity.

So, now we have to decide how much dependency injection is "Just Right".  As an example, Seemann describes the Service Locator as an anti-pattern.  But Prism has a built-in Service Locator.  So, should we use the Prism Service Locator or not?  And that's where we come back to the balance of "it depends."

In the application I'm working on, we are using the Service Locator pattern, and it seems to be working well for those parts of the library.  I have run into a few interesting issues (specifically when writing unit tests for these classes), and it turns out that Seemann points out exactly the issues that I've been thinking about.

I don't really have time to go into the details here.  As an example, when using the Service Locator, it is difficult to see the specific dependencies for a class.  As we modify modules, unit tests sometimes break because a new dependency was added -- the Service Locator resolves it at runtime, so the code still compiles.  We then need to update our unit tests by adding/mocking the new dependency.
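
The contrast looks roughly like this (a hypothetical sketch -- the bare-bones locator below stands in for the real one):

    using System;
    using System.Collections.Generic;

    public interface ILogger { void Log(string message); }

    // Hypothetical stand-in for a real service locator.
    public static class MyServiceLocator
    {
        private static readonly Dictionary<Type, object> _services =
            new Dictionary<Type, object>();

        public static void Register<T>(T instance) { _services[typeof(T)] = instance; }
        public static T GetInstance<T>() { return (T)_services[typeof(T)]; }
    }

    // With a service locator, the dependency is hidden inside the class.
    // Adding another GetInstance call still compiles, but a test that
    // doesn't register the new service fails at runtime.
    public class OrderViewModel
    {
        public void Save()
        {
            var logger = MyServiceLocator.GetInstance<ILogger>();
            logger.Log("Saving order");
        }
    }

    // With constructor injection, the dependency is part of the signature,
    // so tests (and callers) can see exactly what the class needs.
    public class InvoiceViewModel
    {
        private readonly ILogger _logger;

        public InvoiceViewModel(ILogger logger)
        {
            _logger = logger;
        }

        public void Save()
        {
            _logger.Log("Saving invoice");
        }
    }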

[Editor's Note: I've published an article talking more about the pros and cons of the Service Locator pattern: Dependency Injection: The Service Locator Pattern.]

As with everything, there are pros and cons.  For the time being, I'm content with using the Service Locator for our application.  There are some "gotchas" that I need to look out for (but that's true with whatever patterns I'm using).  Seemann also notes that he was once a proponent of Service Locator and moved away from it after he discovered better approaches that would eliminate the disadvantages that he was running across.  It may be that I come to that same conclusion after working with Service Locator for a while.  Time will tell.

How Do You Do Dependency Injection?
Now it's time to start a conversation.  How do you use Dependency Injection?  What has worked well for you in different types of applications and environments?  Do you have any favorite DI references / articles that have pointed you in a direction that works well for you?

As an aside, Mark Seemann's book has tons of reference articles -- most pages have some sort of footnote referring to a book or article on the topic.  It is evident that Seemann has researched the topic thoroughly.  I'm going to try to read through as many of these references as I can find time for.

Drop your experiences in the comments, and we can all learn from each other.

Happy Coding!