Monday, August 27, 2012

Steal My Windows 8 Idea: Share With Grandma

A couple weeks ago, I attended a user group where Danny Warren from InterKnowlogy presented on Windows 8 (Danny is @dannydwarren on Twitter and blogs here).  He showed some really cool Windows 8 contracts including Search and Sharing.  (For more info on Sharing, Danny points to a blog article about sharing here: Windows 8 and the future of XAML: Part 5: More contracts in WinRT/Windows 8.)

My brain puts things together slowly sometimes.  I could tell during the presentation that Sharing was a very important feature.  And over the last week or so, the idea has really settled in, and I'm really looking forward to having that feature available in my day-to-day activities.

Before Windows 8 Sharing
We love to get data from one application to another: in a photo album, we can send a photo to Facebook; in a browser, we can email a story to a friend.  And historically, it has been up to each application to support the sharing. This meant that we were locked into whatever systems the application supported.  I don't know about you, but after I find an application I like, I tend to hang onto it for a while.  And even though it's really cool that my photo album supports sending to MySpace, it's not very relevant anymore.

This leaves me with a hard decision: do I look for another photo album and learn a new interface?  Or do I settle for doing manual uploads to my favorite sharing site?  Neither option is very appealing.  Wouldn't it be cool if I could keep my photo album application and add whatever sharing I want?

Share Target: I Love This Idea
In Windows 8, Microsoft has separated the responsibilities of this process.  The sharing application (a "Share Source") just has to expose some piece of data (whether it's text, an image, a URL, or something else).  Then any application that knows how to consume that data (a "Share Target") can get access to it -- with the user's permission, of course.
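To make that a bit more concrete, here is a rough sketch of what the Share Source side looks like in a C#/XAML Windows 8 app, using the Windows.ApplicationModel.DataTransfer API (the page name and the shared values are placeholders I made up):

// Share Source sketch: the page registers for the DataRequested event
// and supplies whatever data it wants to expose (text, an image, a URI).
using Windows.ApplicationModel.DataTransfer;
using Windows.UI.Xaml.Controls;

public sealed partial class PhotoPage : Page
{
    public PhotoPage()
    {
        this.InitializeComponent();

        // Hook this view up to the Share charm.
        DataTransferManager.GetForCurrentView().DataRequested += OnDataRequested;
    }

    private void OnDataRequested(DataTransferManager sender,
                                 DataRequestedEventArgs args)
    {
        // Describe the shared data; the Share pane uses this to list
        // every installed Share Target that can accept it.
        var request = args.Request;
        request.Data.Properties.Title = "Vacation Photo";
        request.Data.Properties.Description = "A photo from my album";
        request.Data.SetText("Check out this photo from my vacation!");
    }
}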

I think of this as the "Send To" menu that we had in Windows XP.  You could open up the "Send To" folder (under the user profile) and add shortcuts to applications.  I would always add "Notepad" to my Send To options.  That way, I could always right-click on a file, and, regardless of type, I could "Send To" Notepad.  This was great for bypassing the default editor for a particular file.  (As a side note: I've been weaned off of this functionality in Windows 7.  I'm sure there's a way of customizing the Send To menu, but it's not as obvious as it was in XP).

The Share Target takes this a step further.  Now, I can have any number of applications that are designated as Share Targets for photos.  From inside my photo album, I just have to "Share" a photo, and then I get to select from all of the applications that can accept that photo.  This means that 2 years from now when everyone is using a new photo sharing site / social network, I just need to get the latest Share Target application, and my original photo album gets to stay exactly the same.
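The receiving side is just as simple in concept.  A Share Target declares the contract in its package manifest and then handles the activation when the user picks it from the Share pane.  Roughly (and again, this is an illustrative sketch, not production code):

// Share Target sketch: Windows calls OnShareTargetActivated when the
// user picks this app from the Share pane.
using System;
using Windows.ApplicationModel.Activation;
using Windows.ApplicationModel.DataTransfer;
using Windows.ApplicationModel.DataTransfer.ShareTarget;
using Windows.UI.Xaml;

sealed partial class App : Application
{
    protected override async void OnShareTargetActivated(
        ShareTargetActivatedEventArgs args)
    {
        ShareOperation share = args.ShareOperation;

        if (share.Data.Contains(StandardDataFormats.Bitmap))
        {
            // Grab the shared photo; a real app would show some UI here
            // and let the user decide where it goes.
            var imageReference = await share.Data.GetBitmapAsync();
            // ... upload, save, or post the image ...
        }

        // Tell Windows that the share operation is finished.
        share.ReportCompleted();
    }
}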

To me, this is a brilliant idea.  Several of the phone OSes have integrated Facebook, Twitter, or G+ to make it easy to post photos or send updates.  But those are still bound to the OS.  With Windows 8, we can share with whatever we want (assuming someone has written an application for it), and we're not locked down to whatever was popular at the time a particular piece of software was written.

Steal My Idea: Share With Grandma
So, I thought of a good idea for this feature: Share With Grandma.  This would be a Share Target application where you could configure how tech-savvy Grandma is -- from Pony Express to Gadget Granny.  The application would decide how to get the shared information to Grandma based on these settings.

For example, let's say that I want to share a picture with Grandma.  If she has limited technical skills, then the application could send an email attachment.  If she is a little bit more comfortable, then maybe it sends her a link to a Facebook update.  If she's a Gadget Granny, then maybe it posts to a shared photo stream that automatically shows up on her tablet.  The same type of thing could be done for text or URLs.
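Just to show the shape of the idea, here is a purely hypothetical sketch of the routing logic -- none of these types exist anywhere; they're simply what the middle of that Share Target might look like:

// Hypothetical "Share With Grandma" routing based on a saved setting.
public enum GrandmaTechLevel { PonyExpress, Email, Facebook, GadgetGranny }

public class GrandmaShareService
{
    private readonly GrandmaTechLevel _techLevel;

    public GrandmaShareService(GrandmaTechLevel techLevel)
    {
        _techLevel = techLevel;
    }

    public void SharePhoto(string photoPath)
    {
        switch (_techLevel)
        {
            case GrandmaTechLevel.PonyExpress:
                QueueForPrintAndMail(photoPath);     // old school: print it and mail it
                break;
            case GrandmaTechLevel.Email:
                SendEmailWithAttachment(photoPath);  // a simple attachment
                break;
            case GrandmaTechLevel.Facebook:
                PostLinkToFacebook(photoPath);       // a link in a status update
                break;
            case GrandmaTechLevel.GadgetGranny:
                PushToSharedPhotoStream(photoPath);  // shows up on her tablet
                break;
        }
    }

    // Stubs for the actual delivery mechanisms.
    private void QueueForPrintAndMail(string path) { /* ... */ }
    private void SendEmailWithAttachment(string path) { /* ... */ }
    private void PostLinkToFacebook(string path) { /* ... */ }
    private void PushToSharedPhotoStream(string path) { /* ... */ }
}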

In all honesty, I probably won't get around to programming this.  So, feel free to steal my idea (but at least give me a mention in the credits).

I see Windows 8 Sharing as a huge opportunity to come up with some really creative and useful ways of using the data that we've already got.  It's time to start busting out some code.

Happy Coding!

Sunday, August 19, 2012

Dependency Injection: How Do You Find the Balance?

As a developer, I am constantly trying to find the right balance -- to figure out the right level of abstraction for the current project.  If we add too much (unneeded) abstraction, we end up with code that may be more difficult to debug and maintain.  If we add too little (needed) abstraction, we end up with code that is difficult to extend and maintain.  Somewhere in-between, we have a good balance that leads to the optimum level of maintainability for the current environment.

Technique 1: Add It When You Need It
I'm a big fan of not adding abstraction until you need it.  This is a technique that Robert C. Martin recommends in Agile Principles, Patterns, and Practices in C# (Amazon Link).  It's also my default approach to abstractions in code -- primarily because I've been burned by some really badly implemented abstractions in the past.  After dealing with abstractions that added no benefit to the application (and only complicated maintenance), my knee-jerk reaction is to hold off on abstractions until they're really necessary.

This is not to say that abstractions are bad.  We just need to make sure that they are relevant to the code that we are building.  The bad implementations that I've run across have generally been the result of what I call "white paper architecture".  This happens when the application designer reads a white paper on how to architect an application and decides to implement it without considering the specific implications in the environment.  I'll give 2 examples.

Example 1: I ended up as primary support on an application that made use of base classes for the forms.  In itself, this isn't a bad thing.  The problem was in the implementation.  If you did not use the base class, then the form would not work at all.  This led to much gnashing of teeth.  In a more useful scenario, a base class would add specific functionality.  But if the base class was not used, the form would still work (just without the extra features).

Example 2: I helped someone out on another project (fortunately, I didn't end up supporting this application myself).  This application was abstracted out too far.  In order to add a new field (meaning, from the data store to the screen), it was necessary to modify 17 files (from data storage, through the ORM, objects on the server side, DTOs on the server side, through the service, DTOs on the client side, objects on the client side, to the presentation layer).  And unfortunately, if you missed a file, the mistake did not show up as a compile-time error; it only surfaced as a run-time error.

After coming across several applications like these, I've adopted the YAGNI principle (You Aren't Gonna Need It).  If you do need it later, then you can add it.

Technique 2: Know Your Environment
Unfortunately, Technique 1 isn't always feasible.  It is often time-consuming to go back into an application and add the abstractions as you need them.  When we're asked as developers to maintain a specific delivery velocity, we're rarely given time to go back and refactor things later.  So, a more practical option comes with experience: know the environment that you're working in.

As an example, for many years I worked in an environment that used Microsoft SQL Server.  That was our database platform, and every application that we built used SQL Server.  Because of this, I didn't spend time doing a full abstraction of the data layer.  This doesn't mean that I had database calls sprinkled through the code.  What it means is that I had a logical separation of the database calls (meaning that DB calls were only made in specific parts of the library) but didn't have a physical separation (for example, with a repository interface that stood in front of the database).
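To show what I mean, here is a simplified sketch of that logical separation (the class and query are made up for illustration): all of the ADO.NET calls live in one data-access class, but there is no interface standing in front of it.

// Logical separation: every database call lives in one data-access class.
using System.Collections.Generic;
using System.Data.SqlClient;

public class CustomerDataAccess
{
    private readonly string _connectionString;

    public CustomerDataAccess(string connectionString)
    {
        _connectionString = connectionString;
    }

    public List<string> GetCustomerNames()
    {
        var names = new List<string>();
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand("SELECT Name FROM Customer", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    names.Add(reader.GetString(0));
                }
            }
        }
        return names;
    }
}

The rest of the application talks to this class directly; if the database ever did change, there would be one place to go.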

Was this bad design?  I know several folks who would immediately say "Yes, that's terrible design."  But I would argue that it was good design for the application environment.

Out of the 20 or so applications that I built while at that company, a grand total of one application needed to support a different database (an application that pulled data from a vendor product that used an Oracle backend).  For that one application, I added a database abstraction layer (this was actually a conversion -- the vendor product was originally using SQL Server and was changed to Oracle during an upgrade).  So what makes more sense?  To add an unused abstraction to 20 applications?  Or to add the necessary abstraction to the one application that actually needed it?

Now, if I was building an application for a different environment that needed to support different data stores (such as software that would be delivered to different customer sites), I would design things much differently.  You can see a simple example of how I would design this here: IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces.
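As a rough idea (not the example from that article), the physical abstraction would look something like this, with all of the type names invented for illustration -- the rest of the application depends only on the interface, so the implementations can be swapped without touching the callers:

using System.Collections.Generic;

public interface ICustomerRepository
{
    List<string> GetCustomerNames();
}

public class SqlServerCustomerRepository : ICustomerRepository
{
    public List<string> GetCustomerNames()
    {
        // SQL Server (System.Data.SqlClient) calls go here
        return new List<string>();
    }
}

public class OracleCustomerRepository : ICustomerRepository
{
    public List<string> GetCustomerNames()
    {
        // Oracle provider calls go here
        return new List<string>();
    }
}

public class CustomerViewModel
{
    private readonly ICustomerRepository _repository;

    // The caller (or a DI container) decides which implementation to pass in.
    public CustomerViewModel(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public List<string> LoadCustomers()
    {
        return _repository.GetCustomerNames();
    }
}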

Unfortunately, this type of decision can only be made if you know your environment well.  It usually takes years of experience in that environment to know which things are likely to change and which things are likely to stay the same.  When you walk into a new environment, it can be very difficult to figure out how to make these distinctions.

Dependency Injection: Getting Things "Just Right"
My current project is a WPF project using Prism.  Prism is a collection of libraries and guidance around building XAML applications, and a big part of that guidance is around Dependency Injection (DI).  I've been doing quite a bit of programming (and thinking) around dependency injection over the last couple months, and I'm still trying to find the balance -- the Goldilocks "Just Right" between "Too Loosely Coupled" and "Too Tightly Coupled".

Did I just say "Too Loosely Coupled"?  Is that even possible? We're taught that loose coupling is a good thing -- something we should always be striving for.  And I would venture to guess that there are many developers out there who would say that there's no such thing as "too loosely coupled."

But the reason that loose coupling is promoted so highly is that our problem is usually the opposite -- the default for application developers is tight coupling.  Loose coupling is encouraged because it's not our instinctive reaction.

I'm currently reading Mark Seemann's Dependency Injection in .NET (Amazon Link).  This is an excellent book (disclaimer: I've only read half of it so far, but I don't expect that my evaluation will change).  Seemann describes many of the patterns and anti-patterns in Dependency Injection along with the benefits and costs (which helps us decide when/where to use specific patterns).

An important note: Seemann specifically says that the sample application that he shows will be more complicated than most DI samples he's seen.  He does this because DI doesn't make sense in a "simple" application; the value really shines in complex applications that have many functions that should be broken out.  With the functions broken out into separate classes, it makes sense to make sure that the classes are loosely coupled so that we can add/remove/change/decorate implementations without needing to modify all of our code.  This means that not all applications benefit from DI; the benefits come once we hit a certain level of complexity.
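Here is a small, self-contained sketch of the "decorate" part (all of the names are made up): because the consumer only depends on an interface, we can wrap extra behavior around an implementation without touching the consumer or the original class.

using System;

public interface IGreetingService
{
    string GetGreeting(string name);
}

public class GreetingService : IGreetingService
{
    public string GetGreeting(string name)
    {
        return "Hello, " + name + "!";
    }
}

// Decorator: adds logging around any IGreetingService implementation.
public class LoggingGreetingService : IGreetingService
{
    private readonly IGreetingService _inner;

    public LoggingGreetingService(IGreetingService inner)
    {
        _inner = inner;
    }

    public string GetGreeting(string name)
    {
        Console.WriteLine("GetGreeting called for {0}", name);
        return _inner.GetGreeting(name);
    }
}

// Composition: the consumer never knows (or cares) that it got the decorated version.
// IGreetingService service = new LoggingGreetingService(new GreetingService());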

So, now we have to decide how much dependency injection is "Just Right".  As an example, Seemann describes the Service Locator as an anti-pattern.  But Prism has a built-in Service Locator.  So, should we use the Prism Service Locator or not?  And that's where we come back to the balance of "it depends."

In the application I'm working on, we are using the Service Locator pattern, and it seems to be working well for those parts of the library.  I have run into a few interesting issues (specifically when writing unit tests for these classes), and it turns out that Seemann points out exactly the issues that I've been thinking about.

I don't really have time to go into all of the details here, but as an example: when using the Service Locator, it is difficult to see the specific dependencies of a class.  As we modify our modules, the unit tests sometimes break because a new dependency was added (and is resolved by the Service Locator at run time); since that doesn't stop the code from compiling, the problem only shows up when the tests run.  We then need to update our unit tests by adding/mocking the new dependency.
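Here is a simplified sketch of that issue (the interfaces and class names are mine; the locator call is the Common Service Locator that Prism provides):

using Microsoft.Practices.ServiceLocation;

public interface IOrderRepository { void Save(string order); }
public interface IAuditLogger { void Log(string message); }

// Service Locator style: dependencies are resolved inside the methods.
public class OrderViewModelWithLocator
{
    public void SaveOrder(string order)
    {
        var repository = ServiceLocator.Current.GetInstance<IOrderRepository>();
        repository.Save(order);

        // Newly added dependency: existing unit tests still compile,
        // but fail at run time if IAuditLogger isn't registered/mocked.
        var logger = ServiceLocator.Current.GetInstance<IAuditLogger>();
        logger.Log("Order saved");
    }
}

// Constructor injection style: the dependencies are right in the signature,
// so adding one is a compile-time break in the tests instead of a run-time surprise.
public class OrderViewModel
{
    private readonly IOrderRepository _repository;
    private readonly IAuditLogger _logger;

    public OrderViewModel(IOrderRepository repository, IAuditLogger logger)
    {
        _repository = repository;
        _logger = logger;
    }

    public void SaveOrder(string order)
    {
        _repository.Save(order);
        _logger.Log("Order saved");
    }
}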

[Editor's Note: I've published an article talking more about the pros and cons of the Service Locator pattern: Dependency Injection: The Service Locator Pattern.]

As with everything, there are pros and cons.  For the time being, I'm content with using the Service Locator for our application.  There are some "gotchas" that I need to look out for (but that's true with whatever patterns I'm using).  Seemann also notes that he was once a proponent of Service Locator and moved away from it after he discovered better approaches that would eliminate the disadvantages that he was running across.  It may be that I come to that same conclusion after working with Service Locator for a while.  Time will tell.

How Do You Do Dependency Injection?
Now it's time to start a conversation.  How do you use Dependency Injection?  What has worked well for you in different types of applications and environments?  Do you have any favorite DI references / articles that have pointed you in a direction that works well for you?

As an aside, Mark Seemann's book has tons of reference articles -- most pages have some sort of footnote referring to a book or article on the topic.  It is very evident that Seemann has researched the topic thoroughly.  I'm going to try to read through as many of these references as I can find time for.

Drop your experiences in the comments, and we can all learn from each other.

Happy Coding!

Wednesday, August 1, 2012

Los Angeles .NET Developers Group

If you're in the Los Angeles area, I'll be speaking at the Los Angeles .NET Developers Group on Monday, August 6th.  More info is available here: http://www.ladotnet.org/events/74859482/.

The topic is "Learn the Lingo: Design Patterns".  We'll take a look at what design patterns are, why they are important to us, and how we already use them in our everyday code without even realizing it.  In addition to the benefits, we'll take a look at the costs to help us determine when we should be using various patterns.  Once we are familiar with Design Patterns, we can start to use them deliberately and hopefully get to the point where we are using them automatically (where appropriate, of course).  Hope to see you there.

Happy Coding!