Monday, November 10, 2014

Optimizing Conway's Game of Life with Parallel.For

I've been continuing my coding practice with Conway's Game of Life. Last time, we added a simple UI (just a console application). This time, we'll see whether we can get our calculations to run faster.

As a reminder, you can get to the other articles and download the code from here: http://www.jeremybytes.com/Downloads.aspx#ConwayTDD.

Ready to be Parallelized
When I wrote the game rule method (in our first article), I kept in mind that I may want to parallelize the code at some point. Here's the method we created:


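(The original post shows this method as a screenshot. Here's a rough sketch of what it looks like, reconstructed from the description in the first article -- the exact conditionals may be arranged differently in the real code.)

    public static CellState GetNewState(CellState currentState, int liveNeighbors)
    {
        switch (currentState)
        {
            case CellState.Alive:
                // a live cell survives with 2 or 3 live neighbors
                if (liveNeighbors < 2 || liveNeighbors > 3)
                    return CellState.Dead;
                break;
            case CellState.Dead:
                // a dead cell comes to life with exactly 3 live neighbors
                if (liveNeighbors == 3)
                    return CellState.Alive;
                break;
        }
        return currentState;
    }
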
So what makes this conducive to running in parallel? First, I created this as a static method. This means that the method does not belong to a particular instance of a class; it stands on its own.

Next, the method itself does not modify any state or instance data. The method takes in 2 values (the current state and the number of live neighbors) and returns a new value (the new state of the cell). Since "CellState" is an enum (a value type), this means that the return value is a separate value even if we return the parameter unmodified.

Because this method is an atomic operation that does not modify shared state, we can call this method at the same time on different threads without worry.

The Serial Call
When we process our grid, our current method call processes all of the operations serially -- that is, only one call runs at a time. Here's that code:


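(Again, a sketch rather than the actual screenshot. The "gridHeight"/"gridWidth" fields and the "LifeRules" class name are placeholders for whatever the real code uses.)

    public void UpdateState()
    {
        for (int i = 0; i < gridHeight; i++)            // rows
        {
            for (int j = 0; j < gridWidth; j++)         // columns
            {
                int liveNeighbors = GetLiveNeighbors(i, j);
                nextState[i, j] = LifeRules.GetNewState(CurrentState[i, j], liveNeighbors);
            }
        }

        CurrentState = nextState;
        nextState = new CellState[gridHeight, gridWidth];
    }
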
I mentioned that I wasn't a big fan of having the nested "for" loops here; and that remains true. But we need the indexers into the multi-dimensional array, so we don't have much of a choice with the way things are set up right now.

The outer "for" loop deals with the rows of our array, and the inner "for" loop deals with the columns. Inside the loop, we get the number of live neighbors for a cell and then get the new state for that cell. Once we have that, we assign it to the same X-Y position in the new grid.

Adding a Parallel.For
My first idea was to take the "for" loops and simply change them to "Parallel.For" loops. This didn't feel right, but I wasn't sure where else to start.

The Parallel.For method takes 3 parameters: the inclusive start index, the exclusive end index, and a delegate that represents the action to perform for each index. Since I love lambdas, I used a lambda expression for the delegate.

Here's the code:


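(A sketch using the same placeholder names as above; it needs "using System.Threading.Tasks;".)

    public void UpdateState()
    {
        Parallel.For(0, gridHeight, i =>
        {
            Parallel.For(0, gridWidth, j =>
            {
                int liveNeighbors = GetLiveNeighbors(i, j);
                nextState[i, j] = LifeRules.GetNewState(CurrentState[i, j], liveNeighbors);
            });
        });

        CurrentState = nextState;
        nextState = new CellState[gridHeight, gridWidth];
    }
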
This is a bit confusing since we have nested Parallel.For loops. The outer loop goes from "0" to the height of the grid, and we use the variable "i" for the indexer (just like our original outer "for" loop).

The body of the lambda expression is the nested Parallel.For loop. This loop goes from "0" to the width of the grid, and we use the variable "j" for the indexer (just like our original inner "for" loop).

Then inside the body of that lambda expression, we call the code to get the number of live neighbors, calculate the new cell state, and update our new grid.

This Smells Bad
I knew when I was writing this that it was not the right way to go. But let's run some metrics to verify that.

There's no way that we would be able to see the difference by simply running our console UI application. To see noticeable differences, we'll need to call the calculations lots of times. So, I created a new project (called "Conway.PerformanceTest") with a console application to do just that.


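(The actual test program is shown as a screenshot; here's an approximation. The grid dimensions, the Randomize call, and the "UpdateStateParallel" method name are assumptions -- the real code may select the loop implementation differently.)

    using System;
    using System.Diagnostics;
    using Conway.Library;

    class Program
    {
        static void Main(string[] args)
        {
            int iterations = 10000;

            // first block: the original nested "for" loop version
            var grid = new LifeGrid(25, 70);    // same dimensions as the console UI
            grid.Randomize();
            var stopwatch = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                grid.UpdateState();
            }
            stopwatch.Stop();
            Console.WriteLine("Serial:   {0} ms", stopwatch.ElapsedMilliseconds);

            // last block: the nested Parallel.For version (hypothetical method name)
            grid = new LifeGrid(25, 70);
            grid.Randomize();
            stopwatch.Restart();
            for (int i = 0; i < iterations; i++)
            {
                grid.UpdateStateParallel();
            }
            stopwatch.Stop();
            Console.WriteLine("Parallel: {0} ms", stopwatch.ElapsedMilliseconds);

            Console.ReadLine();
        }
    }
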
This application creates a new LifeGrid using the same dimensions as our UI application. The difference is that instead of displaying results, we call the "UpdateState" method many times in a row (based on the "iterations" variable). And we've added a stopwatch so that we can see how long each of these takes.

The first block has our original nested for loop; the last block has our new nested Parallel.For loop. Here are the results of the run:


(Note: even though the source code says "10,000" iterations, this output shows a run of "1,000" iterations.)

What we can see is that I really screwed up the performance. Instead of taking 285 milliseconds to process 1,000 iterations, the "optimized" version took 485 milliseconds -- almost twice as long.

And I'm not very surprised by this result. The nested parallel loops end up creating a huge number of tiny work items for the thread pool. I'm sure that the application spends more time on scheduling and thread management than on the actual calculation.

So, we've over-parallelized this. Let's try again.

A Bit Less Parallel
For the next try, I decided that I would only use Parallel.For for *one* of the loops -- not for both. Here's the code for that:


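(A sketch with the same placeholder names as before.)

    public void UpdateState()
    {
        Parallel.For(0, gridHeight, i =>                // rows in parallel
        {
            for (int j = 0; j < gridWidth; j++)         // columns sequentially
            {
                int liveNeighbors = GetLiveNeighbors(i, j);
                nextState[i, j] = LifeRules.GetNewState(CurrentState[i, j], liveNeighbors);
            }
        });

        CurrentState = nextState;
        nextState = new CellState[gridHeight, gridWidth];
    }
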
This shows that we have a "Parallel.For" for our rows (the "i" dimension), but we have a normal "for" loop for the columns (the "j" dimension). The result is that we will process multiple rows at one time even though we're processing the columns sequentially within each row.

The results of this are much better:


Now we can see that our original nested "for" loop took 287 milliseconds, and our updated single-level "Parallel.For" loop took 192 milliseconds. So we actually cut off some of the processing time here.

More Iterations
But what happens if we add more iterations? Will we see the same performance difference? Let's give it a try with 10,000 iterations.


The results show us that things pretty much scale linearly. When we do 10 times more iterations, our durations are about 10 times longer.

Considerations
What we've seen is that we don't just want to tack "parallel" on to any loop that we have. We have to balance the time it takes to do the calculations with the time required to manage the threads. In this case, since our calculation was quick and easy, we ended up spending more time dealing with threading than we did with our calculations. If our calculation had been more complex (such as finding prime numbers), then it might have made sense to keep the nested parallel loops. But not this code.

Additionally, this code will perform differently based on the "shape" of our grid. We set this up for 25 x 70. This means we have 25 rows and 70 columns. If we were to reverse this, then it might be better to set up the "parallel" on the columns rather than the rows.

Finally, we need to look at the other things we are doing inside our parallel loops -- we need to verify that we aren't updating shared state from multiple threads or creating race conditions.

Inside our loop, we call "GetLiveNeighbors". This deals with instance data: we're looking at the grid to find the neighbors of a particular cell. Then we're counting how many of these neighbors are alive. But we are only reading data here (no updates), so we don't need to worry about race conditions with the data.

Next, we're calling "GetNewState". As we've already seen, this does not deal with any instance data.

Finally, we are setting the cell state in the new grid -- this is where we set "nextState[i, j]". This is potentially dangerous. We are dealing with a shared object here (the "nextState" grid). But since we have the indexers from our loop, we know that we will only update each cell 1 time -- so we don't need to worry about 2 threads trying to update the same array value.

Wrap Up
We need to be careful when we're trying to parallelize our applications. If we're not careful, we may end up making performance worse. One way that we can avoid these problems is to look at the different patterns for parallelizing code. A good resource for this is Parallel Programming with .NET (Jeremy's review). This covers a number of different problem spaces and the techniques that we can use.

This is all just programming by experimentation. And I'm doing this mainly for coding practice -- part of this is exploration, part of this is to try new things, part of this is to find out what doesn't work. This process helps us get a better understanding of how different techniques work so that we can be more effective in our day-to-day programming.

Happy Coding!

Microsoft MVP Summit -- A Great Opportunity

It's no secret that I'm a Microsoft MVP. One of the great benefits of this is the chance to attend the Microsoft MVP Summit in Redmond, WA -- and that's exactly what I did last week.

For non-MVPs, you may have seen tweets and photos of MVPs having a lot of fun, but it doesn't seem like a very substantive event. This is a false impression. The reason is that much of what goes on at the Summit falls under NDA (which is the "non-disclosure agreement" that we all sign), so we're not allowed to talk about it.

What I can say is that there is a lot of cool stuff coming. Fortunately, we'll get to hear about some big things this week (Nov 12 & 13): http://www.visualstudio.com/en-us/connect-event-vs.aspx. This is definitely something to look forward to (and I can't wait to talk more about my favorite stuff after it's announced).

[Update 11/14/2014: If you missed the live event, you can still experience it. The link will take you to recordings from both days. So you can still get in on the announcements and demos.]

What I Like About the Summit
There's a lot of stuff going on at the MVP Summit. MVPs from all over the world converge in one spot for a week. Here are the things that I find most useful about the event.

Talking to Other MVPs
I spend as much time as possible talking to other MVPs. It's great to have breakfast with new people and talk to people from other disciplines. I'm a Visual C# MVP, but I had a chance to talk to folks who specialize in Excel, Access, and Office 365. Hearing about how some people are using Office 365 and Azure to get businesses running in the cloud is really cool.

Also, the MVP community is like a family. There are a couple of folks who I haven't seen since the last summit (mainly people from overseas). And we continued conversations like we were old friends. In addition, I met a lot of new people. And you never know who you might end up hanging out with. I started up a conversation with someone on Monday, and ended up talking again at lunch, dinner, and game nights throughout the week.

(I won't list all of the people I talked to because I'm afraid of leaving someone out. I had good conversations with probably about 30 people and had shorter encounters with about a dozen more.)

Talking to Microsoft Folks
Another great part of the event is talking to people from the various product groups at Microsoft. One of my MVP friends set up time to talk to people from product groups that he doesn't normally interact with. He did this because he had some concerns about particular behavior in a product. Since everyone is all in one spot, people from the product teams are open and willing to meet with you.

And this is very interactive. The people from the product groups really want to hear what the MVPs have to say. They see us as a primary conduit to the community at large, so they value our opinions -- whether it is talking about new language syntax, how a new feature is exposed in Visual Studio, or concerns about a proposed technology.

I won't give any specific examples because I'm not exactly sure what's under NDA and what's not at this point, and I tend to err on the side of caution.

Hearing About New Stuff
And speaking of NDA, we also get to see a lot of preview stuff at the Summit. These are the things that you might hear MVPs get excited about but can't say anything about. (You might have seen a lot of people recommending the Microsoft Connect event on Nov 12 & 13 -- http://www.visualstudio.com/en-us/connect-event-vs.aspx -- There's a reason for this.)

There's big stuff coming, and Microsoft gives us a "head start" so that we can be ready to start blogging, shipping code samples, and talking about the new stuff as soon as it is publicly announced.

Wrap Up
I'm honored to be a Microsoft MVP. It's awesome that Microsoft has this program to recognize the people who are active in the community (for me, it's the developer community, but there are communities around other products as well).

The MVP Summit has given me access to meet MVPs from around the world, make a lot of new friends, talk to the folks at Microsoft who are making decisions on the next versions of products, and learn about what's coming soon.

And even though you only see the "family reunion" parts on the public side, there's a lot more going on that we're not allowed to talk about. For MVPs, it's an opportunity not to be missed.

Happy Coding!

Sunday, November 2, 2014

Adding a Simple UI for Conway's Game of Life

In a previous article, we created a rules library for Conway's Game of Life (code download). This was just the rules for whether a cell lives or dies based on its neighbors. But we did not implement any type of UI. That's what we'll do here.

The Rules
As a reminder, our rules engine consisted of a single method:


This takes the current state of a cell and the number of live neighbors that it has. But this doesn't tell us anything about the grid of cells or the state of that grid. For that, we'll need to create a new object.

The Grid
So, I added a new class to the Conway.Library project called "LifeGrid". Here are the fields and constructor:


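(The screenshot isn't reproduced here; this sketch is reconstructed from the description that follows, so the member names may not match the actual code exactly.)

    public class LifeGrid
    {
        public CellState[,] CurrentState { get; private set; }
        private CellState[,] nextState;

        public LifeGrid()
        {
            // hard-coded 5 x 5 grid to start with
            CurrentState = new CellState[5, 5];
            nextState = new CellState[5, 5];

            // initialize the current state to all Dead
            for (int i = 0; i < 5; i++)
                for (int j = 0; j < 5; j++)
                    CurrentState[i, j] = CellState.Dead;
        }
    }
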
I made some tough decisions here. I don't like the idea of dealing with a multi-dimensional array (if you know anything about me, you know that I love generic lists). To make things easier to start out with, I hard-coded the dimensions of the grid to 5 x 5.

The reason that I have a "current state" and "next state" is to make it easier to run the rules. When we get the rules for a single cell, it depends on the current state of all of its neighbors (8 neighbors in our case). That means we can't change the state of any of our cells until after we've processed the rules for all of the cells. We could do this by taking a snapshot, but I decided to create "current" and "next". You can think of this as similar to the way that we double-buffer graphic animations to keep the drawing smooth.

After instantiating both of our states, I initialize the current state to all "Dead". (An easier way might have been to flip the enum so that "Dead" comes first (i.e. position 0). That way it would automatically be in that state when it's instantiated. I expect to do a lot of refactoring of this code, so I might end up doing that later.)

Running the Rules
With the grid in place, we need to run the rules on all of the cells. Here's our method for UpdateState:


For this, we loop through both dimensions of our array. This is one of the reasons I don't like dealing with them. The nested "for" loops rub me the wrong way. Inside the loop, we get the number of live neighbors for the current cell (we'll look at this method in just a bit). Then we call our GetNewState method that we created before and assign the result to the same position in the "next state" grid.

After completing the loops, we assign the "next state" to the CurrentState property. This replaces the old values with the new values. Then we create a new instance for the nextState variable.

Getting the number of live neighbors for each cell is an interesting problem.


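(Here's a sketch of the method based on the description below; the actual conditionals in the screenshot may be arranged a bit differently.)

    public int GetLiveNeighbors(int x, int y)
    {
        int liveNeighbors = 0;

        for (int i = -1; i <= 1; i++)
        {
            for (int j = -1; j <= 1; j++)
            {
                // skip the current cell itself
                if (i == 0 && j == 0)
                    continue;

                // make sure the neighbor is still on the grid
                if (x + i >= 0 && x + i < 5 &&
                    y + j >= 0 && y + j < 5)
                {
                    if (CurrentState[x + i, y + j] == CellState.Alive)
                        liveNeighbors++;
                }
            }
        }

        return liveNeighbors;
    }
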
I'll admit that I haven't looked up other implementations of Conway's Game of Life for this coding session. That's because I wanted to try to figure things out and then optimize as needed.

There are a lot of nasty conditionals here. The method takes the X-Y coordinates of the current cell. Then it has nested "for" loops to go through the cells to the left, right, top, bottom, and diagonals. It does this by subtracting 1 and adding 1 to the current position.

But we have a problem. These "for" loops will hit the current cell. We want to skip this, which is why we check for "i == 0 && j == 0".

The next trick is that we need to make sure that the neighbor cell is still on the grid. So we check to make sure that the X-Y values are at least 0 and less than 5 (our current grid size).

Once we've validated that we're looking at a cell that is a neighbor of the current cell and not off the grid, we check to see if it is alive. If so, we increment our liveNeighbors count. Then we just return the total when we're done.

This completes the implementation of our grid. We'll make a few adjustments to it so that we can have an arbitrary size, but this is enough for us to hook this up to a UI.

Creating a Console UI
We're going to start out pretty simple: we're just going to hook up a console application to our grid. This will output the cell state as dots or Os. To create this, I just added a new project to our solution called "Conway.ConsoleUI". As you might imagine, this is a console application. Then we just need to add a reference to our Conway.Library project.

Here's our console application:

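(A sketch of the console application; the specific cells that get seeded are an assumption -- any three cells in a line will produce the "Blinker" shown below.)

    static void Main(string[] args)
    {
        var grid = new LifeGrid();

        // seed a few live cells (a vertical line in the middle of the grid)
        grid.CurrentState[1, 2] = CellState.Alive;
        grid.CurrentState[2, 2] = CellState.Alive;
        grid.CurrentState[3, 2] = CellState.Alive;

        ShowGrid(grid);

        while (true)
        {
            Console.ReadLine();     // wait for the Enter key
            grid.UpdateState();
            ShowGrid(grid);
        }
    }
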
For this, we create an instance of our LifeGrid class. Then we set some cells to a live state (since they default to all dead).

After that, we show the grid (and we'll see this method in just a bit). Then we have a while loop hooked up so that we can update the grid every time we press the Enter key.

Here's how we show the grid:


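(A sketch of the ShowGrid method described below; a foreach over a two-dimensional array walks the cells in row order, which is what makes this work.)

    private static void ShowGrid(LifeGrid grid)
    {
        int x = 0;   // current column
        foreach (var cell in grid.CurrentState)
        {
            Console.Write(cell == CellState.Alive ? 'O' : '.');
            x++;
            if (x == 5)              // end of the row (hard-coded 5 x 5 grid)
            {
                Console.WriteLine();
                x = 0;
            }
        }
        Console.WriteLine();
    }
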
I was able to get away with not using nested "for" loops. This is because we don't care about the position of the cells; we just need to print them out. We're going to output a character for each cell, and then we just need to put in a line break when we reach the end of the row.

I did brute force this. The "x" variable keeps track of our current column. When we reach the end, I reset "x" and then output a line break.

Visual Verification
Up to this point, we really don't know if anything will work. Let's run the application to see what we get:


And when we press "Enter":


And if we keep pressing "Enter", we'll see these states alternate. That's good. This is a "Blinker" (Conway's Patterns).

Arbitrary Grid Size
Our next step will be to allow for an arbitrary grid size. This isn't all that difficult. For our LifeGrid class, we just need to add some fields to hold the grid dimensions. Then we update anywhere we have a "5" to use the appropriate field.

Here are the basics:


The constructor now takes parameters for the height and width. These values are used to initialize the grid.

And these values are used in the UpdateState method as well:


So, not a whole lot of work.

The console application takes a little more work. First, we update the Main method to pass in the grid size (we'll stick with 5x5 for now):


And then we need to update our ShowGrid method:


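(A sketch of the updated method; note that it checks dimension "0" here, which comes back to bite us a little further down.)

    private static void ShowGrid(LifeGrid grid)
    {
        int rowLength = grid.CurrentState.GetUpperBound(0) + 1;

        int x = 0;
        foreach (var cell in grid.CurrentState)
        {
            Console.Write(cell == CellState.Alive ? 'O' : '.');
            x++;
            if (x == rowLength)
            {
                Console.WriteLine();
                x = 0;
            }
        }
        Console.WriteLine();
    }
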
This is a little more interesting because we need to know the length of the row (to put in our line break). Instead of using "5", we check the array. We get the upper bound of the array and add 1. This is because our 5x5 grid would have an upper bound of "4", and we want to have the same row length value that we had before.

When we run the application, we see the same output:



So let's make a bigger grid.

A Bigger Grid
Since we have an arbitrary grid size, we can make it bigger by just changing our Main method:


And here's the output:


It looks like we have a problem. Our grid is going the wrong way. It turns out that there's a problem with our "ShowGrid" method. Here's the fix:


Before, we were checking the upper bound of dimension "0" (which happens to be our height). Instead, we should be checking the upper bound of dimension "1" (the width). We didn't run into this problem earlier because we were using a square grid -- both dimensions were the same.

With this update, we get the output we expect:



Adding Randomization
Making a bigger grid isn't really all that interesting. If we have to manually set the initial state, it could get pretty tedious. What we'll do instead is create a randomization method that we can run on our grid.

To make this easy to call, I added this to the LifeGrid class:


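(A sketch of the Randomize method; the alive/dead ratio is a guess.)

    public void Randomize()
    {
        var random = new Random();

        for (int i = 0; i <= CurrentState.GetUpperBound(0); i++)
        {
            for (int j = 0; j <= CurrentState.GetUpperBound(1); j++)
            {
                // roughly half alive, half dead
                CurrentState[i, j] = random.Next(2) == 0
                    ? CellState.Alive
                    : CellState.Dead;
            }
        }
    }
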
This loops through our grid and sets Alive/Dead values based on a random number.

Here's our updated Main method:


We just call "Randomize" after we create our grid. And here's the output:


And here's Generation 2:


And Generation 5:


And Generation 15:


Eventually this settles down into one of the static states: items that don't move or patterns that repeat.

Output Optimization
Let's try this with a larger value. Let's try 25 x 65. Here's the output after several generations:


The output is interesting, but there's a problem. This is slow. For small grids (like our 5 x 5), the display was instant. When we press the Enter key, there's no delay when the new grid shows up.

But on the larger grid, there's a definite re-draw that is apparent. And this ruins the illusion of watching Conway's Game of Life. The fun part is to watch generations go by quickly and seeing the patterns evolve. When the screen redraws from top to bottom, there's a break in that flow.

But we can fix this by changing the way that we're displaying our grid. As a reminder, here's our current ShowGrid method:


This calls "Console.Write" a lot -- once for each character. For a 5 x 5, that's only 30 times (including the new lines). But for a 25 x 65 grid, that's 1,650 calls.

So we'll minimize the number of times that we call "Console.Write". In fact, we'll only call it once:


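(A sketch of the StringBuilder version; it requires "using System.Text;".)

    private static void ShowGrid(LifeGrid grid)
    {
        var builder = new StringBuilder();
        int rowLength = grid.CurrentState.GetUpperBound(1) + 1;

        int x = 0;
        foreach (var cell in grid.CurrentState)
        {
            builder.Append(cell == CellState.Alive ? 'O' : '.');
            x++;
            if (x == rowLength)
            {
                builder.AppendLine();
                x = 0;
            }
        }

        Console.Write(builder.ToString());   // one single call to the console
    }
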
All we've done is add a StringBuilder. Instead of directly writing to the console, we add the characters to the StringBuilder. Then at the very end, we call Console.Write one time.

This makes our display much faster. When pressing the Enter key quickly, there's no perceptible redraw (at least on my i7 laptop). If we hold the Enter key down, then there is some flicker, but this is definitely acceptable for what we have.

More to Add
Now we have a working application that displays Conway's Game of Life in action. But we're not done yet.

Back when I created the original rule method (GetNewState), I was thinking that this problem (recalculating all of the states) would be a good candidate for parallelization. And that's why I wrote the method the way that I did:


I created this as a static method: that means that we're not allowed to interact with any instance objects that are part of the class. The result is that we don't modify any shared state in this method (good). Next, since CellState (the type of our return value) is a value type, I don't have to worry about accidentally modifying the state of the parameters. We will always get a new copy of the value (even if we don't change it in the method).

This means that if we run this method on different threads with different parameters, we don't have to worry about them interacting with each other.

But to take advantage of this, we'll need to make some changes to how we are updating the state in our LifeGrid class. We'll explore this in a future article.

In addition, I'd like to support some different UIs (like a WPF or Windows Phone application) that we could make a bit prettier.

Wrap Up
In implementing the UI for Conway's Game of Life, we had to take a look at how we are storing the grid data, how we update it, and how we display it on the screen. As mentioned, I'm not a big fan of having the nested "for" loops in so many methods, but we may be able to take care of that as we look at things further.

It's always good to practice our craft. Sometimes that means taking a well-known problem and implementing our own solutions (even when there are lots of implementations already). This lets us focus on things that we care about (such as parallelization) and also work our way out of problems that we might create for ourselves.

Practice is how we get better.

Happy Coding!

Wednesday, October 29, 2014

Pluralsight Learning Path: Getting to Great with C#

Pluralsight has a lot of courses (like hundreds and hundreds). So it can be daunting to figure out where you need to start. To help get people pointed in a direction that works for them, Pluralsight has a collection of Learning Paths that highlight a set of courses to help you meet a goal -- whether you're working toward certification or want to learn to build a particular type of application.

A new learning path was published today: Getting to Great with C#.

Getting to Great with C#
  • Object-Oriented Programming Fundamentals in C#
  • Defensive Coding in C#
  • Clean Code: Writing Code for Humans
  • C# Interfaces*
  • Abstract Art: Getting Things "Just Right"*
  • Dependency Injection On-Ramp*
  • SOLID Principles of Object Oriented Design
  • Design Patterns On-Ramp*
  • Design Patterns Library
Be sure to follow the link (here it is again) to get the details on the goals of the learning path and descriptions of all the courses.

The really cool part: I authored 4 of these courses (marked with *). I've had courses included in other learning paths, but I'm excited and honored to have so many of my courses included in a single collection (plus, I know the other authors and courses, and I'm in very good company).

Happy Coding!

Tuesday, October 28, 2014

Rewriting a Legacy App - Part 4: Completing the MVP with Scheduling

It's time to wrap things up for the re-write of the legacy application (at least the immediate features). In Part 1, we looked at the existing home automation application and came up with a minimum viable product (MVP). As a reminder, here are the features that we need:

Needed for MVP:
  1. Send commands through the serial dongle
    This is the purpose of the system.
  2. Send commands for 8 devices on House Code "A"
    These are the only devices that are actively used.
  3. Fire on/off/dim events on a schedule
    To maintain the air conditioner functionality (and current lighting schedule).
In Part 2, we built a test library to send commands through the serial port to the home automation hardware (thus fulfilling requirement #1). In Part 3, we looked at generalizing the functionality so that we could send commands to various devices (thus fulfilling requirement #2).

All that's left is to create a scheduler. Sounds pretty simple, but I also needed to create some wrappers and test objects along the way.

[Update: Code download and links to the entire series of articles: Rewriting a Legacy App]

Concealing Details
As mentioned in Part 3, I wasn't really happy with the way that I left the objects. Here's how we interacted with the SerialCommander and MessageGenerator:


Rather than having the application need to know the details of how to generate the message and then call into the commander, we'll wrap things up into a single class that handles those details. Here's what I want my application code to look like:


So, I created a HouseController object that wraps up the message generation and serial port interaction. This has a SendCommand method that takes a device number and a command. From the HouseController class:


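(A sketch of the method based on the description below; the MessageGenerator method name and the DeviceCommands type are guesses based on the earlier parts of this series.)

    public void SendCommand(int device, DeviceCommands command)
    {
        var message = MessageGenerator.GetMessage(device, command);
        Commander.SendCommand(message);
    }
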
This method uses the static MessageGenerator class to get the message, and then it uses a private instance of the SerialCommander object (the "Commander" property) to interact with the serial port.

Faking Hardware Interaction
As I was going through creating these objects and moving code around, I ran into a restriction that I found to be a bit of a hassle. I needed to have the serial dongle plugged into my development machine in order for the code to run. If I ran the code without the dongle plugged in, I got the following error when calling into the SerialCommander object:


Since I was comfortable that the hardware interaction was working, I wanted to be able to continue to develop even when the hardware was not plugged in to my dev machine -- specifically, I wanted to keep it plugged in to the machine running the legacy application so that it would continue to function.

In order to do this, I created a simple interface: ICommander. Since the SerialCommander only has one publicly-exposed method, this interface is pretty simple:


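(A sketch of the interface; the exact signature -- including the message parameter type -- is an assumption.)

    public interface ICommander
    {
        void SendCommand(string message);
    }
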
With this interface in place, I could add the abstraction to the HouseController class (through the Controller property) and provide a fake implementation that didn't require the hardware.

The FakeCommander implementation is pretty simple:


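(A sketch of the fake implementation described below, using the assumed interface signature from above.)

    public class FakeCommander : ICommander
    {
        public void SendCommand(string message)
        {
            // no hardware interaction -- just echo the message so we can
            // see that the method is getting called
            Console.WriteLine("Fake message sent: {0}", message);
        }
    }
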
It doesn't need to actually do anything, but I echo the message to the console window so that I can see that the method is getting called.

Inside the HouseController class, I set up the property so that we could easily use Property Injection to swap out a fake commander:


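(A sketch of the property using the standard Property Injection pattern described in the linked materials.)

    private ICommander commander;
    public ICommander Commander
    {
        get
        {
            // if nothing was injected, default to the real hardware implementation
            if (commander == null)
                commander = new SerialCommander();
            return commander;
        }
        set { commander = value; }
    }
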
For more information on Property Injection, check out my materials on Dependency Injection (and specifically a blog article on the Property Injection pattern).

If we do nothing (that is, we do not set this property directly), then the first time it is used, it will automatically create an instance of the SerialCommander object. This is the default behavior that we would like to have in our production environment.

But for testing, we can override the property with a fake or test implementation. Here's our updated application code that injects the FakeCommander:


Notice that after we create the HouseController object, but before we call any methods on it, we set the "Commander" property to our fake object. This lets us run our application without actually interacting with a serial port.

Scheduling Data
What we really need to do is implement a schedule. For that, we'll need to be able to read the data from a persistent location (and in the future, add some UI that will make it easy to manage this data). I decided to keep things pretty simple. I created a ScheduleItem object that represents the data that we need:


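(A sketch of what the ScheduleItem might hold, based on how it gets used below; the property names and types are guesses.)

    public class ScheduleItem
    {
        public string ScheduleSet { get; set; }       // e.g. "Summer" / "Winter"
        public bool IsEnabled { get; set; }
        public DateTime ScheduledTime { get; set; }
        public int Device { get; set; }
        public DeviceCommands Command { get; set; }
    }
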
And I created a Schedule object which is a collection of ScheduleItems. In addition, this object is able to load data from a CSV file from the file system. I decided on a CSV file because it is fairly simple, and I already had code that I could borrow from another project.

Here's what the schedule file looks like:


And here's what the loading code looks like (in the Schedule class):


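(A sketch of the loading code, assuming Schedule is a List of the hypothetical ScheduleItem above and a guessed column order; as noted below, there's no error handling.)

    public void LoadSchedule(string fileName)
    {
        foreach (var line in File.ReadAllLines(fileName))
        {
            var fields = line.Split(',');
            this.Add(new ScheduleItem
            {
                ScheduleSet = fields[0],
                IsEnabled = bool.Parse(fields[1]),
                ScheduledTime = DateTime.Parse(fields[2]),
                Device = int.Parse(fields[3]),
                Command = (DeviceCommands)Enum.Parse(typeof(DeviceCommands), fields[4]),
            });
        }
    }
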
There isn't any error handling here, so if the data is bad, we'll run into problems. But we're good for the MVP. We'll work on making this more robust as we need to.

Now that we have the schedule data, we need to execute it somehow.

Executing Scheduled Commands
I added the scheduling functionality into the HouseController class. I'm just using a Timer for this (the Timer is coming from System.Timers -- this seemed to be the best for my needs):


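(A sketch of the timer setup; the field and handler names are placeholders.)

    private Timer scheduleTimer = new Timer(30000);   // System.Timers.Timer, fires every 30 seconds

    public Schedule Schedule { get; private set; }

    public HouseController()
    {
        Schedule = new Schedule();     // the Schedule constructor loads the data file
        scheduleTimer.Elapsed += ScheduleTimer_Elapsed;
        scheduleTimer.Start();
    }
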
In this code, we set up a timer that is set to fire every 30 seconds. In the constructor, we hook up the event handler and start the timer. In addition, notice the "Schedule" property. This gets instantiated when this object is constructed (and the constructor code of the Schedule loads up the data from the file -- we'll look at this in just a bit).

When the timer fires, it runs the following code:


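(A sketch of the handler using the hypothetical names from above; it needs "using System.Linq;".)

    private void ScheduleTimer_Elapsed(object sender, ElapsedEventArgs e)
    {
        var itemsToProcess = Schedule
            .Where(item => item.IsEnabled)
            .Where(item => TimeDurationFromNow(item.ScheduledTime) < TimeSpan.FromMinutes(1))
            .ToList();

        foreach (var item in itemsToProcess)
        {
            SendCommand(item.Device, item.Command);
        }

        // debug output so we can verify that the filter is working
        Console.WriteLine("{0}: Processed {1} of {2} schedule items ({3} active)",
            DateTime.Now.ToShortTimeString(),
            itemsToProcess.Count,
            Schedule.Count,
            Schedule.Count(item => item.IsEnabled));
    }
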
This code looks at the schedule items and filters the list based on items that are (1) enabled and (2) scheduled to occur within 1 minute of the current time (I'll talk about this 1 minute window in just a bit). This calculation is done in the TimeDurationFromNow helper method:


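(A sketch of the helper method based on the description below.)

    private static TimeSpan TimeDurationFromNow(DateTime scheduledTime)
    {
        // compare just the time portions and take the absolute value
        return (scheduledTime.TimeOfDay - DateTime.Now.TimeOfDay).Duration();
    }
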
This code is dealing with "TimeOfDay" because we want to ignore the date portion of the datetime values and just look at the times. The "Duration" method will give us the absolute value of our subtraction (so we'll always end up with a positive number).

Once we have a list of items that we need to process based on the current time, the scheduled time, and whether they are enabled, we just loop through them and call the "SendCommand" method for each one.

The last step in the timer event handler is to spit out some debug code to our console. This way we can check to make sure that our filter is running correctly.

Testing the Schedule
Testing schedulers is always fun. One way is to update the data file so that the scheduled time is in the not-too-distant future, then start the application and wait. This is not a fun way of doing things, so I cheat just a little bit.

Here's the constructor of our Schedule class:


In addition to loading the schedule from the file, I manually add 3 new schedule items for the not-too-distant future. This way, I don't have to wait very long, and I don't have to constantly update the data file.

Now, if we run our console application (and wait several minutes), we see the following output:


This may look a bit strange, so let's walk through this. The first 4 lines are coming straight from the Main method that we saw earlier. It outputs "Starting Test", turns on device #5, turns off device #5, and then outputs "Test Completed". Everything after that is from the Timer event running.

The first time the timer "ticks", it processes one record (our first test record from above). Remember that the timer fires 30 seconds after our application starts, and our first schedule item is set for 1 minute after the application starts. Since this is within the 1 minute window that we have defined above, that first command is run.

Then we have the output that shows us that 1 schedule item was processed, that we have 17 total items in our schedule (which includes 14 items from the file and our 3 items in code), and that 8 of the schedule items are active (in our file, the "Summer" items are marked as inactive).

If we look at the next set of records (from 5:35), we see that 2 schedule items are processed. The first command is sent (again) since it is still within the 1 minute window, and the second is sent since it enters that 1 minute window.

And if we follow this through, we can see all 3 of our test records go through the process.

Why the 1 Minute Window?
So you've probably noticed that we end up sending each command 3 times (30 seconds apart). This is a result of having the 1 minute window set up in our scheduler. But why do we need this?

This is based on my experience with the hardware. What I've found is that not every command gets registered by the system. I don't know if this is because the commands aren't received or if there is temporary interference in the power lines that prevent the transmission to the module. I just know that it is not 100% reliable.

Because of this, in the legacy application, I set up this processing window so that schedule commands would be sent multiple times. In practice this isn't a problem. If I send a command to turn off the air conditioner and it is already off, then nothing happens. And this is better than a command getting "missed" and a device staying on longer than intended (or not coming on as intended).

I'll need to do some long-term testing to see if this is still needed. But since the problems are intermittent, there's no way that I'll really know until the code is running for a while.

This Completes the MVP
The code isn't pretty (but it's not ugly, either). And there are still a lot of features that I'd like to add. But this completes our minimum viable product -- that is, a product that we can put into production to replace the current legacy system. Let's quickly review:

Needed for MVP:
  1. Send commands through the serial dongle
    This is the purpose of the system.
  2. Send commands for 8 devices on House Code "A"
    These are the only devices that are actively used.
  3. Fire on/off/dim events on a schedule
    To maintain the air conditioner functionality (and current lighting schedule).
We have fulfilled all of these requirements. We can send commands through our serial dongle; we can communicate with 8 devices on house code "A"; and we can send commands based on a persistable schedule. So that's all we need!

More To Come
But even though this is all I *need* to replace the existing application, this is not all that I want from the system. I would like to have a UI to set the schedule (and enable/disable schedule sets). I also want to code up the "dimming" commands (I deferred this for the MVP since I decided it was not critical to initial implementation).

I would also like to have a network-aware interface so that I can send commands remotely. And I would also like something that would keep a record of the state of each device. This last one is a bit difficult to handle. There is no way for us to query the system, which means that we need to maintain state information ourselves. And there's the added difficulty that if someone turns a device on or off using the physical remote, there's no way for our application to know about that.

Wrap Up
This has been an interesting exercise to look at an existing application and work through implementing a minimum viable product. What I discovered was that the "minimum" really wasn't that much compared to what existed in the legacy system. There were a lot of features that aren't needed (but were nice to have) and some features that aren't used at all.

The result is a very small set of features that could be implemented very quickly. It took just a few days to get everything running. Being able to release software so quickly leaves us with a good sense of accomplishment (a usable product) and gives us motivation to keep moving forward with other features.

Running through these types of exercises is important practice for us. It gives us better perspective when we're dealing with our business users. There are a lot of things that are wanted, but we can focus in on one or two items that are really needed and implement them quickly. This makes our users happy. And ultimately, this loops back around to my musing on No Estimates and Partnering with the Business.

What goes around, comes around.

Happy Coding!