So, I've been making some UI changes to my Digit Recognizer project to try to make it easier to visualize the data. Last time, we worked with the F# code so that we could experiment with it a bit. You can read about it here: Exploring the Digit Recognizer with F#.
But we were displaying the source bitmap alongside the prediction from our machine learning algorithm:
This is pretty cool, but it's a bit hard to read, and it doesn't size well. So today, we'll group the bitmap and textbox together in a button. That way, we can click on the button to show which items are incorrect:
This is one step closer to easily analyzing the data visually (which is where I'm headed with this).
The actual display output will change visually depending on what resolution you're using (I'm going to be looking at this in the future). I'm running this application at 1280x800 and showing a little under 200 digits here.
The code for this is available on GitHub: jeremybytes/digit-display -- specifically, this code references the UIAwesome branch.
Grouping the Items
Previously, I created a bitmap (to represent the source data) and a textbox (to represent the prediction) and added them to the wrap panel individually. This led to some issues when resizing the screen (the items would not always be grouped together).
Here's the code for that (where I started):
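(The original post showed this code as a screenshot, which didn't survive. Here's a rough sketch of what that first version might have looked like -- the method and member names, such as `CreateUIElements` and `DigitBitmap`, are my guesses for illustration, not the actual repo code.)

```csharp
// Hypothetical sketch of the original approach: the image and the
// prediction text are added to the WrapPanel as separate children.
private void CreateUIElements(string prediction, string imageString, WrapPanel panel)
{
    // Build a WPF Image control from the raw pixel data
    // ("DigitBitmap" is an assumed helper, not confirmed from the repo)
    Image imageControl = DigitBitmap.GetBitmapFromRawData(imageString).ToWpfImage();

    var textBlock = new TextBlock
    {
        Text = prediction,
        FontSize = 12,
        Margin = new Thickness(5),
    };

    // Because the image and text are separate children, the WrapPanel
    // is free to wrap them onto different rows when the window resizes.
    panel.Children.Add(imageControl);
    panel.Children.Add(textBlock);
}
```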
So, I added a button that would contain both the bitmap and the textbox. Here are the button-related bits:
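(Again, the screenshot of this code is missing, so here's a hedged sketch of the grouping step -- the exact names are assumptions.)

```csharp
// Hypothetical sketch: put the image and text into a StackPanel,
// then make that panel the Content of a single Button. The WrapPanel
// now wraps the button as one unit, so the pair can't be split apart.
var buttonContent = new StackPanel { Orientation = Orientation.Horizontal };
buttonContent.Children.Add(imageControl);
buttonContent.Children.Add(textBlock);

var button = new Button
{
    Content = buttonContent,
    Background = Brushes.White,
};
button.Click += ToggleCorrectness;   // handler shown later in the post

panel.Children.Add(button);
```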
This creates a button and adds the items that we created earlier. Then we add the button to the wrap panel.
This keeps the items "paired" so that we don't have to worry about resizing the application and throwing off our display.
The other thing I wanted to do was add the ability to flag the incorrect items. This obviously takes human intervention. Previously, I was scanning through the screen and counting the incorrect ones. Now that I had a button, I could add in some functionality to help with that.
In the code snippet above, you'll notice that there is a "ToggleCorrectness" method hooked up to the button's Click event. Here's that code:
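(The handler itself was also shown as a screenshot. Here's a sketch of how it plausibly works based on the description below -- the field and control names, like `errorCount` and `ErrorBlock`, are assumed, not taken from the repo.)

```csharp
// Hypothetical sketch of the click handler: toggles the button's
// background between white ("correct") and light red ("incorrect")
// and keeps the running error count at the top of the screen in sync.
private int errorCount = 0;

private void ToggleCorrectness(object sender, RoutedEventArgs e)
{
    var button = sender as Button;
    if (button == null) return;

    if (button.Background == Brushes.White)
    {
        button.Background = Brushes.LightPink;   // flag as incorrect
        errorCount++;
    }
    else
    {
        button.Background = Brushes.White;       // toggle back to correct
        errorCount--;
    }

    // "ErrorBlock" is an assumed name for the TextBlock in the header
    ErrorBlock.Text = string.Format("Errors: {0}", errorCount);
}
```

Comparing against the static `Brushes.White` instance works here because the button's background was originally assigned from that same static brush.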
When we click on one of the buttons, it toggles the background color (to light red) and also increments our error count, which is displayed at the top of the screen. We can also toggle back to "correct" if we hit a button by accident.
There were a couple other minor changes that I had to make as well -- such as making the bitmap transparent. You can check the code if you're really curious about that.
The result of this is that we can visually see the bitmaps along with the predictions and then flag the items that are incorrect:
Here we can see that there are 6 errors in our data (there might be a couple more if you look closely -- if I couldn't tell what the digit should be, then I didn't fault the computer for getting it wrong).
This makes it easier to see the items that are incorrect. It's still a bit tedious, though. Since it needs a human (me) to flag the items, I need to scan through all of the numbers. And quite honestly, my brain gets bored easily, so I'm sure that there are some items that I'm missing.
In case you're curious, this code is using the Manhattan Distance algorithm, and it is using the full training set (over 40,000 items).
Reminder: The code for this is available on GitHub: jeremybytes/digit-display -- specifically, this code references the UIAwesome branch.
This is really an interim step. As I mentioned previously, I'm really interested in seeing how different algorithms behave differently, and I'm also curious about how altering the size of the training set affects both the accuracy and the speed.
One other thing I'd like to do is add an "offset" so that I can start looking at data at an arbitrary (but repeatable) location. This will make sure that I'm not just optimizing for the data that is in the "first" part of the validation set.
So my next steps are a few more UI improvements to let me pick these different variables -- right now, they are in the code, and I have to rebuild/rerun to try different scenarios. I'd really like to be able to see the different outputs without having to alter code and rebuild the application.
I've been enjoying this exploration, and I'll keep doing it even though it doesn't have specific application. Many times, when we dig into things that don't have an immediate value, we find things that are really helpful in the future.