Wednesday, September 21, 2016

Recording Live Presentations

I've had a few people ask me about recording live presentations recently, so now's a good time to talk about how I do it.

I've been recording my presentations for about a year and a half now. There are a couple of reasons why I've wanted to record myself speaking:
  1. By watching myself, I can spot quirks that I don't otherwise notice. This helps me improve my speaking technique.
  2. I can put together clips of me speaking to create a promo video to help get me more speaking opportunities. (I never did this because I had some great professionally recorded videos to showcase before I got around to completing it.)
  3. I can post recordings that come out well to my YouTube channel.
I have fully produced two talks so far. This is from April 2016 (Central California .NET User Group, Fresno CA):



And this is from March 2015 (Nebraska.Code(), Lincoln NE):



So here's what I've done. It's not professional quality, but it gets the job done for what I was looking for.

Sound Quality is King
One of the most important things about recording live presentations is making sure that you have good audio. If there is echo-y sound that's hard to understand, no one will watch the video. I've passed up watching videos of people I really wanted to hear just because the audio was so distracting. This problem actually kept me from recording my presentations for quite a while.

For a previous discussion on the importance of audio quality, check out my article on screencasting: Jeremy's Screencast Production.

Record Audio Separately
Don't rely on the camera or your computer to pick up the audio. Record the audio separately. I use a Zoom H1 digital recorder (Amazon link) that I'm very happy with:



This has a built-in microphone array which would be good for recording interviews or something where you don't move around a lot. For my presentations, I pair this with a Sony ECMCS3 lapel microphone (Amazon link):


This is an inexpensive microphone, but the quality is surprisingly good. I can clip the microphone to my collar and stick the Zoom recorder in my pocket.

Note: If you take this approach, make sure the microphone is plugged in all the way to the recorder. I have a couple presentations that are nothing but a recording of the inside of my pocket.

What I Like
I am happy with the results of the Zoom recorder. It runs on AA batteries and records to a micro-SD card. Battery life is very good, and 2-1/2 hours of audio takes about 1-1/2 gigs of space (which isn't much when you can buy 64 gig cards really cheaply).

I use the "Auto Level" feature of the Zoom recorder (you can see the switches on the back). This has a few quirks, but it's easy to fix in editing (more on that below).

You'll also notice that there is a "Hold" switch. This will save you from accidental button presses, particularly if you put the recorder in your pocket.

Repeat the Question
A general piece of speaker advice is to always repeat the question so that other members of the audience can hear it. This is especially important when recording because you will not pick up the audience at all on the lapel microphone. So you want to repeat the question for the recording as well.

Screen Capture
I use Camtasia Recorder (from TechSmith) for screen capture. I've spent a bit of time talking about Camtasia in my screencasting article, so I won't repeat it here. I'll have a bit more to say about it when we get to editing.

One thing to be aware of is that the Camtasia Recorder will capture some keystrokes for its own use. So be prepared to re-map some keys in Camtasia or be ready to use alternate methods in your demo. The most prominent one that I run into is that F10 is "pause" for Camtasia, so in Visual Studio, I'll use the toolbar buttons for debugging instead of F10 to step through code.

Video
High-quality video was not really important to me. I primarily wanted to be able to see myself in action (and again, potentially put together a promo video). Rather than spend a lot of money on a video camera, I spent a couple hundred dollars on a little Sony HDR-AS100V (Amazon link):


This is the Sony version of a GoPro. It's really designed as an action camera, so the lens is a bit fish-eyed. I got this a while back, so I'm sure there's an improved version of this now.

The recording quality is good: 1080p. And if you crop the video (like is shown in the first video in this article), then you don't see the fish-eye. Note: the second video in the article was actually recorded first, and I learned some things about editing that I was able to use in the other video. But more on editing later.

I have the camera mounted on a small tripod (this was included with the Zoom kit that is linked above):


It's nice and small to stick in a bag with the other gear. But the legs expand to give it a decent height when placed on a table. Here's the rig with the camera attached:


I usually put the camera on a table in the front row (which gives a bit of a side-angle). I've put it in the back of the room a few times, and the image was a bit small. When it's closer, it's easier to see my movement and mannerisms.

What to Watch Out For
There are a couple things about this camera to watch out for. First, it can only be charged by the USB port (there's a removable battery, but you would have to buy a separate charger for that). It cannot be plugged in and record at the same time, so you're limited by battery life.

In my experience, the battery lasts about 1 hour 45 minutes. This can usually accommodate one of my presentations at a user group, but I make sure to start it just before I begin, and I'll even pause it at breaks to save a bit of battery.

This also uses micro-SD for storage. And it runs about 4 gigs for 35 minutes of video. I use those weird increments because it actually splits up video files into 35 minute chunks. Again, with a 64 gig card, I've never had any problems with space. The limiting factor has always been battery life.

Audio Everywhere
Record audio in all three places: digital recorder, screen capture, and video camera. Even though the final audio will be from the digital recorder, it's easy to use the sound waves to line up the various tracks when editing the video.

And here's a tip I picked up from my friend and fellow speaker Justin James (@digitaldrummerj):
After starting all 3 recorders, clap your hands.
This will create a very visible audio spike that you can use to line up the various tracks. (Just like movies use the clappers to start a take.)

Lots of Files
The recordings result in lots of files:


The ".mp4" files come from the video camera. As mentioned, this records in 35 minute chunks, so these need to be put together in editing. The ".trec" file is the screen capture from Camtasia. And the ".wav" file comes from the Zoom audio recorder.

The ".camproj" file is the Camtasia project. We're headed there next.

Putting Everything Together
Now that we've got all of the sources, we need to put everything together in Camtasia Studio. First job is laying down the tracks and lining them up. This is where the audio helps A LOT.

First, here's a look at the Camtasia interface:


The order of the tracks is important: the track that is "on top" (in this case, Track 4) appears over any other tracks.

Track 1: Audio track (not visible). This comes from the Zoom digital recorder and it will be the audio that we ultimately use.

Track 2: Screen capture (left side of screen). This comes from the Camtasia Recorder.

Track 3: Video capture (right side of screen). This is from the Sony video camera.

Track 4: Overlays (text at top of screen). We'll take a closer look at this a bit later.

Lining Things Up
After getting the media in the right slots, we have to line everything up. For this, we can look at the audio for each track:



We can use the audio tracks to line everything up. Notice that the peaks line up across the media. And as noted earlier, a "clap" at the start can create a spike to make it easy to see where the start point is.

In this case, I've already trimmed down the video and audio, so we can't see that part. Once everything is lined up, you can trim the excess at the beginning and end of the tracks.

You'll see different intensities when looking at the audio. This is because the computer is on one side of the room, the camera is on the opposite side, and I'm walking around with the voice recorder in the middle.

Audio Processing
After getting all of the tracks lined up, we can use the Audio tab in Camtasia to simply mute the 2 video tracks. This will leave the audio just from the Zoom recorder.

When I'm doing screencasting, I don't normally use the audio leveling tool (as mentioned previously). This is because screencast recording is in a much more controlled environment where I'm sitting still. Things are a bit different when I'm doing things live and don't get a "second take".

So I use the volume leveling tool in Camtasia:


Just select the audio track and click "Enable volume leveling". This smooths out the peaks, and the audio track now looks more like this:


This is a bit different from the audio track we saw above.

I do this to smooth out some of the "Auto Level" from the Zoom recorder (remember I mentioned that above?). The auto-leveling of the Zoom recorder generally works well, but if you cough, tap the microphone, or cause some other "spike", the recorder will lower the sound levels, and it takes a few seconds to re-adjust.

The volume leveling in Camtasia will correct this. It's not ideal. But I also don't have a professional audio technician doing these recordings for me. The audio still comes out a lot better than many of the "live recordings" that I come across.

Video Processing
There are a couple different approaches to take when trying to show both the screen and the person at the same time. For the April 2016 video, things were pretty easy:


This projector had a 4x3 aspect ratio, so my screen capture had that same ratio. This made it really easy to put the screen and video side-by-side (although it does look a little strange in the beginning when I'm showing my wide-screen slides).

In contrast, in the March 2015 video, I put in a lot of edits to show the live video when that was important and to show the screen when that was important. This was *a lot* of work. It is much easier to do side-by-side or an overlay if you can get away with it.

Since the videos were side-by-side, the order of the tracks wasn't as important (since they don't overlap). But if you're overlaying or doing other interesting things, you want to make sure that the right video is "on top".

Stitching Video
As mentioned above, the Sony video camera records in 35 minute chunks, and these end up as separate files. The good news is that if you put the two media files right next to each other in Camtasia (on the same track and touching each other), there is no gap or stutter at all.

Cropping and Panning
It's possible to crop and pan inside Camtasia to show just parts of the video. Some of these features are really hard to find. For example, the "crop" button is in the top right corner of the preview window:


This is entirely non-obvious, and I haven't found the "crop" feature on another screen or menu (maybe I'm missing it).

Here's what the video looks like un-cropped:


So you can see that I cropped out quite a bit of the frame. Cropping also can make the "fish-eye" of the camera a bit less obvious. This is why having the 1080p recording is nice. Even if you don't show the video full screen, you can crop and zoom into different areas and still have decent quality output.

This video was shot at the Central California .NET User Group in Fresno, CA. It's a small group; there were 10 people there that night. But it's not always the size of the room that makes a good presentation. Sometimes the smaller groups are better because you get more questions and interaction. (And remember to "repeat the question" even in a small room. Otherwise, you'll have an audio gap in your recording.)

Callouts
The last step in the editing process is to add callouts to the video. I've talked about adding callouts in screencasts, but it's also important when showing live presentations.

Here's a good example of why:


You'll notice in the video that I'm pointing at the screen. This is pretty common for me (especially when I'm talking in a training room). But it's impossible to tell what I'm pointing at.

This is where the "Callouts" feature in Camtasia comes in handy. I can highlight the area on the screen that I'm talking about. In this case, I'm pointing to a method signature and using a red rectangle to highlight it. This is on the "top track" (Track 4 here) so that it shows up over everything else. I've added similar callouts throughout the presentation.

Rendering
After getting all the bits together, it's time to render and upload. I try to render at the highest resolution I can (which is generally 1080p). Then when I upload to YouTube, they'll process it down to lower-quality versions.

Rendering projects with live video takes significantly longer than rendering screencasts, so be prepared for that. This is compounded because my screencasts are generally in 20 minute chunks and will render in about 20 minutes. But a live presentation is over an hour long, so it's both more material and also a slower render.

Make sure that your computer is well ventilated during this process.

Wrap Up
This is a lot of work. But it's not that hard to do. The equipment cost me around $500 (which isn't cheap), but I've gotten quite a bit of use out of it. The Zoom recorder in particular has been a great little tool. I'm surprised at the quality that I can get from this little plastic device. This is the recorder that I also used to record bits of my banjo playing.

I have recorded about 20 of my presentations. And I've gone through the process of laying down the tracks, lining up the audio, and rendering each of them. Most of them have just been for my personal use (so you don't see all of the work behind those). But a couple have made it out into the wild, and when I come across a good presentation, I'll be sure to post it.

If you decide to start recording and producing your own presentations, I hope that my experience will make the path a bit easier for you.

Happy Speaking!

Tuesday, September 13, 2016

"User Driven Development" at AIM hdc

Last week, I had the great opportunity to give the Thursday evening keynote at AIM hdc in Omaha NE. I was honored to be invited to talk, and I had a great time.

Preparation
The topic was "User Driven Development", but I probably should have called it "Making the World a Better Place". Since this talk was me telling stories (with no code, gasp), I figured that I would take the same approach as I did previously with "Becoming a Social Developer" at NDC Oslo.

This meant that I hand-drew my slides:


I picked this up from David Neal, and it has been quite useful for these types of talks. I'm not quite as practiced as David at this point, so I've got simple stick figures:


And rather crude drawings (this is an application that is on fire):


The talk itself is based on a number of stories from my past which have made it into several articles over the years. You can check out the articles (and also the presentation slides) here: User Driven Development.

I ended up with a total of 109 slides. This sounds like a whole lot of slides for a 50 minute talk, but they tend to go by pretty quickly.

Presentation
The presentation itself went very smoothly. I was fortunate enough to have a group of friends sitting at a table near the front. (And they were nice enough not to heckle me during the talk.) They took some great pictures. Here's the view from the front (thanks to Heather Downing (@quorralyne) for this one):


And Cory House (@housecor) took a great panoramic shot from the back of the room:


The room held 650 people, so I figure that I had close to 500 in the audience.

The timing came out *almost* perfectly. Since I was telling stories, the timing for each part wasn't exactly metered out. About halfway through my talk I had this slide:


This related to a story about an application that would give the current time in whatever city you selected. I was *really* hoping that this would be the actual time in Omaha, but I looked at the time just before I got to this slide and found that it was 4:39 p.m. (Missed it by "that much".) But it gave me a chance to crack a joke about how close I was.

Response
The response was very positive. My Twitter notifications went pretty crazy that evening (and my definition of crazy is probably quite a bit different from other people's -- it was 20 times more than I normally get).

A couple of points from the talk stuck out for the audience. This one was tweeted out by Heather Downing, and a couple other people picked up on this point as well:


Our job as developers is not to type code.
Our job is to solve problems.
I was also happy to see Paul Oliver (@ItsPaultastic) pick up on this:


I am not "just" a corporate developer.
I am a corporate developer.
I've hated it when I have been referred to as "just" a corporate developer. This belittles the awesome work that we do. *All* developers can make the world a better place. Making someone else's job easier doesn't seem like it's world-changing, but it makes things better for that user. And that makes a difference.

Christian Peters (@TDDdev) wrote a review of the Thursday keynotes, and I was happy to see some of the key points that he picked up, including:


and


In addition, I had several people talk to me about the presentation that evening and the next day. And I even received some email after the event was over.

Wrap Up
Of course, I also found time to talk about "Becoming a Social Developer" (this was requested by the event organizer). And it was great to get a mention from Cory House:


This is the first full-length keynote that I have given. Previously, I'd had 10 minutes on stage (sharing someone else's keynote). The best compliment that I got was from Cory (who has given keynotes at several events). He said that he couldn't believe that was the first time I'd given a keynote. So I guess I looked like I knew what I was doing. 😉

I don't know if I impacted everyone in the room (probably not), but I did make an impact on some people. And that means that I did succeed in making the world a better place.


Happy Coding!

Monday, September 5, 2016

Help Those Behind You

The past few days have been pretty rough for me. I managed to get focused on the wrong things, and that had a pretty bad effect on me. I felt useless. I *know* I'm not useless, but that's different from feeling useless.

In my case, that means I need to reset my focus:
Help Those Behind You
Learn from Those Ahead of You
We are all in different places on our journey. And when we look at other people, sometimes we're on the same path, sometimes we're on different paths, and sometimes our paths cross.

Wrong Focus
Where I got into trouble was focusing on people who are ahead of me (and on some people who are on different paths). Specifically for me, this might mean a person who was selected to speak at a conference I was rejected from. It may mean a person who is teaching topics that are deeper and more interesting than what I teach. It may mean a person who is more productive at producing video content.

There will always be someone better than you at anything you can come up with. We want to look to those people because we can learn from them. But it's very easy to get focused on not being as good as someone else. When we get into that mindset, it can make us feel useless.

Right Focus
I find that I'm at my best when I'm helping someone else be better. In my world, that often means taking a previously obscure topic and making it clear and approachable.

And I can do this for everyone who is behind me on the path. I've had trouble learning certain topics. I've made mistakes and had failures. I've had some great successes.

I think about how I can guide the people behind me:
"Hey, watch out for that pothole."
"Take this detour; it's a bit longer, but you'll get there faster and easier."
"Slow down a bit here; there's a speed bump." 
When I do this, I know that I'm not useless.

Hang on to Positive Feedback
One thing that I do to keep myself on track is to review the ways that I've been helpful to people. That means digging through feedback from my presentations and comments on my videos & website.

When I have something like this:
Thank you so much; DI has been one of my weaknesses for years, but you've finally made it clear. It makes sense AND looks elegant to boot. Your presentation was clear, concise, and a simple explanation of confusing (at least to me) topic. Thank again!  
or this:
Thanks, Jeremy. Your tutorials really made me understand delegates properly, I now feel like I can identify when the proper time to use them is. I'll continue to check out your videos!
Then I know that I've helped someone along the path.

And now I don't just know I'm not useless; I *feel* like I'm not useless.

Continuous Learning
This doesn't mean that we stand still to help others. I know that I need to keep moving forward as well. I can learn a lot from the people who are ahead of me, and I try to do that as much as I can. (And I often don't do that as much as I should.)

In addition to learning from those ahead of us, we should also keep track of the not-so-positive feedback as well. I've used this to slowly improve over the years. A very specific thing I remember is someone who asked me to repeat questions from the audience when I was giving a talk -- it's usually difficult for other people to hear the question. (And thanks to Andrew, an attendee at one of my code camp sessions, for being the first one to point this out to me.) It took me a really long time for that to become a habit. But it is a habit now, and it's been a very good habit to have. (And I usually share that with other speakers when I notice that they don't do it.)

Sometimes you get feedback that is not positive and not helpful. It can hurt when this happens. We like to please everyone. But it's also impossible to please everyone. So don't focus too much on the negatives.

Instead, look for places to improve.

You Don't Have to be an Expert
But I've also found that it doesn't matter how far along the path you are: you can still be helpful.

As a recent example, I did a dive into learning F# by going through some of the Euler problems. I am not very far along the path of using F#. And I was using the Euler problems to better understand things myself. I wasn't sure if the articles that I wrote were useful to other people. Most of the feedback that I got was from the people ahead of me who sent in some good advice to help me move forward.

But when I was at Music City Code a few weeks back, I talked to 2 people who told me that they were following those articles and that it was helpful to them. That was encouraging for me considering that I knew there was a lot of information on F# out there, a lot of information on Euler problems, and all of it from people who were way more experienced than I was.

I encourage other people to try speaking at least once. A lot of people think that they don't have anything that's worth sharing (I know that I spent a good chunk of time thinking that). But it turns out that if you know 10% more than someone else, you are the expert in the room. It doesn't matter that there are people who know way more than you do. If you know just a little bit more than the people you're talking to, you can be helpful.

I usually tell people to think of that one moment in their career where they said, "I wish someone had told me about this earlier." This usually comes after struggling through a tough problem, or learning about something in the standard library you didn't know was there. If you've had that experience, then other people have had it too. You have something to share to help people avoid that same problem.

You don't have to be an expert.

You know more than someone else; you can help that person.

Moving Forward
So after spending a few days focusing on how far behind I am, I'm going to start focusing on how far ahead I am. I've never considered myself an expert in anything; that's because I know that there are lots of people who know lots more than I do. But there are a lot of people who are behind me on the path. And I can help them.

Reach out and help someone behind you.

Happy Coding!


Friday, September 2, 2016

September 2016 Speaking Engagements

I've got several speaking events scheduled in September (and a whole bunch in October). I'm back on the road next week.

As always, if you'd like me to come to speak for your company or at your event, just drop me a note. I've got quite a few Presentations and Workshops available.

Wed-Fri, Sep 7-9, 2016
AIM/hdc
La Vista, NE
Conference Site
o User Driven Development (general session)
o DI Why? Getting a Grip on Dependency Injection

I'm looking forward to heading to Nebraska next week. It's my first time in the Omaha area. And I've also been given the opportunity to give the closing session on Thursday night. I've got the slides together (after some more crappy drawing), and the talk should be a lot of fun.

Thursday, September 22, 2016
Corporate Event
Irvine, CA
o Test-Driven Development in the Real World

Thursday, September 29, 2016
SouthBay.NET
Mountain View, CA
Meetup Link
o I'll Get Back to You: Task, Await, and Asynchronous Methods

I've been out to SouthBay.NET several times, and it's always a fun group. It's also great to see my friend Theo Jungeblut (@theojungeblut) who heads things up.

Friday, September 30, 2016
Code Stars Summit
San Jose, CA
Workshop Details
o Getting Better with C#: Interfaces & Dependency Injection

This is a full-day workshop on two of my favorite topics. But at a larger scale, it's really about how we use abstraction to make our code easier to extend, maintain, and test. There are still some spots open for those who want to join me.

Sat-Sun, Oct 1-2, 2016
Silicon Valley Code Camp
San Jose, CA
Event Site
o IEnumerable, ISaveable, IDontGetIt: Understanding .NET Interfaces
o I'll Get Back to You: Task, Await, and Asynchronous Methods

I know this event is not in September, but it's happening while I'm up in the Bay Area, so it fits in well right here. The Silicon Valley Code Camp is one of the largest community events on the west coast. It was fun getting the chance to be on video last year. I'm looking forward to heading back again this year.

A Look Ahead
October is looking to be a lot of fun with a lot of travelling. The first weekend, I'll be at Silicon Valley Code Camp in San Jose, CA (as already mentioned). The following week, I'm headed to the Desert Code Camp in Chandler, AZ. A bit later in the month, I'm going to St. Louis, MO for DevUp and then to Des Moines, IA for Prairie.Code().

So, lots to do, lots of traveling, and lots of days away from home. But places I really want to be.

A Look Back
August was a bit of a light month: only one event -- but it was an amazing event: Music City Code. I've already written a bit about how I got to talk to a ton of great people. And I also had a lot of fun giving my talks.

In addition to my regular sessions, David Neal (@reverentgeek) was nice enough to share his evening keynote with me. This gave me a chance to share "Becoming a Social Developer" (one of my favorite topics) with a new group of people.

The response was great. Here are a few of the tweets that went around:


I had a great conversation with Spencer (@schneidenbach) the morning before the workshops. And I ended up spending quite a bit of time with him the rest of the week. It was great to get to know him better.



Thanks to everyone for the great pictures.

A really great thing about Music City Code is that there was a photographer taking really great shots during the whole event. Check out the gallery for yourself: Facebook Gallery.

He made me look very inspirational during David's keynote:

So Inspirational...

And he also got some good shots of me in my presentations. I guess there are 3 important things about Unit Testing:


And I'm not quite sure what I was doing in my Dependency Injection talk. I guess I was singing Thriller:

'Cause this is Thriiiiiiller

Needless to say, I had a really awesome time at Music City Code. It's one of the events that will stick out this year.

Happy Coding!

Wednesday, August 31, 2016

Code Coverage Should Not Be The Goal

Metrics (such as code coverage) are useful tools. They can tell us if we're headed in the right direction. But when a particular metric becomes the goal, we won't get the results that we expect.

I was reminded of this yesterday as I went through the drive-thru of a restaurant. So, it's time to take a closer look at the real problem of "metrics as a goal".

A Good Metric Becomes Useless
Yesterday, I was in the drive-thru of a local fast food restaurant. After paying at the window, the employee asked me to pull around to the front of the building, and they would bring me my food there. They have asked me to do this three times over the last couple months, so it stuck out a bit more this time.

Here's the problem: The employees at the drive-thru were being judged on the length of each transaction. The restaurant has sensors set up to see how long each car is at the window (and the shorter, the better). To get me off of the sensor, they asked me to drive around to the front of the restaurant. At this point, the employee has to walk around the counter and come outside to bring me the food.

This sounds like a good metric to check ("how long does it take to serve each customer?"). But the metric became the goal. The effect is that the employees were actually working *harder* to meet that goal. It takes them longer to walk out to the front of the restaurant (and it is work that is completely unnecessary). And this also means that it takes longer to actually serve the customer.

Because the metric became the goal, the employees were working harder to meet the metric, and the actual numbers lost their value -- they no longer know how long it *really* takes to serve each customer.

Code Coverage as a Goal
Bosses love metrics because they are something to grab on to. This is especially true in the programming world where we try to get a handle on subjective things like "quality" and "maintainability".

Code Coverage is one of the metrics that can easily get mis-used. Having our code covered 100% by unit tests (meaning each line of code is represented in a test) sounds like a really good quality to have in our projects. But when the number becomes the goal, we run into problems.

I worked with a group that believed if they had 100% code coverage, they would have 0 defects in the code. Because of this, they mandated that all projects would have to have 100% coverage.

And that's where we run into a problem.

100% Coverage, 0% Useful
As a quick example, let's look at a method that I use in my presentation "Unit Testing Makes Me Faster" (you can get code samples and other info on that talk on my website). The project contains a method called "PassesLuhnCheck" that we want to test.

As a little background, the Luhn algorithm is a way to sanity-check a credit card number. It's designed to catch digit transposition when people type in numbers manually. You can read more about it on Wikipedia: Luhn Algorithm.
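For illustration, here's a minimal sketch of the Luhn checksum. The talk's actual method is C# (shown below, from the article I link to later); this sketch is just in Python to show how the algorithm works, and the function name is my own:

```python
def passes_luhn_check(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum.

    Working from the rightmost digit, every second digit is doubled;
    if doubling produces a value over 9, subtract 9 (same as summing
    its digits). The grand total must be divisible by 10.
    Raises ValueError if the string contains non-numeric characters.
    """
    total = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)          # ValueError on non-numeric input
        if i % 2 == 1:           # every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

print(passes_luhn_check("4012888888881881"))  # well-known test card number → True
print(passes_luhn_check("4012888888881882"))  # last digit changed → False
```

Notice that transposing or mistyping a digit changes the total, which is exactly the kind of manual-entry error the check is designed to catch.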

So let's write a test:


This test is (almost) 100% useless. It calls our "PassesLuhnCheck" method, but there are no assertions -- meaning, it doesn't check the results.

The bad part is that this is a passing test:


This doesn't really "pass", but most unit testing frameworks are looking for failures. If something doesn't fail, then it's considered a "pass".

Note: I said that this test is *almost* useless because if the "PassesLuhnCheck" method throws an exception, then this test will fail.

Analyzing Code Coverage
Things get a bit worse when we run our code coverage metrics against this code. By using the built-in Visual Studio "Analyze Code Coverage" tool, we get this result:


This says that with this one test, we get 92% code coverage! It's a bit easier to see when we turn on the coverage coloring and look at the method itself:


Note: I didn't write this method, I took it from this article: Extremely Fast Luhn Function for C# (Credit Card Validation).

The blue represents the code that the tool says is "covered" by the tests. The red shows the code that is not covered. (My colors are a bit more obnoxious than the defaults -- I picked bold colors that show up well on a projector when I'm showing this during presentations.)

So this shows that everything is covered except for a catch block. Can we fix that?

From my experience with this method, I know that if we pass a non-numeric parameter, it will throw an exception. So all we have to do is add another method call to our "test":


This test also passes (since it does not throw an unhandled exception). And our code coverage gets a bit better:


We now have 100% code coverage. Success! Except, our number means absolutely nothing.
When a number becomes the goal rather than a guide, that number can easily become useless.
Code coverage is a very useful metric. It can tell us that we're headed in the right direction. If we have 0% code coverage, then we know that we don't have any useful tests. As that number gets higher (assuming that we care about the tests and not just the number), we know that we have more and more useful tests. We just have to be careful that the number doesn't become the goal.

Overly Cynical?
Some people might think that I'm overly cynical when it comes to this topic. But I've unfortunately seen this behavior in a couple different situations. In addition to the restaurant employees that I mentioned above, I ran into someone who worked only for the metric -- to the detriment of the customer.

Many years ago, I worked in a call center that took hotel reservations. The manager of that department *loved* metrics. Every day, she would print out the reports from the previous day and hang them up outside her office with the name at the top of the list highlighted.

There were 2 key metrics on that report: number of calls answered and average talk time. "Number of calls answered" means what you think it means: the number of calls that a particular agent answers in an hour. "Average talk time" tracked how long the agent was on the phone with each customer.

There was a particular agent who was consistently on the top of the report whenever she had a shift. But there was something that the manager didn't know: the agent was working strictly for the metrics.

This agent always took a desk at the far end of the room (away from the manager's office). Then she would only answer every 3rd call -- meaning, she would answer and then immediately hang up on 2 out of 3 customers. This got the "number of calls answered" number up -- she was answering 3 times more calls than otherwise. This got the "average talk time" number down -- 2 out of 3 calls had "0" talk time so the average went down. Since the metrics were up, she could take extra breaks and no one would notice.

Not The Goal
So maybe I am overly cynical when it comes to metrics. But I have seen them horribly misused. We can have "short" drive thru times while making the experience longer for the customer. We can have "100%" code coverage without actually having useful tests. We can have "short" talk time because we hang up on our customers.

Measuring is important. This is how we can objectively track progress. But when the measurement becomes the goal, we only care about that number and not the reason we're tracking it.

Use those numbers wisely.

Happy Coding!

Monday, August 29, 2016

Being Present - Mid-Year Review

At the beginning of this year, I made a commitment to Being Present at events where I'm speaking. I've been thinking about this the last couple days, so it's probably time to put down some specifics. (And yes, I know it's a bit past mid-year, but we'll just ignore that.)

Lots of Time Away from Home
In case you haven't figured it out, I really like speaking. I like helping developers take an easier path around the hurdles that I had to get over when I was learning these topics. Since January I've spoken at 22 events. These range from local user groups that are 8 miles from home to conferences that are 9 time zones away from where I live.

In all, I've spent 54 days on the road. A regular job would advertise that as 25% travel. That's a lot more than I've done in the past (and I've still got several more trips before the year is out). Fortunately, I don't have much that keeps me from traveling (the cats pretend to not get lonely), so I'm taking advantage of the opportunity while I can.

So how are things going?

Awesome Interactions
I've had a ton of awesome interactions this year. I first made it to the central time zone last year, and I've made some really good friends who make the rounds in that area.

Music City Code is freshest in my mind (since I was there a little over a week ago). It was really great to spend some time with Eric Potter (@pottereric) who I think I first met at Nebraska.Code() last year. Also, Cameron Presley (@pcameronpresley) who I spent some time with at Code PaLOUsa in Kentucky earlier this year. I also had some good conversations with Chris Gardner (@freestylecoder) and Heather Tooill (@HeatherTooill) -- I've seen both of them at other events, but never really sat down to talk. It was great to get to know them better.

Other people I got to know for the first time included Hussein Farran (@Idgewoo), Jesse Phelps (@jessephelps), Paul Gower (@paulmgower), and Spencer Schneidenbach (@schneidenbach).

In addition, I got to catch up with people who I know well from other events, including (but not limited to) Ondrej Balas, Justin James, Jim Wooley, Jeff Strauss, James Bender, Duane Newman, Kirsten Hunter, David Neal, Phil Japikse, and Paul Sheriff. (Sorry, I'm too lazy to include links; I know I've mentioned them in the past.)

And this is really just from Music City Code. If I look back at the other events I've been to, I've met some great people and been able to get to know them better (as an example, I met Matt Renze (@MatthewRenze) when we shared a ride from the airport to CodeMash; we both went to NDC London; and we hung out again at KCDC). And speaking of KCDC, it was great to spend some time with Cori Drew (@coridrew) who I first met at That Conference last year, and Heather Downing (@quorralyne) who I first met at Nebraska.Code() last year and got to hang out with again at Code PaLOUsa. (More on KCDC: A Look Back at June 2016.)

This makes it sound like I only hang out with other speakers, but that's definitely not the case. I tend to spend a bit of additional time with speakers because we're often staying at the same hotel and/or looking for things to do after all the local folks go home for the night. And repeated interactions at different events reinforce these relationships.

I have great conversations with the non-speaker folks, too.

Other Interactions
I'm always surprised at the folks that I end up running into over and over at an event. At Music City Code, I had a conversation with Eric Anderson one morning, and we kept running into each other throughout the event.

At Visual Studio Live! in Austin, I ended up having dinner with Andrew, Mike, and Mike (differentiated as "Jersey Mike" and "Baltimore Mike"). None of us knew each other before the event, but we walked over to an event dinner together, ended up talking throughout the week, and even rounded things out with really excellent barbecue on the last night.

I made a ton of new friends at NDC Oslo (I mentioned just a few of them previously). CodeMash was awesome because I got to sit down with Maggie Pint to get to know her better (and you can read more about that in the NDC article).

Okay, so I'm going to stop now. I've been going through my notes from the various conferences and there are too many awesome people to mention. I've met a ton of great people. The conversations are useful because I get to hear what other people are successful with and what they are struggling with. Even if those relationships don't continue, we're still the better for having had the conversation.

And when the relationships do continue, it's a great thing.

Being Present
I credit these conversations and these relationships to "being present" at the event. I'm around during the morning coffee time before the event. I'm around during lunch time. I'm around at the breaks. I'm around for the dinners and after parties (with some caveats). And because I know that I can sleep when I get home, I try to be around for the hotel lobby conversations late in the evening.

This gives me a lot of opportunities to interact. I'm not always successful, but the more I'm available, the more conversations I have.

Stepping Out Early
I have stepped out early from parts of events. This is actually something that I put into my original commitment:
  • This also means that I will be available at the noisy, crowded receptions that clash with my introvert nature (although I reserve the right to find a small group of people to go have coffee with in a quieter location).
I don't usually last very long at receptions or after parties. As an introvert, the noise and activity are overwhelming and suck out all of my energy. So I usually try to find a group of folks where I can "anchor". Sometimes this lets me stay at the party, sometimes it means that we go off somewhere else.

For example, at CodeMash there was a reception at a bar that was *very* loud. But I managed to get into a circle of 4 or 5 people (and stay in that circle), so I was able to manage by focusing on the conversation with the people around me. I managed to do the same thing at the KCDC party. I walked around the venue a little bit and had some good (short) conversations. But when I saw that I was running out of energy (I even stepped outside for a bit), I found a table of folks where I could "anchor". I could focus on the 5 or 6 people at the table and block out the rest of the activity.

Other events played out a bit differently. At the Music City Code party, things were extremely loud. I had a couple good conversations, but it was overwhelming. A few of us ended up going upstairs to the restaurant (which was a bit quieter) -- our group kept getting bigger as more people stepped out for a "break". I think we ended up with 6 folks having dinner. I went back down to the party for a little while to make sure I had a chance to say goodbye to folks I wouldn't be seeing again. And I ended up talking with Erin Orstrom (@eeyorestrom) about introvert & extrovert developers.

The party at NDC Oslo had a couple bands. I kind of wanted to stay for a little while to hear them, but I ran into a group of folks who were going out to dinner. Since I knew I wouldn't last long at the party, I decided to take the opportunity to go to dinner with Evelina Gabasova (@evelgab), Tomas Petricek (@tomaspetricek), and Jamie Dixon (@jamie_dixon).

I'm still working on how I can best deal with the overwhelming situations. I'd like to be present for the entirety of those, but I know that I need to take care of myself as well.

Tough Decisions
As expected, there have been some tough decisions this year. This is the first year that I've had to decline events because I was accepted at more than one for the same time period. That's a good problem to have, but I want to avoid it as much as possible. It's hard enough for organizers to select speakers and put together a conference schedule; it's even worse when one of the selected speakers can't make it.

When there are multiple events during a particular week, I've decided to submit to only one of them. This has been tough because I don't always make the right decisions. I've "held" a week for a particular event (that I was pretty sure I'd get selected for), and then I don't get selected. By that time, it's too late to submit to the other event for that week. The result is that I had some gaps in my schedule that I would rather have not had. But I'm just playing things by ear at this point. I'm not sure what the "right" events are.

As an example, I would really like to be part of the first TechBash (particularly since Alvin Ashcraft (@alvinashcraft) has been such a great support in sending folks to my blog). But I held that week for another event that I had submitted to (actually 2 more local events that were back-to-back). One of those events didn't accept me; had I known that, I would have planned differently. But it also opened up an opportunity for me to do a workshop at Code Stars Summit (there's still space available), so I've got something good to look forward to that week.

It has been hard getting rejected by events that I really wanted to be at. And it's even harder when the event is happening, and I'm watching folks online talk about how awesome it is. Rejection is part of the process, though. It's normal, and it doesn't reflect on who you are as a person -- at least I keep telling myself that :)

There are some events that I went to this year (and I'm really glad that I did), but I won't be submitting again next year. These are also tough decisions. If you really want me to be at your event, contact me personally, and I'll see what I can arrange. I try to move stuff around for people who send me an invitation.

Falling into Success
I think that my decision to only submit for events that I really want to attend helps me stick with my commitment. If I'm at an event that I want to be at, then I'm more likely to be engaged and excited about it.

I've had some awesome opportunities as a speaker this year. I'm very thankful to everyone who comes to hear me speak and for those who tell me that it was useful to them. I'm looking forward to the opportunities that are still coming this year (Upcoming Events). And I'm also excited about some events that are coming up next year -- I'll announce those as soon as they are official.

In the meantime, I'm glad that I'm conscious about "being present" at the events I'm speaking at. It gives me lots of opportunities to meet new people, catch up with old friends, and expand the amount of awesome that I get from each event. And hopefully it expands the amount of awesome for the folks I talk to as well.

Happy Coding!

Monday, August 15, 2016

Recognizing Hand-Written Digits: Getting Worse Before Getting Better

I took a stab at improving some machine-learning functions for recognizing hand-written digits. I actually made things less accurate, but it's pointing in a promising direction.

It's been a long time since I first took a look at recognizing hand-written digits using machine learning. Back when I first ran across the problem, I had no idea where to start. So instead of doing the machine learning bits, I did some visualization instead.

Then I got my hands on Mathias Brandewinder's book Machine Learning Projects for .NET Developers, and he showed some basics that I incorporated into my visualization. I still didn't know where to go from there. Recently, I've been doing some more F# exploration, and that inspired some ideas on how I might improve the digit recognizers.

To take a look at the history of the Digit Display & Recognition project, check out the "Machine Learning (sort of)" articles listed here: Jeremy Explores Functional Programming.

Blurring the Results
My first stab at trying to improve the recognizers came from reading Tomas Petricek's book Real-World Functional Programming. In the book, he shows a simple function for "blurring" an array:


There's a lot going on here, and I won't walk through it. But this takes an array of values and then averages each item with its neighbors.

Here's an example that creates an array of random values and then runs it through the "blurArray" function:


If we look at the output, the first array is a set of random numbers. The second output shows the result of running it through our blur function one time.

The last result shows the result of running through the blur function three times. And we can see that the values get "smoother" (or "blurrier") with each step.
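The shape of that function can be sketched in Python (this is my approximation of the idea, not Petricek's F#):

```python
import random

def blur_array(values):
    """Average each element with its immediate neighbors
    (the two edge elements only have one neighbor each)."""
    blurred = []
    for i in range(len(values)):
        window = values[max(0, i - 1):i + 2]   # up to 3 elements
        blurred.append(sum(window) / len(window))
    return blurred

random.seed(1)
data = [random.random() for _ in range(10)]
once = blur_array(data)
thrice = blur_array(blur_array(once))
```

Each pass narrows the spread between neighboring values, which is why repeated applications look progressively "smoother".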

Applying Blur to the Digit Recognizer
When I saw this, I thought of the digit recognition problem. Our data was simply an array of numbers. What would happen if I ran a similar "blur" over the digit data?

Note: this code is available in the "BlurClassifier" branch of the "digit-display" project on GitHub: jeremybytes/digit-display "Blur Classifier".

The reason I thought of this is that the current algorithms do strict comparisons between 2 images (one pixel at a time). But if the images are offset (meaning translated horizontally or vertically by several pixels), then the current recognizers would not recognize them as a match. If I added a "blur", then it's possible that it would account for situations like this.

Blurring the Data
Here's my function to blur the data that we have:


This is a bit more complex than the function we have above. That's because we're really dealing with 2-dimensional data. Each pixel has 8 adjacent pixels (including the row above and below).

I won't go into the details here. I skipped over the edges to make things a bit simpler, and I also weighted the "center" pixel so that it was averaged in 4 times more than the other pixels.
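Here's a rough Python sketch of that idea (my approximation; the real F# version lives in the GitHub branch linked above). It weights the center pixel 4x and leaves the edges untouched:

```python
def blur_image(pixels, width):
    """Blur a flat array that represents a width x height image.
    Each interior pixel becomes a weighted average of itself
    (weight 4) and its 8 neighbors (weight 1 each); edge pixels
    are skipped to keep the sketch simple."""
    height = len(pixels) // width
    result = list(pixels)
    for row in range(1, height - 1):
        for col in range(1, width - 1):
            i = row * width + col
            neighbors = [pixels[(row + dr) * width + (col + dc)]
                         for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                         if not (dr == 0 and dc == 0)]
            result[i] = (pixels[i] * 4 + sum(neighbors)) / 12
    return result
```

A single bright pixel "bleeds" into its 8 neighbors after one pass, which is exactly the tolerance for slightly offset digits that I was after.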

The New Distance Function
With this in place, I could create a new "distance" function:


This takes 2 pixel arrays, blurs them, and then passes them to our Manhattan Distance function that we already have in place. This means that we can do a direct comparison between our Manhattan Distance recognizer and our new Blur Distance recognizer.
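In Python terms, the composition looks something like this (a sketch, assuming a `blur` function like the one above):

```python
def manhattan_distance(a, b):
    """Sum of absolute per-pixel differences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def blur_distance(a, b, blur):
    """Blur both pixel arrays first, then reuse the existing
    Manhattan distance on the blurred versions."""
    return manhattan_distance(blur(a), blur(b))
```

Passing a do-nothing blur degenerates to plain Manhattan distance, which is what makes the two classifiers directly comparable.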

The Results
Unfortunately, the results were less than stellar. Here's the output using our Digit Display application:


Note: When comparing the results, the numbers aren't in the same order due to the parallelization in the application. But they should be in the same general area in both sets of data.

There is both good and bad in the results. The good news is that we correctly identified several of the digits that the Manhattan Classifier got wrong.

The bad news is that there are new errors that the original classifier got right. But even with the new errors, it didn't perform any "worse" overall than the original. That tells me that there may be some good things that we can grab from this technique.

But now let's look at another approach.

Adding Some Weight
The other idea that I came up with had to do with how the "best" match was selected. Here's the basic function:


This runs the "distance" function (the "dist" right in the middle) to compare our target item against every item in the training set. In the distance calculation, smaller is better, so this just takes the smallest one that it can find.
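That basic function is a 1-nearest-neighbor lookup; in Python it's roughly this (a sketch with a toy training set; the real code is F# and operates on full pixel arrays):

```python
def classify(target, training_set, dist):
    """1-nearest-neighbor: score the target against every training
    sample and return the label of the single closest match."""
    best = min(training_set, key=lambda s: dist(s["pixels"], target))
    return best["label"]

manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
training = [{"label": 0, "pixels": [0, 0]},
            {"label": 1, "pixels": [9, 9]}]
```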

But the "best" match isn't always the correct one. So I came up with the idea of looking at the 5 closest matches and taking a consensus.

Note: this code is available in the "WeightedClassification" branch of the "digit-display" project on GitHub: jeremybytes/digit-display "Weighted Classification".

Here's that function:


This has quite a few steps to it. There's probably a much shorter way of doing this, but this makes it easy to run step-by-step using F# Interactive.

Instead of pulling the smallest value (using "minBy" in the original), it gets the 5 smallest values. It looks something like this (there's some bits left out to make it more readable):


Then it counts up how many of each value. In this case, we have three 6s and two 5s. Then it pulls out the one with the most votes in the list. (And 6 is correct in this case.)
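The same consensus idea can be sketched in Python (my approximation of the F# in the branch; `Counter` does the vote-counting):

```python
from collections import Counter

def classify_by_consensus(target, training_set, dist, k=5):
    """Take the k closest training samples and return the label
    that appears most often among them (a simple majority vote)."""
    closest = sorted(training_set,
                     key=lambda s: dist(s["pixels"], target))[:k]
    votes = Counter(s["label"] for s in closest)
    return votes.most_common(1)[0][0]

manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
# Five nearest neighbors: three 6s and two 5s -> consensus says 6.
training = [{"label": 6, "pixels": [1]}, {"label": 5, "pixels": [2]},
            {"label": 6, "pixels": [3]}, {"label": 5, "pixels": [4]},
            {"label": 6, "pixels": [5]}]
```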

To put this into the application, I composed the functions a bit differently to come up with a "weighted" classifier that still used the Manhattan Distance.

The results were not very good:


This actually makes things less accurate overall. But there are some promising items in these results.

First, several of the items that the standard Manhattan Classifier got wrong were correctly identified by the weighted classifier. This did reinforce that the smallest number was not always the correct number.

But there were also a lot of items that this new classifier identified incorrectly. So overall, the performance was worse than the original.

More Refinement
Although this looks like a failure, I think I'm actually headed in the right direction. One thing that I can do to make this more accurate is to add a true "weight" to the calculation. Here's another example from our current approach:


If we look at these values, the distance calculations are fairly close together (within about 1500 of each other). In this case, we can pretty confidently take the one with the "most" values (which is 2 in this case).

But compare that to this:


Here we have a much bigger gap between our best value and our worst value (over 5000). And there is even a big gap between the first value and the next best value (over 4000). Because of this, I really want to weight the first value higher. A simple consensus doesn't work in this case (especially since we have a "tie").

So even though we get worse results with the current implementation, I think this really shows some promise.

If I can add some "weight" to each value (rather than simply counting them), I think it can improve the accuracy by eliminating some of the outliers in the data.
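Here's one way that weighting could look (a sketch of my own, assuming inverse-distance votes; the article leaves the exact scheme open):

```python
from collections import defaultdict

def classify_weighted(target, training_set, dist, k=5):
    """Each of the k nearest neighbors votes with weight
    1 / (distance + 1), so one very close match can outvote
    several distant ones (the +1 avoids division by zero)."""
    closest = sorted(training_set,
                     key=lambda s: dist(s["pixels"], target))[:k]
    votes = defaultdict(float)
    for s in closest:
        votes[s["label"]] += 1.0 / (dist(s["pixels"], target) + 1)
    return max(votes, key=votes.get)

manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
# One match at distance 1 outvotes two matches around distance 50,
# even though a plain count would go the other way (2 votes to 1).
training = [{"label": 2, "pixels": [1]},
            {"label": 7, "pixels": [50]},
            {"label": 7, "pixels": [55]}]
```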

Wrap Up
I really like having the visualization for what the machine-learning algorithms are doing. This gives me a good idea of where things are going right and where they are going wrong. This is not something that I could get just from looking at "percentage correct" values.

These two approaches to improving the results didn't have the intended effect. But because we could see where they went right and where they went wrong, it's possible to refine these into something better.

I'll be working on adding actual weights to the weighted classifier. I think this holds the most promise right now. And maybe adding a bit of "blur" will help as well. More experimentation is needed. That means more fun for me to explore!

Happy Coding!