Monday, September 25, 2023

Last Chance: Full Day Workshop on Asynchronous and Parallel Programming in C#

This is the last public workshop I have scheduled on asynchronous programming. Next year, I've got a whole new workshop coming. So if you've been putting off attending, you'd better take the opportunity now!

On Sunday, November 12, 2023, I'll be giving a full day workshop (with hands-on labs) at LIVE! 360 in Orlando, Florida. This is your chance to spend a full day with me and also learn tons of useful tech during the rest of the week.

Event Info:

LIVE! 360 November 12-17, 2023
Royal Pacific Resort at Universal
Orlando, FL
Event Link: https://live360events.com/Events/Orlando-2023/Home.aspx

Use the promo code "Clark" to save $500 off the regular price for 4-, 5-, or 6-day packages (Note: you'll need the 6-day package to join me for the full day workshop on Sunday, November 12). Here's a direct link to registration that includes the promo code: bit.ly/3LuBLrd

Read on to see what we'll learn in the workshop.

Hands-on Lab: Asynchronous and Parallel Programming in C#

11/12/2023 9:00 a.m. - 6:00 p.m.
Level: Intermediate

Asynchronous programming is a critical skill to take full advantage of today's multi-core systems. But async programming brings its own set of issues. In this workshop, we'll work through some of those issues and get comfortable using various parts of the .NET Task Parallel Library (TPL).

We'll start by calling asynchronous methods using the Task-based Asynchronous Pattern (TAP), including how to handle exceptions and cancellation. With this in hand, we'll look at creating our own asynchronous methods and methods that use asynchronous libraries. Along the way, we'll see how to avoid deadlocks, how to isolate our code for easier async, and why it's important to stay away from "async void".

In addition, we'll look at some patterns for running code in parallel, including using Parallel.ForEachAsync, channels, and other techniques. We'll see pros and cons so that we can pick the right pattern for a particular problem.

Throughout the day, we'll go hands-on with lab exercises to put these skills into practice.

Objectives:

  • Use asynchronous methods with Task and await 
  • Create asynchronous methods and libraries 
  • Learn to avoid deadlocks and other pitfalls 
  • Understand different parallel programming techniques


Pre-Requisites:

Basic understanding of C# and object-oriented programming (classes, inheritance, methods, and properties). No prior experience with asynchronous programming is necessary; we'll take care of that as we go.

Attendee Requirements:

  • You must provide your own laptop computer (Windows, Mac, or Linux) for this hands-on lab.
  • All other laptop requirements will be sent to attendees 2 weeks prior to the conference.

Hope to See You There!

This is the last scheduled public asynchronous programming workshop. If you can't make it to this one, I am available for private workshops for your team - customized to be most relevant to the code that you're building. Next year, I've got a whole new workshop coming, so keep watching here (and my website: jeremybytes.com) for future events.

Happy Coding!

Wednesday, August 23, 2023

New Video: 'await' Return Types

A new video is available on my YouTube channel: Why do you have to return a Task when you use "await" in a C# method?. The video is a quick walkthrough of code based on an article from earlier this week (link below).

Whenever we "await" something in a C# method, the return value is automatically wrapped in a Task, and the return type for the method must include the Task as well. This leads to some strange looking code: the code in the method returns one thing (such as a Person object), but the return type for the method returns another (a Task of Person). In this video, we will look at some code to try to understand this a bit better.

The code compares using "await" with what happens if we were to use "Task" manually. The comparison helps my brain process the disconnect between return types. Hopefully it will help you as well.


Article: Why Do You Have to Return "Task" Whenever You "await" Something in a Method in C#?

More videos are on the way.

Happy Coding!

Monday, August 21, 2023

Why Do You Have to Return "Task" Whenever You "await" Something in a Method in C#?

There is something that has always bothered me in C#: Whenever you "await" something in a method, the return value must be wrapped in a Task.

Note: If you prefer video to text, take a look here: YouTube: Why do you have to return a Task when you use await in a C# method?

The Behavior

Let's start with an example:


    public Person GetPerson(int id)
    {
        List<Person> people = GetPeople();
        Person selectedPerson = people.Single(p => p.Id == id);
        return selectedPerson;
    }

    public async Task<Person> GetPersonAsync(int id)
    {
        List<Person> people = await GetPeopleAsync();
        Person selectedPerson = people.Single(p => p.Id == id);
        return selectedPerson;
    }

The first method (GetPerson) does not have any asynchronous code. It calls the "GetPeople" method that returns a List of Person objects. Then it uses a LINQ method (Single) to pick out an individual Person. Lastly it returns that person. As expected, the return type for the "GetPerson" method is "Person".

The second method (GetPersonAsync) does have asynchronous code. It calls the "GetPeopleAsync" method (which returns a Task<List<Person>>) and then awaits the result. This also gives us a List of Person objects. Then it uses the same LINQ method (Single) to get the selected Person. Lastly it returns that person.

But here's where things are a bit odd: even though our return statement ("return selectedPerson") returns a "Person" type, the method itself ("GetPersonAsync") returns "Task<Person>".

A Leaky Abstraction

I love using "await" in my code. It is so much easier than dealing with the underlying Tasks directly, and it handles the 95% scenario for me (meaning, 95% of the time, it does what I need -- I only need to drop back to using Task directly when I need more flexibility).

I also really like how "await" lets me write asynchronous code very similarly to how I write non-asynchronous code. Exceptions are raised as expected, and I can use standard try/catch/finally blocks. For the most part, I do not have to worry about "Task" at all when using "await".

It is a very good abstraction over Task.

But it does "leak" in this one spot. A leaky abstraction is one where the underlying implementation shows through. And the return type of a method that uses "await" is one spot where the underlying "Task" implementation leaks through.

This isn't necessarily a bad thing. All abstractions leak to a certain extent. But this particular one has bugged me for quite a while. And it can be difficult to grasp for developers who may not have worked with Task directly.

A Better Understanding

Most of the descriptions I've seen have just said "If you use 'await' in your method, the return type is automatically wrapped in a Task." And there's not much more in the way of explanation.

To get a better understanding of why things work this way, let's get rid of the abstraction and look at using "Task" directly for this code, building things up one line at a time.

If you would like more information on how to use Task manually (along with how to use 'await'), you can take a look at the resources available here: I'll Get Back to You: Task, Await, and Asynchronous Methods in C#.

Step 1: The Method Signature

Let's start with the signature that we would like to have:


    public Person GetPersonAsyncWithTask(int id)
    {

    }

The idea is that I can pass the "id" of a person to this method and get a Person instance back. So this would be my ideal signature.

Step 2: Call the Async Method

Next, we'll call the "GetPersonAsync" method, but instead of awaiting it, we'll get the actual Task back. When I don't know the specific type that comes back from a method, I'll use "var result" to capture the value and then let Visual Studio help me. Here's that code:


    public Person GetPersonAsyncWithTask(int id)
    {
        var result = GetPeopleAsync();
    }

If we hover the cursor over "var", Visual Studio tells us the type we can expect:



This tells us that "result" is "Task<TResult>" and that "TResult is List<Person>". This means that "GetPeopleAsync" returns "Task<List<Person>>". So let's update our code to be more explict:


    public Person GetPersonAsyncWithTask(int id)
    {
        Task<List<Person>> peopleTask = GetPeopleAsync();
    }

Now we have our explicit type, along with a better name for the result: peopleTask.

Step 3: Add a Continuation

The next step is to add a continuation to the Task. By adding a continuation, we are telling our code that after the Task is complete, we would like to "continue" by doing something else.

This is done with the "ContinueWith" method on the task:


    public Person GetPersonAsyncWithTask(int id)
    {
        Task<List<Person>> peopleTask = GetPeopleAsync();
        peopleTask.ContinueWith(task =>
        {

        });        
    }

The "ContinueWith" method takes a delegate (commonly inlined using a lambda expression). In this case, the delegate takes the "peopleTask" as a parameter (which we'll call "task" in the continuation).

For more information on delegates and lambda expressions, you can take a look at the resources available here: Get Func-y: Understanding Delegates in C#.

Step 4: Fill in the Continuation Code

The next step is to fill in the body of our continuation. This is basically "the rest of the method" that we had in our non-asynchronous version:


    public Person GetPersonAsyncWithTask(int id)
    {
        Task<List<Person>> peopleTask = GetPeopleAsync();
        peopleTask.ContinueWith(task =>
        {
            List<Person> people = task.Result;
            Person selectedPerson = people.Single(p => p.Id == id);
            return selectedPerson;
        });        
    }

The first line in the continuation takes the "Result" property of the task (which is the List<Person>) and assigns it to a variable with a friendlier name: "people". Then we use the "Single" method like we did above to get an individual record from the list. Then we return that selected "Person" object.

But Now We Have a Problem

But now we have a problem: we want to return the "selectedPerson" from the "GetPersonAsyncWithTask" method, but it is being returned inside the continuation of the Task instead.

How do we get this out?

It turns out that "ContinueWith" returns a value. Let's use the same technique we used above to figure out what that is.

Step 5: Getting a Return from ContinueWith


    public Person GetPersonAsyncWithTask(int id)
    {
        Task<List<Person>> peopleTask = GetPeopleAsync();
        var result = peopleTask.ContinueWith(task =>
        {
            List<Person> people = task.Result;
            Person selectedPerson = people.Single(p => p.Id == id);
            return selectedPerson;
        });        
    }

Here we have added "var result = " in front of the call to "peopleTask.ContinueWith". Then if we hover the mouse over "var", we see that this is "Task<TResult>" and "TResult is Person". So this tells us that "ContinueWith" returns a "Task<Person>".

So let's be more explicit with our variable:


    public Person GetPersonAsyncWithTask(int id)
    {
        Task<List<Person>> peopleTask = GetPeopleAsync();
        Task<Person> result = peopleTask.ContinueWith(task =>
        {
            List<Person> people = task.Result;
            Person selectedPerson = people.Single(p => p.Id == id);
            return selectedPerson;
        });        
    }

Now our "result" variable is specifically typed as "Task<Person>".

Step 6: Return the Result

The last step is to return the result:


    public Person GetPersonAsyncWithTask(int id)
    {
        Task<List<Person>> peopleTask = GetPeopleAsync();
        Task<Person> result = peopleTask.ContinueWith(task =>
        {
            List<Person> people = task.Result;
            Person selectedPerson = people.Single(p => p.Id == id);
            return selectedPerson;
        });
        return result;
    }

But we can't stop here. Our return types do not match. The method signature (GetPersonAsyncWithTask) says that it returns a "Person", but the value we actually return is a "Task<Person>".

So we need to update our method signature:


    public Task<Person> GetPersonAsyncWithTask(int id)
    {
        Task<List<Person>> peopleTask = GetPeopleAsync();
        Task<Person> result = peopleTask.ContinueWith(task =>
        {
            List<Person> people = task.Result;
            Person selectedPerson = people.Single(p => p.Id == id);
            return selectedPerson;
        });
        return result;
    }

Now the method returns a "Task<Person>". And this is what we want. If someone awaits what comes back from this method, they will get a "Person" object back. In addition, someone could take this task and set up their own continuation. This is just the nature of Task and how we set up asynchronous code using Task.
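
As a quick sketch of what that looks like from the calling side (the "ShowPersonAsync" caller here is hypothetical, not part of the original sample):

    public async Task ShowPersonAsync(int id)
    {
        // Awaiting the Task<Person> unwraps it and gives us the Person.
        Person person = await GetPersonAsyncWithTask(id);
        Console.WriteLine(person);

        // Alternately, the caller could attach its own continuation to the Task.
        Task<Person> personTask = GetPersonAsyncWithTask(id);
        personTask.ContinueWith(task => Console.WriteLine(task.Result));
    }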

Back to the Comparison

So let's go back to our method comparison. But instead of comparing a method with "await" to a non-asynchronous method, let's compare a method with "await" to a method that handles the asynchronous Task manually.


    public async Task<Person> GetPersonAsync(int id)
    {
        List<Person> people = await GetPeopleAsync();
        Person selectedPerson = people.Single(p => p.Id == id);
        return selectedPerson;
    }

    public Task<Person> GetPersonAsyncWithTask(int id)
    {
        Task<List<Person>> peopleTask = GetPeopleAsync();
        Task<Person> result = peopleTask.ContinueWith(task =>
        {
            List<Person> people = task.Result;
            Person selectedPerson = people.Single(p => p.Id == id);
            return selectedPerson;
        });
        return result;
    }

Here's how we can think of this code now (and the compiler magic that happens behind it): anything after the "await" is automatically put into a continuation. What the compiler does is much more complicated, but this is how we can think of it from the programmer's perspective.

If we think about it this way, it's easier to see why the method returns "Task<Person>" instead of just "Person".

"await" is Pretty Magical

When we use "await" in our code, magical things happen behind the scenes. (And I'm saying that as a good thing.) The compiler gets to figure out all of the details of waiting for a Task to complete and what to do next. When I have this type of scenario (the 95% scenario), then it is pretty amazing. It saves me a lot of hassle and let's me work with code that looks more "normal".

The abstraction that "await" gives us is not perfect, though. It does "leak" a little bit. Whenever we "await" something in a method, the return value gets automatically wrapped in a Task. This means that we do need to change the return type of the method to Task. But it is a small price to pay for the ease of using "await".

Wrap Up

I've always had a bit of trouble with having to return a Task from any method that "awaits" something. I mean, I knew that I had to do it, but there was always a bit of cognitive dissonance with saying "the code in this method returns 'Person', but the method itself returns 'Task<Person>'."

By working through this manually, my brain is a little more comfortable. I have a better understanding of what is happening behind the "await", and I can see that if we were to do everything manually, we would end up returning a Task. So it makes sense that after we add the magic of "await", the method will still need to return a Task.

I hope that this has helped you understand things a bit as well. If you have any questions or other ways of looking at this, feel free to leave a comment.

Happy Coding!

Friday, August 18, 2023

New Video: Nullable Reference Types and Null Operators in C#

A new video is available on my YouTube channel: Nullable Reference Types and Null Operators in C#. The video is a quick walkthrough of code based on a series of articles I did last year (links below).

Nullable Reference Types and Null Operators in C#

New projects in C# have nullable reference types enabled by default. This helps make the intent of our code more clear, and we can catch potential null references before they happen. But things can get confusing, particularly when migrating existing projects. In this video, we will look at the safeguards that nullability provides as well as the problems we still need to watch out for ourselves. In addition, we will learn about the various null operators in C# (including null conditional, null coalescing, and null forgiving operators). These can make our code more expressive and safe.

The code shows an existing application that can get a null reference exception at runtime. Then we enable nullable reference types to see what that feature does and does not do for us. Then we walk through the potential null warnings and use the various null operators to take care of them.

All in under 13 minutes!
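
As a rough illustration of those operators (not the exact code from the video; the "GetPersonOrNull" method and "Name" property here are just stand-ins):

    Person? person = GetPersonOrNull();

    // Null conditional (?.): only accesses Name if person is not null;
    // otherwise the whole expression is null.
    string? name = person?.Name;

    // Null coalescing (??): supplies a fallback when the left side is null.
    string displayName = person?.Name ?? "Unknown";

    // Null forgiving (!): tells the compiler "trust me, this is not null here".
    // This silences the warning but does not protect us at runtime.
    string definitelyName = person!.Name;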


The video is based on a series of articles. If you prefer text (or just want more detail), you can take a look at these:

More videos are on the way.

Happy Coding!

Thursday, July 6, 2023

New Video: Fixing Integer Overflow in C# with "checked"

My YouTube channel has been neglected for a while, and I'd like to start fixing that. So here's a new video: Fixing Integer Overflow in C# with "checked".

Fixing Integer Overflow in C# with "checked"

C# integers overflow by default, meaning if you go past the biggest value of the integer it will wrap to the smallest value. In this video, we will look at how we can turn this behavior into an error by using a "checked" statement, a project-level setting, or a "checked" expression.

This is a fairly quick topic -- just 4 minutes for the basics and a few more minutes of bonus content.

If you prefer reading, you can check out the blog article here: Checking for Overflow in C#.

Let me know if you like the format (or not) by dropping a note in the comments. I am planning on refreshing some of the older topics as well as covering a few new ones.

If you find this video helpful, be sure to pass it along.


Happy Coding!


Wednesday, February 22, 2023

C# "var" with a Reference Type is Always Nullable

As an addition to the series on nullability in C#, we should look at the "var" keyword. This is because "var" behaves a little differently than it did before nullable reference types. Specifically, when nullability is enabled, using "var" with a reference type always results in a nullable reference type.

Articles

The source code for this article series can be found on GitHub: https://github.com/jeremybytes/nullability-in-csharp.

The Short Version

Using "var" with a reference type always results in a nullable reference type.

    var people = new List<Person> {...}
When we hover over the "people" variable name, the pop-up shows us the type is nullable:
    (local variable) List<Person>? people
So even though the "people" variable is immediately assigned to using a constructor, the compiler still marks this as nullable.

Let's take a look at how this is a little different than it was before.

Before Nullable Reference Types

Before nullable reference types, we could expect the type of "var" to be the same as the type of whatever is assigned to the variable.

Consider the following code:
    Person person1 = GetPersonById(1);
    var person2 = GetPersonById(2);
In this code, both "person1" and "person2" have the same type: "Person" (assuming that this is the type that comes back from "GetPersonById").

We can verify this by hovering over the variable in Visual Studio. First "person1":

    (local variable) Person person1
This shows us that the type of "person1" is "Person".

Now "person2":
    (local variable) Person person2
This shows us that the type of "person2" is also "Person".

The lines of code are equivalent. "var" just acts as a shortcut here. And for those of us who have been coding with C# for a while, this is what we were used to.

If you would like a closer look at "var", you can take a look at "Demystifying the 'var' Keyword in C#".

After Nullable Reference Types

But things are a bit different when nullable reference types are enabled.

Here is the same code:
    Person person1 = GetPersonById(1);
    var person2 = GetPersonById(2);
When we hover over the "person1" variable, Visual Studio tells us that this is a "Person" type (what we expected.
    (local variable) Person person1
When we hover over the "person2" variable, Visual Studio tells us that this is a "Person?" type -- meaning it is a nullable Person.
    (local variable) Person? person2
So this shows us that the type is different when we use "var". In the code above, "GetPersonById" returns a non-nullable Person. But as we saw in the first article in the series, that is not something that we can rely on at runtime.

The Same with Constructors

You might think that this behavior only applies when we assign a variable based on a method or function call, but the behavior is the same when we use a constructor during assignment.

In the following code, we create a variable called "people" and initialize it using the constructor for "List<Person>":

    var people = new List<Person> {...}
When we hover over the "people" variable name, the pop-up shows us the type is nullable:
    (local variable) List<Person>? people
So even though the "people" variable is assigned based on a constructor call, the compiler still marks this as nullable.

Confusing Tooling

One reason why this was surprising to me is that I generally use the Visual Studio tooling a bit differently than I have used it here. And the pop-ups show different things depending on what we are looking at.

Hover over "var"
Normally when I want to look at the type of a "var" variable, I hover over the "var" keyword. Here is the result:
class System.Collections.Generic.List<T>
...
T is Person

This tells us that the type of "var" is "List<T>" where "T is Person". This means "List<Person>". Notice that there is no mention of nullability here.

Hover over the variable name
However, if we hover over the name of the variable itself, we get the actual type:

    (local variable) List<Person>? people
As we've seen above, this shows that our variable is, in fact, nullable.

The Documentation

There's documentation on this behavior on the Microsoft Learn Site: Declaration Types: Implicitly Typed Variables. This gives some insight into the behavior:


"When var is used with nullable reference types enabled, it always implies a nullable reference type even if the expression type isn't nullable. The compiler's null state analysis protects against dereferencing a potential null value. If the variable is never assigned to an expression that maybe null, the compiler won't emit any warnings. If you assign the variable to an expression that might be null, you must test that it isn't null before dereferencing it to avoid any warnings."
This tells us that the behavior supports the compiler messages about potential null values. Because there is so much existing code that uses var, there was a lot of potential to overwhelm devs with "potential null" messages when nullability is enabled. To alleviate that, "var" was made nullable.
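
Here is a small sketch of that in action (the "GetPeopleOrNull" method is hypothetical):

    // "people" is typed as List<Person>? because of "var", but the
    // compiler's null state analysis knows it is not null here,
    // so there is no warning on this line:
    var people = new List<Person>();
    Console.WriteLine(people.Count);

    // If we assign an expression that might be null, the analysis tracks
    // that, and we need a null check before dereferencing to avoid a warning:
    people = GetPeopleOrNull();
    if (people != null)
    {
        Console.WriteLine(people.Count);
    }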

Should I Worry?

The next question is whether we need to worry about this behavior. The answer is: probably not. In most of our code, we will probably not notice the difference. Even though the "var" types are technically nullable, they will most likely not be null (since the variables get an initial assignment).

But it may be something to keep in the back of your mind when working with "var" and nullable reference types. If you are used to using "var" in a lot of different situations, you just need to be aware that those variables are now nullable. I know of at least 1 developer who did run into an issue with this, but I have not heard about widespread concerns.

Wrap Up

It's always "fun" when behavior of code changes from what we are used to. I did not know that this behavior existed for a long time -- not until after I saw some high-profile folks talking about it online about 2 months ago. I finally put this article together because I am working on a presentation about nullable reference types that I'll be giving in a couple months.

My use of "var" is not changing due to this. Historically, if the type was clear based on the assignment (such as a constructor), then I would tend to use "var". If the type was not clear based on the assignment (such as coming from a not-too-well-named method call), then I tend to be more explicit it my types.

Of course, a lot of this changed with "target-typed new" expressions (from C# 9). But I won't go into my thoughts on that right now.

Happy Coding!

Tuesday, January 17, 2023

Checking for Overflow in C#

By default, integral values (such as int, uint, and long) do not throw an exception when they overflow. Instead, they "wrap" -- and the wrapped value is probably not the value that we want. But we can change this behavior so that an exception is thrown instead.

Short version:

Use a "checked" statement to throw an overflow exception when an integral overflow (or underflow) occurs.

As an alternate, there is also a project property that we can set: CheckForOverflowUnderflow.

Note: This behavior applies to "enum" and "char", which are also integral types in C#.

Let's take a closer look at this.

Default Behavior

Let's start by looking at the default overflow behavior for integer (Int32) -- this applies to other integral types as well.

    int number = int.MaxValue;
    Console.WriteLine($"number = {number}");

    number++;
    Console.WriteLine($"number = {number}");

This creates a new integer variable ("number") and sets its value to the maximum value that a 32-bit integer can hold.

When we increment that value (using the ++ operator), it goes past that maximum value. But instead of throwing an error, the value "wraps" to the lowest integer value.

Here is the output of the above code:

    number = 2147483647
    number = -2147483648

This is probably not the behavior that we want.

Using a "checked" Statement

One way we can fix this is to use a "checked" statement. This consists of the "checked" operator and a code block.

    int number = int.MaxValue;
    Console.WriteLine($"number = {number}");
    checked
    {
        number++;
    }
    Console.WriteLine($"number = {number}");
  

The result of this is that the increment operation is checked to see if it will overflow. If it does, it throws an OverflowException.

Here's the exception if we run this code from Visual Studio:

    System.OverflowException: 'Arithmetic operation resulted in an overflow.'

Let's handle the exception:

    try
    {
        int number = int.MaxValue;
        Console.WriteLine($"number = {number}");

        checked
        {
            number++;
        }
        Console.WriteLine($"number = {number}");
    }
    catch (OverflowException ex)
    {
        Console.WriteLine($"OVERFLOW: {ex.Message}");
    }

Now our output looks like this:

    number = 2147483647
    OVERFLOW: Arithmetic operation resulted in an overflow.

If we overflow an integer value, it probably means that we need to make some changes to the code (such as using a larger type). But the good news is that an overflow exception makes sure that we do not unintentionally use an invalid value.
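
As a side note, "checked" also comes in an expression form that covers a single operation (a quick sketch; this form is not part of the sample code above):

    int number = int.MaxValue;

    // Expression form: only this one calculation is checked.
    // This throws an OverflowException instead of wrapping.
    int next = checked(number + 1);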

Using "CheckForOverflowUnderflow"

We may want to apply overflow checks through our entire project. Instead of using individual "checked" statements, we can also set a property on the project: "CheckForOverflowUnderflow".

To see this in action, we will remove the "checked" statement from our code:

    try
    {
        int number = int.MaxValue;
        Console.WriteLine($"number = {number}");

        number++;
        Console.WriteLine($"number = {number}");
    }
    catch (OverflowException ex)
    {
        Console.WriteLine($"OVERFLOW: {ex.Message}");
    }

The code does not throw an exception, and we are back to the wrapped value:

    number = 2147483647
    number = -2147483648

In the project, we can add the "CheckForOverflowUnderflow" property. Here is the project file for our basic console application:

    <Project Sdk="Microsoft.NET.Sdk">

      <PropertyGroup>
        <OutputType>Exe</OutputType>
        <TargetFramework>net7.0</TargetFramework>
        <RootNamespace>check_overflow</RootNamespace>
        <ImplicitUsings>enable</ImplicitUsings>
        <Nullable>enable</Nullable>
        <CheckForOverflowUnderflow>true</CheckForOverflowUnderflow>
      </PropertyGroup>

    </Project>

We've set the "CheckForOverflowUnderflow" property to "true". Now when the application runs, the exception is thrown.

    number = 2147483647
    OVERFLOW: Arithmetic operation resulted in an overflow.

"unchecked"

In addition to "checked", there is also an "unchecked" operator. As you might imagine, this does not check for overflow.

So with our project set to "CheckForOverflowUnderflow", we can add an "unchecked" block that will ignore the project setting.

    try
    {
        int number = int.MaxValue;
        Console.WriteLine($"number = {number}");

        unchecked
        {
            number++;
        }
        Console.WriteLine($"number = {number}");
    }
    catch (OverflowException ex)
    {
        Console.WriteLine($"OVERFLOW: {ex.Message}");
    }

The code does not throw an exception, and we are back to the wrapped value:

    number = 2147483647
    number = -2147483648

Wrap Up

Normally, I do not need to worry about overflow or underflow in my code; it's not something that comes up in my applications very often.

One exception is with Fibonacci sequences. I've written a few articles involving Fibonacci sequences (including "Implementing a Fibonacci Sequence with Value Tuples in C# 7" and "Coding Practice: Learning Rust with Fibonacci Numbers"). Since each number is the sum of the two before it, the sequence grows exponentially and overflows a 32-bit integer very quickly (around the 47th number in the sequence). This is one place where I generally use a larger type (like a long) and also a "checked" statement to make sure I do not end up using invalid values in my code somewhere.
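
Here's a rough sketch of that approach (not the code from those articles):

    try
    {
        // Fibonacci with a long and a "checked" statement, so an overflow
        // throws instead of silently wrapping to a negative value.
        long previous = 0;
        long current = 1;
        for (int i = 0; i < 100; i++)
        {
            Console.WriteLine(current);
            checked
            {
                long next = previous + current;
                previous = current;
                current = next;
            }
        }
    }
    catch (OverflowException)
    {
        Console.WriteLine("The next Fibonacci number is too big for a long.");
    }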

"checked" statements do come with a cost. There is a little overhead that is added in the arithmetic operations. Because of this, I generally leave projects with the default setting ("unchecked" for the project), and then use targeted "checked" statements where I need them.

It's always interesting to find things like this in the C# language. I don't often need it, but it's really good to have it when I do.

Happy Coding!