Monday, January 27, 2020

Dynamically Loading Types in .NET Core with a Custom Assembly Load Context

A few weeks ago, we looked at dynamically loading .NET Standard assemblies in .NET Core. This approach loaded the assemblies into the default assembly load context. Another option is to create a custom assembly load context, and that's what we'll look at today.

To use this solution, a few changes need to be made to the projects we saw earlier:
  • Create a custom assembly load context
  • Switch from using "Type.GetType" to looking into an assembly more directly
  • Change the dynamically loaded assemblies from .NET Standard to .NET Core
The end result is that we will be able to load a type at run time based on configuration (like the previous solution). After we look at the code, we will look at the differences and see why we might choose one or the other.

The code for this article is available on GitHub: jeremybytes/understanding-interfaces-core30, specifically in the "completed/04-DynamicLoading-Plugin" folder. The code shown in the previous article is in the "completed/04-DynamicLoading" (no plugin) folder if you would like to compare the two.

I won't go into the use case for this application. You can check the prior article's "Why Dynamically Load Types?" section to get an overview of that. The short version is that different clients use different data storage schemes, and we would like the application to be flexible enough to attach to a data store without needing to be recompiled.

Plugin Solution
The implementation shown in this article is based on a tutorial on the Microsoft docs site: Create a .NET Core application with plugins. This was mentioned in the prior article. I did not originally take this approach because it seemed a bit more complex than what I needed for the particular use case. There are some advantages, which is why we are looking at it more closely. In the end, we can decide whether those advantages warrant the added complexity.

Data Readers
The data readers are the objects that we want to select and load at run time. In this case, we have data readers that work with 3 different data sources: (1) web service, (2) text file, and (3) database.

Dependencies
It turns out that loading a single assembly is fairly easy. Things get more complicated when that assembly has its own dependencies. For example, the data reader for the web service uses Newtonsoft.Json to parse data. We need to make sure the appropriate assembly is loaded for that as well.

For this, we'll rely on the "deps.json" file that gets generated when we build the data reader class library (either .NET Standard or .NET Core). This file lives alongside the assembly itself (PersonReader.Service.dll). Here's an excerpt from PersonReader.Service.deps.json (there's a copy of this in the repository: PersonReader.Service.deps.json file):


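The exact contents depend on the build, but the relevant portion looks something like this (a sketch -- the version numbers shown are illustrative):

    {
      "targets": {
        ".NETCoreApp,Version=v3.1": {
          "PersonReader.Service/1.0.0": {
            "dependencies": {
              "Newtonsoft.Json": "12.0.3",
              "PersonReader.Interface": "1.0.0"
            },
            "runtime": {
              "PersonReader.Service.dll": {}
            }
          }
        }
      }
    }
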
This shows that we have dependencies on "Newtonsoft.Json" and "PersonReader.Interface". And it also gives some information regarding the Newtonsoft.Json assembly that we need.

A bit later, we will look at the data reader projects in more detail, and we'll go a bit deeper into these dependencies. What we care about for now is that the list of dependencies is available at run time. Now we will head to the assembly load context to see how those are loaded.

Custom Assembly Load Context
The big part of the solution is a custom assembly load context. An assembly load context gives us control over how assemblies are loaded. In this case, we can give instructions on where to find the assemblies and their dependencies.

In addition, this gives us isolation. We can load multiple versions of the same assembly into our application as long as they are in different contexts. For example, the default context can have one version of Newtonsoft.Json, and another context can have a different version. I've had this problem in the past (where the easiest solution was just to change the version so everything used the same one).

Here's the custom context (from the ReaderLoadContext.cs file in the PeopleViewer project):


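Reconstructed here as a sketch (it follows the PluginLoadContext from the plugin tutorial; the repository version may differ in small details):

    using System;
    using System.Reflection;
    using System.Runtime.Loader;

    class ReaderLoadContext : AssemblyLoadContext
    {
        private AssemblyDependencyResolver _resolver;

        public ReaderLoadContext(string readerLocation)
        {
            // The resolver uses the reader assembly's location (and its
            // .deps.json file) to figure out where dependencies live.
            _resolver = new AssemblyDependencyResolver(readerLocation);
        }

        // The Load and LoadUnmanagedDll overrides are shown below.
    }
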
This class has an AssemblyDependencyResolver; this will help us load the dependencies for the data readers that we saw above.

The constructor takes the location of the data reader assembly. This is the actual assembly name, so it would look something like "C:\application_path\ReaderAssemblies\PersonReader.Service.dll". This value gets passed into the AssemblyDependencyResolver. This gives a base location to look for dependencies for an assembly.

The Load and LoadUnmanagedDll methods are overrides of the base class (AssemblyLoadContext). Here is the code for the Load method (from the ReaderLoadContext.cs file):


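As with the class above, this is a sketch that follows the tutorial's version:

    protected override Assembly Load(AssemblyName assemblyName)
    {
        // Ask the resolver for the path to the requested assembly,
        // then load it into this context. Returning null falls back
        // to the default load context.
        string assemblyPath = _resolver.ResolveAssemblyToPath(assemblyName);
        if (assemblyPath != null)
        {
            return LoadFromAssemblyPath(assemblyPath);
        }
        return null;
    }
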
The "Load" method takes an "AssemblyName" object as its parameter. We will see what this is when we call the method later on.

The "Load" method uses the dependency resolver to get the assembly path and then loads the assembly. In addition to the assembly itself, the dependencies are also loaded.

The "LoadUnmanagedDll" method does something similar for unmanaged DLLs. Here is that method (also from the ReaderLoadContext.cs file):


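Again, a sketch that follows the tutorial's version:

    protected override IntPtr LoadUnmanagedDll(string unmanagedDllName)
    {
        // Same idea for native libraries: resolve a path and load it.
        // IntPtr.Zero falls back to the default loading behavior.
        string libraryPath = _resolver.ResolveUnmanagedDllToPath(unmanagedDllName);
        if (libraryPath != null)
        {
            return LoadUnmanagedDllFromPath(libraryPath);
        }
        return IntPtr.Zero;
    }
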
Talking about loading unmanaged DLLs is outside the scope of this article (and also outside of my area of knowledge). This code came from the example assembly load context from the plugin tutorial mentioned earlier.

Loading the Data Readers
Now that we have the custom assembly load context, we can look at the code that uses the context and loads up a data reader. For this we have a factory class (ReaderFactory) with a "GetReader" method.

Here is the "GetReader" method (from the ReaderFactory.cs file):


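Reconstructed as a sketch from the walkthrough below (the configuration key names, particularly "ReaderType", are assumptions -- check the repository for the exact code):

    // using System.Configuration; using System.IO;
    // using System.Linq; using System.Reflection;
    private static IPersonReader cachedReader;

    public IPersonReader GetReader()
    {
        if (cachedReader != null)
            return cachedReader;

        // Build the full path to the data reader assembly
        string readerAssemblyName = ConfigurationManager.AppSettings["ReaderAssembly"];
        string readerLocation = AppDomain.CurrentDomain.BaseDirectory
            + "ReaderAssemblies" + Path.DirectorySeparatorChar + readerAssemblyName;

        // Load the assembly (and its dependencies) into the custom context
        var loadContext = new ReaderLoadContext(readerLocation);
        var assemblyName = new AssemblyName(Path.GetFileNameWithoutExtension(readerLocation));
        Assembly readerAssembly = loadContext.LoadFromAssemblyName(assemblyName);

        // Find the data reader type and create an instance
        string readerTypeName = ConfigurationManager.AppSettings["ReaderType"];
        Type readerType = readerAssembly.ExportedTypes
            .FirstOrDefault(t => t.FullName == readerTypeName);

        cachedReader = Activator.CreateInstance(readerType) as IPersonReader;
        return cachedReader;
    }
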
Let's go through this step by step.

The first 2 lines check to see if we already have a data reader cached. If so, we can just use that. We'll loop back around to caching after looking at the body of this method.

Loading the Assembly
The first step in creating a data reader is to load the assembly. Looking back to the custom assembly load context (ReaderLoadContext), we need 2 pieces of information. (1) The constructor needs the assembly file name with the path. (2) The "Load" method needs the assembly name.

In the App.config file for the application, we have the assembly name. This is in the "ReaderAssembly" setting (from the App.config file of the PeopleViewer project):


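A sketch of the relevant appSettings section (the "ReaderType" key shown here is an assumption based on the discussion further down):

    <configuration>
      <appSettings>
        <add key="ReaderAssembly" value="PersonReader.Service.dll"/>
        <add key="ReaderType" value="PersonReader.Service.ServiceReader"/>
      </appSettings>
    </configuration>
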
The "ReaderAssembly" value is the assembly file name without the path. With this, we can construct what we need for the ReaderLoadContext.

Here are a few lines from the GetReader method above (from the ReaderFactory.cs file):


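From the sketch above:

    string readerAssemblyName = ConfigurationManager.AppSettings["ReaderAssembly"];
    string readerLocation = AppDomain.CurrentDomain.BaseDirectory
        + "ReaderAssemblies" + Path.DirectorySeparatorChar + readerAssemblyName;
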
The readerAssemblyName is set to the value from configuration: PersonReader.Service.dll.

The readerLocation variable is set to the path where we expect to find this assembly. It is built by appending several values together. The "AppDomain.CurrentDomain.BaseDirectory" gets us the location of the executable that we are running. So it will be the path to "PeopleViewer.exe" (the application).

Next, we append "ReaderAssemblies". This is the folder where we put the assemblies that we want to dynamically load (we'll see how they get into this folder a bit later).

Then we add a path separator ("\" in Windows) and the assembly name that we pulled from configuration.

The result is that readerLocation is something like "C:\PeopleViewerEXEFolder\ReaderAssemblies\PersonReader.Service.dll".

Using the Assembly Load Context
Now that we have these 2 values, we can use the ReaderLoadContext that we saw earlier. Here are the next few lines of the "GetReader" method (from the ReaderFactory.cs file):


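From the sketch above:

    var loadContext = new ReaderLoadContext(readerLocation);
    var assemblyName = new AssemblyName(Path.GetFileNameWithoutExtension(readerLocation));
    Assembly readerAssembly = loadContext.LoadFromAssemblyName(assemblyName);
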
This creates an instance of ReaderLoadContext using the full path to the assembly file.

Next, we create an AssemblyName object. This code is a bit difficult to read (it was taken from the plugin tutorial), so we'll dissect it a bit.

The "Path.GetFileNameWithoutExtension(readerLocation)" will take the full assembly file name and strip off the path and the .dll extension. So this results in "PersonReader.Service". Using this value as a constructor parameter, we end up with an AssemblyName object.

From here, we call the "LoadFromAssemblyName" method on the load context. This uses the resolver that we created earlier to load the assembly (and its dependencies). It then returns an "Assembly" object that represents the data reader assembly.

We can look into this assembly and pick out the type(s) that we need.

Getting the Data Reader from the Assembly
Here is the rest of the "GetReader" method (from the ReaderFactory.cs file):


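From the sketch above:

    string readerTypeName = ConfigurationManager.AppSettings["ReaderType"];
    Type readerType = readerAssembly.ExportedTypes
        .FirstOrDefault(t => t.FullName == readerTypeName);

    cachedReader = Activator.CreateInstance(readerType) as IPersonReader;
    return cachedReader;
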
This code is similar to the intermediate solution from the prior article (Dynamically Loading .NET Standard Assemblies in .NET Core).

First, we get the name of the type that we want to use from configuration. This is the fully-qualified type name for the data reader. In the sample App.config above, this is "PersonReader.Service.ServiceReader".

The next line uses some LINQ to look at the ExportedTypes of the assembly. The ExportedTypes are all of the publicly accessible types in the assembly. "FirstOrDefault" will search those types and look for one where the FullName property matches our configuration value. If it cannot find the type, it returns null.

Once we have the type of the data reader, we use the Activator to create an instance of that type. At the same time, we cast this to the interface that we need: "IPersonReader".

The last step returns this value.

Caching the Data Reader
Since we are dynamically loading a type (and doing the reflection and assembly loading that goes along with that), we want to do this as few times as possible. In this application, we only need to do it once. So the ReaderFactory class is set up to cache the data reader instance so it can be reused.

Here are the caching bits (from the ReaderFactory.cs file):


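From the sketch above (with the middle of the method elided):

    private static IPersonReader cachedReader;

    public IPersonReader GetReader()
    {
        if (cachedReader != null)
            return cachedReader;

        // ...load the assembly and locate the type (shown earlier)...

        cachedReader = Activator.CreateInstance(readerType) as IPersonReader;
        return cachedReader;
    }
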
This class has a static IPersonReader field to hold the cached instance.

At the top of the GetReader method, we check to see if the cache is populated. If so, then we just return that value.

If it is not populated, then we load the assembly, find the type, and create an instance (the code we saw above). When we create the instance, we assign it to the cache field then return it.

This way, we are always returning a value from the cached field, and it will (hopefully) always be populated.

This is a fairly naive implementation of a cache, but it works fine here. Since there is no way to change the data reader while the application is running, we only need to load this up the first time. Each subsequent call can just use the cached value. We do not need to invalidate or expire the cache while the application is running.

With some creativity, we could probably incorporate the GetReader factory method into a lazy-load object that dynamically loads itself the first time it is used. But we'll save that for another time.

Additional Parts
Between the custom assembly load context and the data reader factory method, we have the core of the solution for dynamically loading a data reader. But there are still a few pieces to fit together:
  • Switching data readers from .NET Standard to .NET Core
  • Getting the dependencies to the output folder
  • Putting the assemblies where the application can find them
Updating the Data Readers
There are a few changes that we need to make to the data readers in order for them to work with this solution. The changes all revolve around making sure that we can load the dependencies for each data reader.

Project File Overview
There are a number of things in the project file that ensure that the assemblies and their dependencies will load properly. Here is the PersonReader.Service.csproj file (from the PersonReader.Service project). The project files for the other data readers look similar.


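A sketch that pulls together the elements discussed below (the package version and relative paths are illustrative):

    <Project Sdk="Microsoft.NET.Sdk">

      <PropertyGroup>
        <TargetFramework>netcoreapp3.1</TargetFramework>
        <CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>
      </PropertyGroup>

      <ItemGroup>
        <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
      </ItemGroup>

      <ItemGroup>
        <ProjectReference Include="..\PersonReader.Interface\PersonReader.Interface.csproj">
          <Private>False</Private>
        </ProjectReference>
      </ItemGroup>

      <Target Name="PostBuild" AfterTargets="PostBuildEvent">
        <Exec Command="xcopy &quot;$(TargetDir)*.*&quot; &quot;$(ProjectDir)..\ReaderAssemblies\&quot; /Y" />
      </Target>

    </Project>
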
We will be looking at these elements individually as we go.

.NET Core vs. .NET Standard
In the example from the earlier article, the data readers are .NET Standard projects. This was primarily done for migration purposes. These data readers were used in a variety of applications, but primarily a WPF client. Prior to .NET Core 3.0, WPF would only work with .NET Framework. To make the transition easier, I moved the data readers to .NET Standard 2.0 libraries. These libraries would still work with the .NET Framework WPF application, but I could also use them in ASP.NET Core applications. .NET Standard is useful for this type of migration or for multi-targeting solutions.

Our current application is .NET Core 3.1 (including the WPF client).

Unfortunately, .NET Standard libraries are not recommended for a dynamic loading solution.

Here is an excerpt from the tutorial to "Create a .NET Core application with plugins", specifically the section "Plugin target framework recommendations":

"Because plugin dependency loading uses the .deps.json file, there is a gotcha related to the plugin's target framework. Specifically, your plugins should target a runtime, such as .NET Core 3.0, instead of a version of .NET Standard. The .deps.json file is generated based on which framework the project targets, and since many .NET Standard-compatible packages ship reference assemblies for building against .NET Standard and implementation assemblies for specific runtimes, the .deps.json may not correctly see implementation assemblies, or it may grab the .NET Standard version of an assembly instead of the .NET Core version you expect." [emphasis mine]
This tells us that instead of using .NET Standard, we should use .NET Core libraries in order to ensure we get the right dependencies for the application. Since we have a .NET Core 3.1 application, the libraries should also be .NET Core.

The change to the libraries is as easy as updating the "TargetFramework" property in the project file.


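From the property group in the project file:

    <PropertyGroup>
      <TargetFramework>netcoreapp3.1</TargetFramework>
    </PropertyGroup>
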
Setting the target framework to "netcoreapp3.1" will generate a .NET Core 3.1 library.

Outputting Dependencies
We want to get the data reader assembly and all of its dependencies into the "ReaderAssemblies" folder that we saw above. One problem is that the default project settings do not do that.

Our project currently has 2 dependencies:


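From the project file (shown here before the "Private" setting that we'll add later; the package version is illustrative):

    <ItemGroup>
      <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
    </ItemGroup>

    <ItemGroup>
      <ProjectReference Include="..\PersonReader.Interface\PersonReader.Interface.csproj" />
    </ItemGroup>
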
The first is a package reference to Newtonsoft.Json; this is a NuGet package that was added to the project.

The second is a project reference to PersonReader.Interface. This is a separate class library in the solution that contains the "IPersonReader" interface -- the abstraction that the application and data readers use.

If we build the class library and look at the output, we find that the package dependencies are not there.


This shows that we have the .dlls for the data reader (PersonReader.Service.dll) and the interface project (PersonReader.Interface.dll). But we do not have Newtonsoft.Json. This means that if we copy this to the "ReaderAssemblies" folder, we would have a problem: a dependency is missing.

We can add a setting to the project file that will change this. In the "PropertyGroup" section, we can add "CopyLocalLockFileAssemblies" and set it to true. Here's what that looks like (also in the PersonReader.Service.csproj file):


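The updated property group:

    <PropertyGroup>
      <TargetFramework>netcoreapp3.1</TargetFramework>
      <CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>
    </PropertyGroup>
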
By setting this property to true and rebuilding, we can see that all of the dependencies are included:


Newtonsoft.Json.dll is now included in the output. This change is more pronounced for the SQL data reader since the output includes the required Entity Framework Core and SQLite assemblies.

Excluding Shared Dependencies
We do have a danger in our current output: the PersonReader.Interface.dll. The problem with this is that we also have this file in the main application (the WPF client). If we load up the assembly multiple times, there is a danger that the application will see the interface (IPersonReader) as two different types. And that would cause all sorts of problems.

The solution to this is to exclude the "PersonReader.Interface.dll" from the output. We can do this in 2 ways (but the results are the same).

Option 1:
In Visual Studio, we can set the "Copy Local" property on the dependency to "No". To do this, find the project reference in Visual Studio:


Right-click on the dependency and choose "Properties". This will open up the Properties window:


Then change the "Copy Local" option to "No". This will keep the files from being copied to the output folder.

Option 2:
The other option is to manually edit the .csproj file. Notice the "ItemGroup" for the project reference (from the PersonReader.Service.csproj file):


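The project reference with the added setting:

    <ItemGroup>
      <ProjectReference Include="..\PersonReader.Interface\PersonReader.Interface.csproj">
        <Private>False</Private>
      </ProjectReference>
    </ItemGroup>
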
This has a "Private" property as part of the project reference. When this is set to "False" (as it is here), it means that this project reference is shared, so we do not need a separate copy of it for ourselves.

Note: if you have the project file open in the editor and make changes to the "Copy Local" property in the Visual Studio Properties window, you will see that setting the property sets the "Private" value in the project file.

For more information on this setting, go to the plugin tutorial and do a search for "Private".

The result is that the "PersonReader.Interface.dll" file is no longer included in the output:


This gives us what we want. We have the data reader assembly as well as the package dependencies, but we do not have the shared assembly.

The last step is to copy these files to an accessible location.

Copying Files to a Shared Location
The last change to the data readers is to copy the output files to a shared location. The plugin tutorial shows a brittle-looking search in relative paths from one project output folder to another project output folder.

I have opted to copy the output files to a "ReaderAssemblies" folder at the solution level. This folder can then be copied to the output folder for the WPF application (this is also what the sample from the prior article does).

The data reader projects have a PostBuild step to copy the output files. Here is the section from the project file (from the PersonReader.Service.csproj file):


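A sketch of that build target (the escaped quotes are the &quot; entities; the exact xcopy switches may differ):

    <Target Name="PostBuild" AfterTargets="PostBuildEvent">
      <Exec Command="xcopy &quot;$(TargetDir)*.*&quot; &quot;$(ProjectDir)..\ReaderAssemblies\&quot; /Y" />
    </Target>
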
Because of the escaped quotes, this is a bit easier to read in the Visual Studio Project Dialog. To get there, right-click on the project in Visual Studio, choose "Properties" and then the "Build Events" tab.


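There, the command reads something like this:

    xcopy "$(TargetDir)*.*" "$(ProjectDir)..\ReaderAssemblies\" /Y
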
This copies all files from the output folder (TargetDir) to the "ReaderAssemblies" folder that is one level up from the project folder (ProjectDir). This is a sibling folder to the data reader projects.


As a side note: I use "ProjectDir" as a reference instead of "SolutionDir" because the solution directory is not available when doing a "dotnet build" of an individual project on the command line.

If you want more information on build events, take a look at Using Build Events in Visual Studio to Make Life Easier.

Copying Files to the PeopleViewer Application
The last piece of the puzzle is to copy the files from the "ReaderAssemblies" folder that we just populated to the output folder for the application.

This is also a post-build step, but it is on the PeopleViewer project. Here is the event (as viewed in Visual Studio):


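A sketch of those commands (the folder locations relative to the project, and the exact xcopy switches, are assumptions based on the description below):

    xcopy "$(ProjectDir)..\AdditionalFiles\*.*" "$(TargetDir)" /Y
    xcopy "$(ProjectDir)..\ReaderAssemblies\*.*" "$(TargetDir)ReaderAssemblies\" /Y
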
This has 2 steps. The first copies the contents of the "AdditionalFiles" folder to the output folder of the application. This folder contains the actual data files: (1) People.txt -- the text file used by the CSV data reader, and (2) People.db -- the database file used by the SQL data reader.

The next step copies the contents of the "ReaderAssemblies" folder to the "ReaderAssemblies" folder in the output. The result is that we have a folder in the output with all of the data reader assemblies:


This is the folder that we reference in the data reader factory method, and it becomes a search folder for the assembly load context that we created.

Build Order
The last piece to bring everything together is controlling the build order. We want to make sure that all of the data reader projects are built (and the files copied to the shared location) before the PeopleViewer application is built (and the files are copied to the output location).

For this, we can set dependencies at the Solution level. In Visual Studio, right-click on the solution, and select "Properties". Then choose "Project Dependencies".


For the "PeopleViewer" project, I have added dependencies to the 3 data reader projects. There are no compile time references, so I had to add these manually. With this setting, all of the data reader projects will be built before the PeopleViewer project.

This will ensure that we have the right versions of files in the right places.

If we are doing individual project builds from the command line, this setting will have no effect, so we would need to make sure that we build the data readers first.

Working Solution
The goal of this type of solution is to be able to deploy just what the client needs.

In this scenario, if the client is using a web service, we would deploy the PeopleViewer executable (and its dependencies) along with a "ReaderAssemblies" folder that contains only the assembly and dependencies for the service data reader.

If we add a new client with different data needs, then we create the data reader and give them the configuration and "ReaderAssemblies" files for their environment.

There is no need to recompile the base application. This makes it much easier to manage builds and handle issues.

Pros and Cons
So what are the pros and cons when we compare this to the previously shown solution? Let's review both solutions.

Other Solution (Default Assembly Load Context)
Here is the "GetReader" method from the other solution (from the ReaderFactory.cs file in other solution):


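A sketch of the shape of that code -- loading into the default context with Assembly.LoadFrom (see the prior article for the actual details; the caching and configuration keys here mirror the sketch above and are assumptions):

    public IPersonReader GetReader()
    {
        string readerAssemblyName = ConfigurationManager.AppSettings["ReaderAssembly"];
        string readerLocation = AppDomain.CurrentDomain.BaseDirectory
            + "ReaderAssemblies" + Path.DirectorySeparatorChar + readerAssemblyName;

        // Familiar .NET Framework-style loading (default load context)
        Assembly readerAssembly = Assembly.LoadFrom(readerLocation);

        string readerTypeName = ConfigurationManager.AppSettings["ReaderType"];
        Type readerType = readerAssembly.ExportedTypes
            .FirstOrDefault(t => t.FullName == readerTypeName);

        return Activator.CreateInstance(readerType) as IPersonReader;
    }
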
This code is very similar to the .NET Framework code. Someone who has used reflection in .NET Framework will be comfortable.

In addition, this works with .NET Standard libraries.

There is a risk of conflicting assemblies (i.e. different versions of the same assembly loaded). For this particular application, the risk is minimal due to the lack of complexity in the application. The core application and data readers are unlikely to need the same dependencies. But this is something to consider.

Pro: Familiar Code
Pro: Works with .NET Standard
Con: Risk of Conflicting Assemblies

This Solution (Custom Assembly Load Context)
Here is the "GetReader" method from this solution (from the ReaderFactory.cs file for this solution):


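The loading portion is where it differs (repeated from the sketch earlier):

    var loadContext = new ReaderLoadContext(readerLocation);
    var assemblyName = new AssemblyName(Path.GetFileNameWithoutExtension(readerLocation));
    Assembly readerAssembly = loadContext.LoadFromAssemblyName(assemblyName);
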
This solution is quite a bit different from what would be done in .NET Framework. The assembly loading bits are new, even for someone with existing experience with reflection.

This solution requires a move to .NET Core libraries. In the future, this won't be an issue since .NET Standard is most useful during migration to .NET Core. But it is a little painful during a step-by-step migration where libraries need to work in both .NET Framework and .NET Core environments.

This solution is a lot safer when it comes to assembly versions. Since the data reader assemblies (and the dependencies for those assemblies) are isolated in their own assembly load context, there is less concern about conflicting versions.

Con: Steeper Learning Curve
Con: Does Not Work with .NET Standard (but won't matter soon)
Pro: Minimal Risk of Conflicting Assemblies

We can use these to weigh the risks for our particular solution.

If I am building something for the outside world, then I would definitely lean toward this solution (custom assembly load context). The assembly safety is worth the extra learning.

If I am building an internal application, then I may opt for the other solution since it is easier to mitigate the risk of conflicting assemblies.

If I am migrating internal applications from .NET Framework to .NET Core, I might use the other solution so that I can maintain one set of .NET Standard libraries until the migration is complete.
For greenfield applications, I am likely to use the custom assembly load context that we see here.

For migrating applications, I am likely to use the default assembly load context that we see in the other solution.

Once migration is complete, I would look at how easy/difficult it would be to move to a custom assembly load context to mitigate issues in the future.

There may even be a hybrid solution out there to be explored. Things are always interesting in programming.

Wrap Up
Using a custom assembly load context is something I didn't think I would need to do. I am a bit sad that doing things the recommended way means there is a lot more to learn. Reflection and dynamic loading are a lot to handle on their own. When we have to do our own assembly resolution, that makes things a bit harder to approach.

There is one thing I need to point out. Both of these solutions currently have a problem running from the command line (with "dotnet run") or with the VS Code debugger. It looks like a bug in the command-line tools. But I need to do a bit more research on that (and my next article will describe the behavior that I've found).

Keep exploring, and feel free to leave any comments or questions you may have.

Happy Coding!

Wednesday, January 22, 2020

Building a Web Front-End to Show Mazes in ASP.NET Core MVC

Recently, I showed a web application that displays generated mazes (Generating Mazes in a Browser).




Here's how I built it and why.

Note: I'm not pretending that this is anything amazing. But I'm not often in the web front-end world, so this was a bit of a challenge for me. In addition, I realized that I was able to put this web application together relatively quickly because I understood the conventions of ASP.NET MVC; I'll be going into detail about those conventions in a later article.

Code samples are available on GitHub: jeremybytes/mazes-for-programmers.

The Existing Application
Previously, I wrote some C# code to generate mazes based on the book Mazes for Programmers by Jamis Buck (Amazon link).

The result was a console application that generated text output and graphical output.



If you look closely, you will find that these outputs represent the same maze.

You can run this code yourself by running the "DrawMaze" project from the GitHub repo mentioned above.

Shortcomings
The shortcomings of the application can be seen in the Main method of the Program.cs file (from the Program.cs file in the DrawMaze project):


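A sketch of that method (the ColorGrid constructor arguments -- a size and a color -- are assumptions; the actual signature may differ):

    static void Main(string[] args)
    {
        // Grid size, color, and algorithm are all fixed at compile time.
        IMazeGenerator generator =
            new MazeGenerator(
                new ColorGrid(15, MazeColor.Teal),
                new RecursiveBacktracker());

        CreateAndShowMaze(generator);
        Console.ReadLine();
    }
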
The Main method creates a MazeGenerator object and passes in a ColorGrid (that represents the maze grid itself) and a RecursiveBacktracker -- this is the maze algorithm that will be used.

But there are more algorithms to use. Here's the listing from the "Algorithms" project in Visual Studio (also in the Algorithms project on GitHub):


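(The listing includes classes such as BinaryTree, HuntAndKill, RecursiveBacktracker, and Sidewinder -- the algorithms used later in this article -- among others.)
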
One of the main things I wanted to do was to see the differences in the algorithms. Some are biased along a diagonal, some generate longer paths, some generate twistier paths. For more information on the algorithms themselves, take a look at the articles listed in the README on GitHub.

To switch to a different algorithm with the console application, I had to recompile the application. The same is true for changing the size and color of the maze.

In addition, the current implementation is not cross-platform due to the way the graphical maze is displayed. See this article for an explanation of that: Converting .NET Framework to .NET Core -- UseShellExecute has a Different Default Value.

Solution
  • Cross-Platform: A browser-based solution would fix my cross-platform problem. Instead of saving the output to a file, I would stream it to a web page. This would work on any supported platform that has a web browser.
  • Parameters: A query string on the web page would allow me to change the algorithm, size, and color by changing the URL (no recompiling).
  • Parameter Selection: A page with data entry fields would eliminate the need to craft a query string for parameters.
I've had some experience with ASP.NET MVC, so I was confident that I could get this working.

False Starts and Missteps
I had several false starts and missteps along the way. For example, I started with using the "ASP.NET Core MVC" template on a new project. I got the basics working, but then everything fell apart when I tried to add parameters.

Then I tried starting with the "Empty ASP.NET Core" template. I got the site working (meaning, I could stream the image to the browser and even collect the parameters), but it was a bit of a mess since the template did not include any pages that I could start with for parameter collection.

So I ended up taking what I learned from the "empty" project and starting over again with the "MVC" project. I got that project working (mostly) as I want it to, so that's where things are today.

I won't go into those details here. That's something you'd get from watching a Twitch stream (and I move too slowly to make a Twitch stream interesting). I mainly wanted to get across that things don't work out the first time, and there's often fumbling and frustration involved.

Adding a MazeController
In the web application, I added a MazeController class. With the default routing that is set up in ASP.NET Core MVC, this means that I could navigate to "https://{base-address}/maze" to get to the page.

Again, I will cover this default routing scheme in an upcoming article.

Getting a Bitmap of the Maze
What I needed was a bitmap of the maze to stream back to the browser. Let's go back to the Main method of the "Program.cs" file from the console application that we saw above (from the Program.cs file in the DrawMaze project).


Above, we saw the MazeGenerator object. But we need to see how that is used. This is in the CreateAndShowMaze method (also in the Program.cs file in the DrawMaze project):


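A sketch of that method (the "Get" method parameters and the file-display details are assumptions):

    private static void CreateAndShowMaze(IMazeGenerator generator)
    {
        // Generating is a separate step so the same maze
        // can be rendered in multiple formats.
        generator.GenerateMaze();

        string textMaze = generator.GetTextMaze();
        Console.WriteLine(textMaze);

        Image graphicMaze = generator.GetGraphicalMaze();
        graphicMaze.Save("maze.png");

        // Open the saved image in the default viewer. In .NET Core,
        // UseShellExecute must be set explicitly (see the article
        // linked above).
        Process.Start(new ProcessStartInfo("maze.png") { UseShellExecute = true });
    }
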
This calls "GenerateMaze" on the MazeGenerator.

"GetTextMaze" returns a string that represents the maze in text format.

"GetGraphicaMaze" returns a bitmap. This is the thing that we actually need.

It seems strange to have "GenerateMaze" as a separate step, but this gives us better control over when a new maze is generated so that we can display the same maze in different output formats. As a side note, if you call either of the "Get" methods when a maze has not already been generated, it will automatically generate the maze.

Extracting Things Out
For the "MazeController", I needed to have a "MazeGenerator" to work with. In the original code, the "MazeGenerator" was part of the console application project. So I moved the "MazeGenerator" class (and the "IMazeGenerator" interface) into a separate class library. This way, I could reference it from the console application and the web application. This code is in the "MazeGeneration" project on GitHub.

The Initial MazeController
Now back to the MazeController class. We can use most of the same code as the console application. This code is in the MazeController.cs file in the GitHub project.

Note that the code on GitHub is a bit more complex than what we will see in this section because it includes the parameters.

Here is the initial "Index" method:


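A sketch of that method (the Image type here assumes System.Drawing):

    public IActionResult Index()
    {
        Image maze = Generate();                      // (1) generate the bitmap
        byte[] imageBytes = ConvertToByteArray(maze); // (2) convert to bytes
        return File(imageBytes, "image/png");         // (3) stream with a content type
    }
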
Using the default routing in ASP.NET Core MVC, if we navigate to "https://{base-address}/maze", the "Index" method on the "MazeController" class will be called. This method returns an IActionResult. The IActionResult can be a variety of things. We'll talk about that a bit more in an upcoming article.

This method has 3 steps. (1) it generates the bitmap image that we want to return, (2) it changes the bitmap into a byte array, (3) it returns that byte array with a content type. The result of the last 2 steps is to stream an image file back to the browser (I stole this code from the internet).

The "File" that we use here is not a file on the file system that we think about normally. This a method in ControllerBase that returns a "FileContentResult". Again, we won't go into those details here, but stay tuned for another article.

Here is the code from the "Generate" method (again, it will look a bit different in the GitHub repo):


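A sketch (the default size and color shown are illustrative):

    private Image Generate()
    {
        IMazeGenerator generator =
            new MazeGenerator(
                new ColorGrid(20, MazeColor.Purple),
                new RecursiveBacktracker());
        return generator.GetGraphicalMaze();
    }
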
This is the same code that we had in the console application. It creates an instance of the MazeGenerator and then calls "GetGraphicalMaze".

The "ConvertToByteArray" method turns our image into a byte array that we can use for streaming:


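A sketch, using System.Drawing and System.Drawing.Imaging:

    private byte[] ConvertToByteArray(Image image)
    {
        using (var stream = new MemoryStream())
        {
            // Save the image into the stream as a PNG,
            // then hand back the raw bytes.
            image.Save(stream, ImageFormat.Png);
            return stream.ToArray();
        }
    }
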
This uses a MemoryStream to save the image to a byte array. In this case, we are using the "PNG" format for the image.

Just Enough Code
This is just enough code to get a result. We can run the web application and navigate to "https://localhost:5001/maze" and get the following result:


Success!

I was pretty excited when I got this to work. It's a small step (particularly to developers who do this all the time), but it was something new for me.

Now that I had the basics working, I could work on adding parameters.

Adding Parameters
For the parameters, I take advantage of some of the magical auto-wiring that happens with ASP.NET Core MVC.

To start with, I added parameters to the "Generate" method (this is the final method from the MazeController.cs file):


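A sketch of the parameterized version:

    private Image Generate(int size, IMazeAlgorithm algorithm, MazeColor color)
    {
        IMazeGenerator generator =
            new MazeGenerator(
                new ColorGrid(size, color),
                algorithm);
        return generator.GetGraphicalMaze();
    }
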
This lets us pass in the size, algorithm, and color.

Note that color is a "MazeColor". This is an enumeration with the valid values in it. Due to some weirdness in the way the colors are generated, there are a limited number that we can choose from (at least for now -- I might spend some time changing this).

The "MazeColor" enum is in the "ColorGrid.cs" file in the MazeGrid project:


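A sketch of the enum (only the "White = 0" value is significant; the order of the rest is an assumption):

    public enum MazeColor
    {
        White = 0,
        Teal,
        Purple,
        Mustard,
        Rust,
        Green,
        Blue,
    }
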
The values are White (default), Teal, Purple, Mustard, Rust, Green (more of a pea green), and Blue.

The Size Parameter
We'll add parameters to the "Index" method one at a time so we can see how this works. Here's an updated "Index" method with a "size" parameter added (this is not the final version of the MazeController.cs file in MazeWeb):


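A sketch of this version:

    public IActionResult Index(int size)
    {
        // Defaults for the Generate parameters
        int mazeSize = size > 0 ? size : 15;
        var algorithm = new RecursiveBacktracker();
        var color = MazeColor.Purple;

        Image maze = Generate(mazeSize, algorithm, color);
        byte[] imageBytes = ConvertToByteArray(maze);
        return File(imageBytes, "image/png");
    }
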
In this method, we created some default values for the new parameters of the "Generate" method. The size defaults to 15, the algorithm to the Recursive Backtracker, and the color to purple.

But we also added a "size" parameter to the Index method. And this is where things get fun. ASP.NET Core MVC tries to fill in parameters however it can. It will check query strings, form values, and routes to see if it can find a matching value. If it finds something it can use, it fills it in; otherwise, it leaves the default.

So here, we're looking for a "size" value. If it doesn't exist, then our parameter will be the default value for an integer (which is 0). In the body of the method, we check to see if the "size" parameter has a value greater than 0. If it does, then we use it; otherwise we use the value of "15".

Setting Size with a Query String
Let's try things out. First, we'll use the same URL that we did before: https://localhost:5001/maze


This uses the default size of "15" (and we can see it is smaller than the default "20" size that we had earlier).

But we can add a query string to set the size: https://localhost:5001/maze?size=75


And now we can see that we can set the size to whatever we want.

Setting the Color
Next, we'll set the color. For this, we'll add another parameter to the Index method (again, this is not the final version of the MazeController.cs file in MazeWeb):


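A sketch of this version:

    public IActionResult Index(int size, MazeColor color)
    {
        int mazeSize = size > 0 ? size : 15;
        var algorithm = new RecursiveBacktracker();

        Image maze = Generate(mazeSize, algorithm, color);
        byte[] imageBytes = ConvertToByteArray(maze);
        return File(imageBytes, "image/png");
    }
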
Here we added a "MazeColor" parameter to the method. Notice also that we got rid of our variable that was set to purple.

Let's see how the page behaves now. First, we'll pass in "Green" using this URL: https://localhost:5001/maze?size=50&color=Green


Even though the type is an enumeration, the ASP.NET Core MVC infrastructure does a good job of parsing the parameter. Although we pass in a "string" as part of the query string, it is converted to the correct value of the enum.

But what if we pick something that doesn't match up, like pink? Let's try it: https://localhost:5001/maze?size=50&color=Pink


This gives us no color at all. Well, technically, the selection is "White". Notice in the enumeration that we showed earlier that "White" is set to the value "0". Enums are integers underneath. So if ASP.NET Core MVC cannot parse the parameter (or find it at all), then it uses the default value for that type. In the case of integers (and enums), that value is "0". The result is that we end up with a "White" grid.

If we were to leave the parameter off completely, then we would also get a white grid.

Setting the Algorithm
The last parameter is the algorithm. I saved this for last since it is the most complex. Here is the final code for the "Index" method (this is the one that you'll find in the MazeController.cs file in the MazeWeb project):


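A sketch of the final version (variable names are assumptions; the logic follows the walkthrough below):

    public IActionResult Index(int size, MazeColor color, string algo)
    {
        int mazeSize = size > 0 ? size : 15;

        // Default algorithm, kept in case the parameter cannot be parsed
        IMazeAlgorithm algorithm = new RecursiveBacktracker();

        if (!string.IsNullOrEmpty(algo))
        {
            // All of the algorithms are in the same assembly
            Assembly algorithmAssembly = typeof(RecursiveBacktracker).Assembly;

            // Fully-qualified name; no exception on failure; case-insensitive
            Type algoType = algorithmAssembly.GetType("Algorithms." + algo, false, true);

            if (algoType != null)
                algorithm = Activator.CreateInstance(algoType) as IMazeAlgorithm;
        }

        Image maze = Generate(mazeSize, algorithm, color);
        byte[] imageBytes = ConvertToByteArray(maze);
        return File(imageBytes, "image/png");
    }
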
The "algo" parameter has been added as a string type. That makes things a little more complicated since the "Generate" method needs an instantiated object that implements the "IMazeAlgorithm" interface.

Let's walk through the middle part of the code.

First, we still have the "algorithm" variable that is set to a "RecursiveBacktracker". We need to keep this in case we are unable to parse the parameter.

Next, we have an "if" statement to see if the parameter is empty. If it is empty, then we will use our default.

If it is not empty, then we will use reflection to try to load an object from an assembly. In this case, all of the maze algorithms are in the same assembly (the Algorithms project).

The first line in the "if" block loads the assembly that the "RecursiveBacktracker" is in. This gives us access to all of the algorithms in that assembly.

Next, we use the "GetType" method on the assembly to try to locate the algorithm. The first parameter of the "GetType" method is the fully-qualified type name. So we want something like "Algorithms.RecursiveBacktracker". We don't expect someone to pass in the namespace ("Algorithms") as part of the query string, so we add it on here.

The second parameter of "GetType" is whether we want an error to throw an exception. In our case, we'll say "no exceptions".

The third parameter of "GetType" is case-insensitivity. The "true" value means case doesn't matter. I primarily set this because when I was testing I kept typing "SideWinder" instead of "Sidewinder".

If we can find the type successfully (meaning the "algo" variable is not null), then we use the Activator to create an instance of the type. The "CreateInstance" method returns "object", so we need to cast it to the "IMazeAlgorithm" interface. By using the "as" operator, we will not get an exception if the cast fails. If it fails, we get "null".

In looking at this code, I need a bit more safety built in (in case we end up with a null). It's not too big of a worry since the only types in the assembly are algorithms that implement IMazeAlgorithm. But it probably should be hardened up a bit.

By the way, I know these things about Assembly.GetType because I've been looking into that method as well as Type.GetType and how assemblies are loaded in another set of articles: Using Type.GetType with .NET Core & Type.GetType Functionality has not Changed in .NET Core. Plus, I've been working on a follow-up.

Trying Out the Algorithms
So let's try this. Let's try the "BinaryTree" algorithm: https://localhost:5001/maze?size=30&color=Blue&algo=BinaryTree


The Binary Tree algorithm has a strong diagonal bias (meaning it tends to create diagonal paths through the center of the maze). So we can see that this algorithm is working.

Here's another using "HuntAndKill": https://localhost:5001/maze?size=30&color=Blue&algo=HuntAndKill


This tends to create long paths.

Better Parameter Collection
So things are working. But it is a bit fiddly. Right now, someone needs to know how to create a query string with the right values to use this. It would be much better if we could collect parameters.

And that's what I did. I modified the "Index.cshtml" file for the "Home" view. You can see the file on GitHub: Index.cshtml file in the "Views/Home" folder.

I won't walk through all of the details, but let's hit some highlights. If you're not familiar with ASP.NET Core MVC, I'll cover these bits in more detail in the upcoming article mentioned above.

First, let's look at the "form" element that was added to the page:


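A simplified sketch of the form:

    <form action="/maze" method="post">
        <!-- parameter fields shown below -->
    </form>
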
The "action" tells us that this is the same as navigating to "/maze" (which is what we have been doing manually). This has the effect of posting the values in the form back to the MazeController.

Before we look at the contents of the form, notice the "@model" at the top of the page. Here is the "MazeModel" object mentioned there (from the MazeModel.cs file in the Models folder):


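A sketch of the model (the property names other than "Size" are assumptions):

    public class MazeModel
    {
        public int Size { get; set; }
        public MazeColor Color { get; set; }
        public string Algorithm { get; set; }
    }
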
This is a set of properties that represent the parameters that we need for the web page.

Since these are part of the model, we can create UI elements for them fairly easily. Here are the "Size" and "Color" parameters in the form (from the Index.cshtml file in the Views/Home folder):


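A sketch of those fields (the exact helper arguments are assumptions):

    Size: @Html.TextBoxFor(m => m.Size, new { @Value = 15 })

    Color: @Html.DropDownListFor(m => m.Color, new List<SelectListItem>
    {
        new SelectListItem { Text = "White",  Value = "White" },
        new SelectListItem { Text = "Teal",   Value = "Teal" },
        new SelectListItem { Text = "Purple", Value = "Purple" },
        // ...one item for each MazeColor value
    })
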
The "Html.TextBoxFor" method creates a text box for the "Size" property in the model. (The "@Value" part is supposed to set a default for that field, but it's not working right -- there's still more work to do).

The "Html.DropDownListFor" method creates a drop-down list for the Colors. Here we can match up the value the user sees (the "Text") with the value that we pass to the controller ("Value"). In this case, the text and the value are the same.

The biggest advantage that we get from the drop-down list is that we have a selection of valid values to choose from. No more putting in "Pink" and getting unexpected results.

Things are similar for the algorithm selection:


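A sketch (the display names are assumptions):

    Algorithm: @Html.DropDownListFor(m => m.Algorithm, new List<SelectListItem>
    {
        new SelectListItem { Text = "Binary Tree",           Value = "BinaryTree" },
        new SelectListItem { Text = "Hunt and Kill",         Value = "HuntAndKill" },
        new SelectListItem { Text = "Recursive Backtracker", Value = "RecursiveBacktracker" },
        new SelectListItem { Text = "Sidewinder",            Value = "Sidewinder" },
        // ...
    })
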
In this case, the text and values are a little different. The good part is that we know what the valid values are, and we also don't need to worry about typing in things just right.

This is enough to get our parameter collection screen working.



And now it's really easy to run mazes of different sizes in different colors using different algorithms!

Don't forget that you can get the final code on GitHub: jeremybytes/mazes-for-programmers.

How Does It Work?
How does the controller get the parameters? If we look at the address box in the browser, they aren't being sent as a query string. Since the "form" is set up as a "post", the form data is posted to the MazeController. As mentioned earlier, ASP.NET Core MVC is really good at figuring out parameters on controller methods.

It looks in the form data and finds values that match up with the parameters. The parameter names are not exactly the same (the case is different for one thing). But the infrastructure is able to figure everything out. It's pretty cool once you get used to it.

Hitting Goals
After all this, did I hit the goals for this project?

Cross-Platform?
Yes! I confirmed the solution works on macOS. Here's a screenshot of it running locally using Safari:


One note about a problem I ran into: The first time I ran the application, I got a GDI+ runtime error, specifically about a missing dll. Fortunately, a quick search came up with this article: SOLVED: System.Drawing .NETCore on Mac, GDIPlus Exception.

The solution is to install the GDI+ mono library. On macOS, this is easy to install with "brew install mono-libgdiplus". Check the article for more details.

Parameters?
Yes! The application has parameters for maze size, color, and algorithm. Putting together a query string in the right format allows this to be changed quickly.

Parameter Selection?
Yes! Even though parameters are available, using a query string is a bit brittle. It is easy to get wrong and hard to know which values are valid. The parameter web page gives us easy access to the available parameter values and makes things much less brittle.

Overall, I hit the problems that I was trying to solve with this approach.

Wrap Up
As mentioned, this isn't groundbreaking. But I had fun doing it. There are still a few things to fix. I haven't figured out how to get default values into my parameter form. I've tried a few things (including the current code that we saw above), but haven't had luck so far.

In addition, rather than streaming back a file that takes up a whole page, it would be interesting to stream back the file to another page (I actually had that working, but couldn't figure out how to do the parameters). So I'll be pulling this out from time to time.

And I owe you another article as well. I spent a few hours putting this website together (even with the problems that I ran into). I realize that the reason I was able to put it together so quickly is that I know the conventions of ASP.NET Core MVC, in particular how the controllers and views hook up, how the default routing works, how query string parameters get to controller methods, and how form data gets to controller methods. I'll go through all of these things in a future article.

Happy Coding!