Wednesday, February 5, 2020

Set the Working Directory in Visual Studio Code (or better yet, eliminate the requirement on the working directory)

In the last article, I showed what appeared to be a bug in the Visual Studio Code debugger and the .NET Core CLI (command-line interface). The issue stems from the fact that Visual Studio Code and Visual Studio 2019 use different default values for "working directory" when debugging.

Default "Working Directory":
  • The Visual Studio 2019 debugger uses the executable directory
  • The Visual Studio Code debugger uses the project directory
The result is that an application that relies on the "current working directory" to find files will fail in strange ways. And that's just what the last article showed.
In Visual Studio Code, you can change the debugger / runner working directory in the "launch.json" file.
The code for this article is available on GitHub: jeremybytes-understanding-interfaces-core30. We are specifically using the completed/04-DynamicLoading-Plugin folder. To make things more interesting, some of the code samples use the "master" branch, and some use the "UsingWorkingDirectory" branch. These branches will be noted when code is shown.

Setting the Working Directory in Visual Studio Code
The specific application we are running is "PeopleViewer" -- a WPF application that uses a dynamically-loaded SQL data reader.

Here is the default "launch.json" file that was created by Visual Studio Code for the PeopleViewer project (from the launch.json file on the "UsingWorkingDirectory" branch):
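The generated file looks something like this (a sketch of a typical default; the generated names and values may differ slightly by project and VS Code version):

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": ".NET Core Launch (console)",
            "type": "coreclr",
            "request": "launch",
            "preLaunchTask": "build",
            "program": "${workspaceFolder}/bin/Debug/netcoreapp3.1/PeopleViewer.dll",
            "args": [],
            "cwd": "${workspaceFolder}",
            "console": "internalConsole",
            "stopAtEntry": false
        }
    ]
}
```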


This has a "cwd" setting which stands for "current working directory".

The default for this is the "${workspaceFolder}" which in this case means the project folder for the PeopleViewer application.

We can also see that the "program" setting references the output folder (workspace + bin/Debug/netcoreapp3.1/). This is the program that will be executed when running or debugging from Visual Studio Code.

Side note: .NET Core creates .dll files for everything (including the desktop application that we have here). This .dll can be executed from this directory on the command line using "dotnet .\PeopleViewer.dll". In addition, the compiler creates an executable ("PeopleViewer.exe") to make it easy to run as a separate application.

We can fix the issue by setting the working directory to be the same as the output folder:
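Only the "cwd" setting needs to change (a sketch; the other generated settings stay as they were):

```json
"cwd": "${workspaceFolder}/bin/Debug/netcoreapp3.1"
```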


Now the value is "${workspaceFolder}/bin/Debug/netcoreapp3.1". This will set the working directory to the same as our executable.

And if we run the application from Visual Studio Code (with or without debugging), it will work as expected.


SUCCESS!

Sort of. This doesn't eliminate all of the issues that we saw.

Issues from the .NET Core CLI
This doesn't change the problem that we saw in the last article when running the application from the .NET Core CLI.

We can open the command line to the project folder and type "dotnet run":


The application starts.


But if we click the button, the application exits (with an unhandled exception).


This shows that we have the same problem with the working directory. The CLI tools do not use the "launch.json" file.

A better solution is to eliminate the reliance on the working directory.

Eliminating the Requirement on the Working Directory
This application uses a SQLite database which is in the "People.db" file on the file system. The application build process makes sure that the "People.db" file is copied into the output folder, so that "PeopleViewer.exe" and "People.db" are both in the same folder.

Here is the configuration for the SQLite EFCore DBContext (from the SQLReader.cs file on the "UsingWorkingDirectory" branch):
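The configuration looks something like this (a sketch; the surrounding DbContext class members are simplified, but the connection string matches what is described next):

```csharp
// Sketch of the SQLite EF Core configuration. The class layout is
// simplified; the important part is the relative "Data Source" value.
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    optionsBuilder.UseSqlite("Data Source=People.db");
}
```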


This uses the setting "Data Source=People.db". This tells the SQLite driver that we are using a file called "People.db". The problem is that it looks for that file in the working directory.

Because it looks for the file in the working directory, the application runs fine when executed from the application folder (either from File Explorer or from the command line). It also works in the Visual Studio 2019 debugger that uses the application folder as the working directory. And it will work from Visual Studio Code if we set the "cwd" value as shown above.

The Problem
The way that I discovered this was a problem was with Git. When I went to look at changes to the project, I noticed that there was a new "People.db" file in the project folder.


This is what led me to the discovery. The "People.db" file should not be in this folder; the source is somewhere else (an "AdditionalFiles" folder), and it is explicitly copied to the output folder.

So how did it get here?
If SQLite does not find the database file, it creates a new one.
Since this new file is empty, that explains the exception which says it cannot find the "People" table in the database.

Fixing the Issue
What we really want to do is fix the database connection so that it does not rely on the working directory.

A CSV data reader in this same project does not suffer from this problem even though it is also looking for a file (People.txt) in the same folder as the executable. This is because the CSV data reader is more explicit about the file location. Here is the configuration code (from the CSVReader.cs file on the master branch):
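The relevant line looks something like this (a sketch; the field name is an assumption):

```csharp
// Sketch: build the full path from the executable's folder so the
// current working directory does not matter. (Field name assumed.)
private string filePath = $"{AppDomain.CurrentDomain.BaseDirectory}People.txt";
```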


This uses the path "AppDomain.CurrentDomain.BaseDirectory" as the path to find the text file. This value is the same folder as the executable, so it works regardless of the current working directory.

To fix the SQL data reader, we can add the same path (from the SQLReader.cs file on the master branch):
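The updated configuration looks something like this (a sketch, mirroring the CSV reader's approach):

```csharp
protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
{
    // Anchor the database file to the executable's folder instead of
    // relying on the current working directory.
    optionsBuilder.UseSqlite(
        $"Data Source={AppDomain.CurrentDomain.BaseDirectory}People.db");
}
```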


Now we have "Data Source={AppDomain.CurrentDomain.BaseDirectory}People.db".

This adds the full directory to the "People.db" data source value, so SQLite will be able to find this file more reliably.

Now if we re-build everything (making sure to explicitly rebuild the "PersonReader.SQL" project so that the output .dll gets to the right folder), the application runs fine from the .NET Core CLI.


"dotnet run" starts the application.


And if we click the "Dynamic Reader" button...


It works!

Awesomeness and Frustration
One thing that I really like about .NET Core is the set of options we have to build and run. We can use Visual Studio 2019, Visual Studio 2019 for Mac, Visual Studio Code, or the .NET Core CLI.

My goal when building samples in .NET Core is for them to run as consistently as possible in these various development environments. This particular application will not work with Visual Studio 2019 for Mac (since it relies on WPF -- a Windows-only solution), but my website that generates mazes does work on macOS and Linux.

Finding the differences in the working directory among Visual Studio 2019, Visual Studio Code, and the .NET Core CLI was frustrating. I was sidetracked into figuring out why this was working in some places but not others. I "accidentally" ran across a clue by finding the unexpected "People.db" file. Without that, I was completely lost on the cause. And that's quite a frustration.

But finding this problem also led to a more robust solution. Now that I know that SQLite relies on the current working directory, I can make sure that my connection strings are more specific. This is better overall.

I'll be sure to add more hurdles and solutions as I come across them. Hopefully this will be a help to someone else.

Happy Coding!

Tuesday, January 28, 2020

Apparent Bug in the Visual Studio Code Debugger / .NET Core CLI

So I ran into some weirdness with a project. It runs fine in the Visual Studio 2019 debugger but throws a runtime exception in the Visual Studio Code debugger. The application builds fine from Visual Studio 2019, Visual Studio Code, and "dotnet build". These executables run fine on their own. But running the application with "dotnet run" or from Visual Studio Code results in the exception.

I have .NET Core SDK version 3.1.101 installed (the current at the time of this article).

Summary
  • Debug from Visual Studio 2019: SUCCESS
  • Debug from Visual Studio Code: FAILURE
  • Run from executable (VS or CLI build): SUCCESS
  • Run from Visual Studio Code: FAILURE
  • Run from CLI (dotnet run): FAILURE
This impacts a particular project that uses EFCore / SQLite. I'm guessing that there's something weird in the CLI tools since the Visual Studio 2019 debugger works fine.

The code is available in GitHub: 04-DynamicLoading-Plugin (on the "UsingWorkingDirectory" branch). The specific project is the "PeopleViewer" project. See below for sample runs.

Update 29 Jan 2020
I was able to track down the issue. When looking at code changes in Git, I noticed that running the code in Visual Studio Code or the CLI (dotnet run) created a People.db file in the project folder. This file is normally in the executable folder. This leads me to the following:
Visual Studio 2019 debugger uses the executable folder as the working directory.
Visual Studio Code / CLI uses the project folder as the working directory.
This explains the SUCCESS/FAILURE modes listed above and the differences in behavior described in the rest of this article.

This seems like a problem to me. Even though the code uses the same runtime (.NET Core 3.1), applications that work under one debugger (such as Visual Studio 2019) may not work under another debugger (such as Visual Studio Code).

Also, for those who are wondering why the sample application uses files loaded from a directory, the same behavior is exhibited in an application with compile-time links. This sample project is available on GitHub: 03-Extensibility (also on the "UsingWorkingDirectory" branch).

I'll leave it up to the tool developers to decide where the problem actually lies. But this looks like a bug to me.

End Update

Update 5 Feb 2020: The working directory for Visual Studio Code can be changed from the default value in the "launch.json" file (Thanks to Craig in the comments; I overlooked this). Additional information: Set the Working Directory in Visual Studio Code (or better yet, eliminate the requirement on the working directory).

Successful Run
The project runs fine from Visual Studio 2019.

Open "DynamicLoading.sln".


"Start Debugging" the PeopleViewer project.


The application starts up.


Click the "Dynamic Reader" button. (It takes a few seconds for the SQL DB stuff to spin up the first time.)


SUCCESS!
The data is displayed and a popup shows that it is using the SQL data reader.

This same scenario works when we run the executable from File Explorer.


Running the project from the "bin/Debug/netcoreapp3.1" folder gives the same results.

Failure Run
The project fails in Visual Studio Code.

Open the "PeopleViewer" folder in Visual Studio Code.


From the "Debug" menu, choose "Start Debugging".


The application starts up.


Clicking the "Dynamic Reader" button results in an exception.


FAILURE!
The exception comes from SQLite. The error is "no such table: People".

So it can find the table in the Visual Studio 2019 debugger, but not the Visual Studio Code debugger.

Other Tests
Here's what happens from the CLI.

Navigate PowerShell to the "PeopleViewer" folder and type "dotnet build".


The build succeeds.


Type "dotnet run" to run the application.


The application starts successfully.


But clicking the "Dynamic Reader" button results in a run-time exception and the application exits.


Navigate to the output folder "PeopleViewer\bin\Debug\netcoreapp3.1"


Then run the application directly with ".\PeopleViewer.exe"


The application starts.


Clicking "Dynamic Reader" works.


This tells me that the builds are fine. There is something wrong with the Visual Studio Code debugger and the CLI runner.

One last test is to run the application in Visual Studio Code without debugging. In Visual Studio Code, choose "Debug / Start Without Debugging".


The application starts.


Clicking the "Dynamic Reader" button shuts down the application. The terminal window in Visual Studio Code shows the same "no such table: People" error.


Wrap Up
Trying to run the application in these different ways leads me to believe that there is something wrong with the CLI runner and/or the Visual Studio Code debugger.

The Visual Studio 2019 debugger runs the application fine. The compiled application from Visual Studio 2019 runs fine from File Explorer.

Visual Studio Code and the CLI "dotnet build" do not appear to be the problem. The generated executable runs fine.

"dotnet run" seems to be the culprit. This looks like a bug to me, but I'm not exactly sure how to report it. I'm working on how to get the right person to look at it, and I'll be sure to post any updates here.

Monday, January 27, 2020

Dynamically Loading Types in .NET Core with a Custom Assembly Load Context

A few weeks ago, we looked at dynamically loading .NET Standard assemblies in .NET Core. This approach loaded the assemblies into the default assembly load context. Another option is to create a custom assembly load context, and that's what we'll look at today.

To use this solution, a few changes need to be made to the projects we saw earlier:
  • Create a custom assembly load context
  • Switch from using "Type.GetType" to looking into an assembly more directly
  • Change the dynamically loaded assemblies from .NET Standard to .NET Core
The end result is that we will be able to load a type at run time based on configuration (like the previous solution). After we look at the code, we will look at the differences and see why we might choose one or the other.

The code for this article is available on GitHub: jeremybytes/understanding-interfaces-core30, specifically in the "completed/04-DynamicLoading-Plugin" folder. The code shown in the previous article is in the "completed/04-DynamicLoading" (no plugin) folder if you would like to compare the two.

I won't go into the use case for this application. You can check the prior article's "Why Dynamically Load Types?" section to get an overview of that. The short version is that different clients use different data storage schemes, and we would like the application to be flexible enough to attach to a data store without needing to be recompiled.

Plugin Solution
The implementation shown in this article is based on a tutorial on the Microsoft docs site: Create a .NET Core application with plugins. This was mentioned in the prior article. I did not originally take this approach because it seemed a bit more complex than what I needed for the particular use case. There are some advantages, which is why we are looking at it more closely. In the end, we can decide whether those advantages warrant the added complexity.

Data Readers
The data readers are the objects that we want to select and load at run time. In this case, we have data readers that work with 3 different data sources: (1) web service, (2) text file, and (3) database.

Dependencies
It turns out that loading a single assembly is fairly easy. Things get more complicated when that assembly has its own dependencies. For example, the data reader for the web service uses Newtonsoft.Json to parse data. We need to make sure the appropriate assembly is loaded for that as well.

For this, we'll rely on the "deps.json" file that gets generated when we build the data reader class library (either .NET Standard or .NET Core). This file lives alongside the assembly itself (PersonReader.Service.dll). Here's an excerpt from PersonReader.Service.deps.json (there's a copy of this in the repository: PersonReader.Service.deps.json file):
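The excerpt looks something like this (a sketch; the version numbers are assumptions, and most entries are elided):

```json
{
  "targets": {
    ".NETCoreApp,Version=v3.1": {
      "PersonReader.Service/1.0.0": {
        "dependencies": {
          "Newtonsoft.Json": "12.0.3",
          "PersonReader.Interface": "1.0.0"
        }
      },
      "Newtonsoft.Json/12.0.3": {
        "runtime": {
          "lib/netstandard2.0/Newtonsoft.Json.dll": {}
        }
      }
    }
  }
}
```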


This shows that we have dependencies on "Newtonsoft.Json" and "PersonReader.Interface". And it also gives some information regarding the Newtonsoft.Json assembly that we need.

A bit later, we will look at the data reader projects in more detail, and we'll go a bit deeper into these dependencies. What we care about for now is that the list of dependencies is available at run time. Now we will head to the assembly load context to see how those are loaded.

Custom Assembly Load Context
The big part of the solution is a custom assembly load context. An assembly load context gives us control over how assemblies are loaded. In this case, we can give instructions on where to find the assemblies and their dependencies.

In addition, this gives us isolation. We can load multiple versions of the same assembly into our application as long as they are in different contexts. For example, the default context can have one version of Newtonsoft.Json, and another context can have a different version. I've had this problem in the past (where the easiest solution was just to change the version so everything used the same one).

Here's the custom context (from the ReaderLoadContext.cs file in the PeopleViewer project):
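The class looks something like this (a sketch based on the plugin tutorial's sample context):

```csharp
using System;
using System.Reflection;
using System.Runtime.Loader;

class ReaderLoadContext : AssemblyLoadContext
{
    private AssemblyDependencyResolver resolver;

    public ReaderLoadContext(string readerLocation)
    {
        // The resolver uses the reader assembly's location (and its
        // .deps.json file) to locate that assembly's dependencies.
        resolver = new AssemblyDependencyResolver(readerLocation);
    }

    protected override Assembly Load(AssemblyName assemblyName)
    {
        string assemblyPath = resolver.ResolveAssemblyToPath(assemblyName);
        if (assemblyPath != null)
        {
            return LoadFromAssemblyPath(assemblyPath);
        }
        return null;
    }

    protected override IntPtr LoadUnmanagedDll(string unmanagedDllName)
    {
        string libraryPath = resolver.ResolveUnmanagedDllToPath(unmanagedDllName);
        if (libraryPath != null)
        {
            return LoadUnmanagedDllFromPath(libraryPath);
        }
        return IntPtr.Zero;
    }
}
```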


This class has an AssemblyDependencyResolver; this will help us load the dependencies for the data readers that we saw above.

The constructor takes the location of the data reader assembly. This is the actual assembly name, so it would look something like "C:\application_path\ReaderAssemblies\PersonReader.Service.dll". This value gets passed into the AssemblyDependencyResolver. This gives a base location to look for dependencies for an assembly.

The Load and LoadUnmanagedDll methods are overrides of the base class (AssemblyLoadContext). Here is the code for the Load method (from the ReaderLoadContext.cs file):
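The Load override likely looks like this (a sketch matching the tutorial's sample):

```csharp
protected override Assembly Load(AssemblyName assemblyName)
{
    // Ask the resolver (seeded with the reader's location) for the
    // full path to the requested assembly.
    string assemblyPath = resolver.ResolveAssemblyToPath(assemblyName);
    if (assemblyPath != null)
    {
        return LoadFromAssemblyPath(assemblyPath);
    }
    return null;
}
```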


The parameter for the "Load" method takes an "AssemblyName" object. We will see what this is when we call this method later on.

The "Load" method uses the dependency resolver to get the assembly path and then loads the assembly. In addition to the assembly itself, the dependencies are also loaded.

The "LoadUnmanagedDll" method does something similar for unmanaged DLLs. Here is that method (also from the ReaderLoadContext.cs file):
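Adapted from the tutorial's sample context, it looks like this (a sketch):

```csharp
protected override IntPtr LoadUnmanagedDll(string unmanagedDllName)
{
    // Same pattern as Load, but for native libraries.
    string libraryPath = resolver.ResolveUnmanagedDllToPath(unmanagedDllName);
    if (libraryPath != null)
    {
        return LoadUnmanagedDllFromPath(libraryPath);
    }
    return IntPtr.Zero;
}
```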


Talking about loading unmanaged DLLs is outside the scope of this article (and also outside of my area of knowledge). This code came from the example assembly load context from the plugin tutorial mentioned earlier.

Loading the Data Readers
Now that we have the custom assembly load context, we can look at the code that uses the context and loads up a data reader. For this we have a factory class (ReaderFactory) with a "GetReader" method.

Here is the "GetReader" method (from the ReaderFactory.cs file):
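Assembled from the walkthrough that follows, the method looks something like this (a sketch; names such as "cachedReader" and the "ReaderType" setting key are assumptions):

```csharp
public static IPersonReader GetReader()
{
    // Return the cached reader if we already have one.
    if (cachedReader != null)
        return cachedReader;

    // Build the full path to the reader assembly.
    string readerAssemblyName =
        ConfigurationManager.AppSettings["ReaderAssembly"];
    string readerLocation =
        AppDomain.CurrentDomain.BaseDirectory
        + "ReaderAssemblies"
        + Path.DirectorySeparatorChar
        + readerAssemblyName;

    // Load the assembly (and its dependencies) into a custom context.
    var loadContext = new ReaderLoadContext(readerLocation);
    Assembly readerAssembly = loadContext.LoadFromAssemblyName(
        new AssemblyName(Path.GetFileNameWithoutExtension(readerLocation)));

    // Find the configured type and create an instance.
    string readerTypeName = ConfigurationManager.AppSettings["ReaderType"];
    Type readerType = readerAssembly.ExportedTypes
        .FirstOrDefault(t => t.FullName == readerTypeName);

    cachedReader = (IPersonReader)Activator.CreateInstance(readerType);
    return cachedReader;
}
```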


Let's go through this step by step.

The first 2 lines check to see if we already have a data reader cached. If so, we can just use that. We'll loop back around to caching after looking at the body of this method.

Loading the Assembly
The first step in creating a data reader is to load the assembly. Looking back to the custom assembly load context (ReaderLoadContext), we need 2 pieces of information. (1) The constructor needs the assembly file name with the path. (2) The "Load" method needs the assembly name.

In the App.config file for the application, we have the assembly name. This is in the "ReaderAssembly" setting (from the App.config file of the PeopleViewer project):
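The settings look something like this (a sketch; the "ReaderType" key name is an assumption, but the values match the sample project):

```xml
<configuration>
  <appSettings>
    <add key="ReaderAssembly" value="PersonReader.Service.dll" />
    <add key="ReaderType" value="PersonReader.Service.ServiceReader" />
  </appSettings>
</configuration>
```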


The "ReaderAssembly" value is the assembly file name without the path. With this, we can construct what we need for the ReaderLoadContext.

Here are a few lines from the GetReader method above (from the ReaderFactory.cs file):
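Those lines look something like this (a sketch):

```csharp
string readerAssemblyName =
    ConfigurationManager.AppSettings["ReaderAssembly"];
string readerLocation =
    AppDomain.CurrentDomain.BaseDirectory
    + "ReaderAssemblies"
    + Path.DirectorySeparatorChar
    + readerAssemblyName;
```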


The readerAssemblyName is set to the value from configuration: PersonReader.Service.dll.

The readerLocation variable is set to the path where we can expect to find this assembly. This appends the values together. The "AppDomain.CurrentDomain.BaseDirectory" gets us the location of the executable that we are running. So it will be the path to "PeopleViewer.exe" (the application).

Next, we append "ReaderAssemblies". This is the folder where we put the assemblies that we want to dynamically load (we'll see how they get into this folder a bit later).

Then we add a path separator ("\" in Windows) and the assembly name that we pulled from configuration.

The result is that readerLocation is something like "C:\PeopleViewerEXEFolder\ReaderAssemblies\PersonReader.Service.dll".

Using the Assembly Load Context
Now that we have these 2 values, we can use the ReaderLoadContext that we saw earlier. Here are the next few lines of the "GetReader" method (from the ReaderFactory.cs file):
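Those lines look something like this (a sketch):

```csharp
var loadContext = new ReaderLoadContext(readerLocation);
Assembly readerAssembly = loadContext.LoadFromAssemblyName(
    new AssemblyName(Path.GetFileNameWithoutExtension(readerLocation)));
```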


This creates an instance of ReaderLoadContext using the full path to the assembly file.

Next, we create an AssemblyName object. This code is a bit difficult to read (it was taken from the plugin tutorial), so we'll dissect it a bit.

The "Path.GetFileNameWithoutExtension(readerLocation)" will take the full assembly file name and strip off the path and the .dll extension. So this results in "PersonReader.Service". Using this value as a constructor parameter, we end up with an AssemblyName object.

From here, we call the "LoadFromAssemblyName" method on the load context. This uses the resolver that we created earlier to load the assembly (and its dependencies). It then returns an "Assembly" object that represents the data reader assembly.

We can look into this assembly and pick out the type(s) that we need.

Getting the Data Reader from the Assembly
Here is the rest of the "GetReader" method (from the ReaderFactory.cs file):
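Those final steps look something like this (a sketch; the "ReaderType" setting name is an assumption):

```csharp
string readerTypeName = ConfigurationManager.AppSettings["ReaderType"];
Type readerType = readerAssembly.ExportedTypes
    .FirstOrDefault(t => t.FullName == readerTypeName);
IPersonReader reader = (IPersonReader)Activator.CreateInstance(readerType);
return reader;
```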


This code is similar to the intermediate solution from the prior article (Dynamically Loading .NET Standard Assemblies in .NET Core).

First, we get the name of the type that we want to use from configuration. This is the fully-qualified type for the data reader. In this sample App.config above, this is "PersonReader.Service.ServiceReader".

The next line uses some LINQ to look at the ExportedTypes of the assembly. The ExportedTypes are all of the publicly accessible types in the assembly. "FirstOrDefault" will search those types and look for one where the FullName property matches our configuration value. If it cannot find the type, it returns null.

Once we have the type of the data reader, we use the Activator to create an instance of that type. At the same time, we cast this to the interface that we need: "IPersonReader".

The last step returns this value.

Caching the Data Reader
Since we are dynamically loading a type (and doing the reflection and assembly loading that goes along with that), we want to do this as few times as possible. In this application, we only need to do it once. So the ReaderFactory class is set up to cache the data reader instance so it can be reused.

Here are the caching bits (from the ReaderFactory.cs file):
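The caching pieces look something like this (a sketch; the field name is an assumption):

```csharp
private static IPersonReader cachedReader;

public static IPersonReader GetReader()
{
    if (cachedReader != null)
        return cachedReader;

    // ... load the assembly and find the type (code shown above) ...

    cachedReader = (IPersonReader)Activator.CreateInstance(readerType);
    return cachedReader;
}
```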


This class has a static IPersonReader field to hold the cached instance.

At the top of the GetReader method, we check to see if the cache is populated. If so, then we just return that value.

If it is not populated, then we load the assembly, find the type, and create an instance (the code we saw above). When we create the instance, we assign it to the cache field then return it.

This way, we are always returning a value from the cached field, and it will (hopefully) always be populated.

This is a fairly naive implementation of a cache, but it works fine here. Since there is no way to change the data reader while the application is running, we only need to load this up the first time. Each subsequent call can just use the cached value. We do not need to invalidate or expire the cache while the application is running.

With some creativity, we could probably incorporate the GetReader factory method into a lazy-load object that dynamically loads itself the first time it is used. But we'll save that for another time.

Additional Parts
Between the custom assembly load context and the data reader factory method, we have the core of the solution of dynamically loading a data reader. But there are still a few pieces to fit together:
  • Switching data readers from .NET Standard to .NET Core
  • Getting the dependencies to the output folder
  • Putting the assemblies where the application can find them
Updating the Data Readers
There are a few changes that we need to make to the data readers in order for them to work with this solution. The changes all revolve around making sure that we can load the dependencies for each data reader.

Project File Overview
There are a number of things in the project file that ensure that the assemblies and their dependencies will load properly. Here is the PersonReader.Service.csproj file (from the PersonReader.Service.csproj file in the PersonReader.Service project). The project files for the other data readers look similar.
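Put together, the project file looks something like this (a sketch; the package version and the exact copy command are assumptions):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
  </ItemGroup>

  <ItemGroup>
    <ProjectReference Include="..\PersonReader.Interface\PersonReader.Interface.csproj">
      <Private>False</Private>
    </ProjectReference>
  </ItemGroup>

  <Target Name="PostBuild" AfterTargets="PostBuildEvent">
    <Exec Command="xcopy &quot;$(TargetDir)*.*&quot; &quot;$(ProjectDir)..\ReaderAssemblies\&quot; /Y" />
  </Target>

</Project>
```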


We will be looking at these elements individually as we go.

.NET Core vs. .NET Standard
In the example from the earlier article, the data readers are .NET Standard projects. This was primarily done for migration purposes. These data readers were used in a variety of applications, but primarily a WPF client. Prior to .NET Core 3.0, WPF would only work with .NET Framework. To make the transition easier, I moved the data readers to .NET Standard 2.0 libraries. These libraries would still work with the .NET Framework WPF application, but I could also use them in ASP.NET Core applications. .NET Standard is useful for this type of migration or for multi-targeting solutions.

Our current application is .NET Core 3.1 (including the WPF client).

Unfortunately, .NET Standard libraries are not recommended for a dynamic loading solution.

Here is an excerpt from the tutorial to "Create a .NET Core application with plugins", specifically the section "Plugin target framework recommendations":

"Because plugin dependency loading uses the .deps.json file, there is a gotcha related to the plugin's target framework. Specifically, your plugins should target a runtime, such as .NET Core 3.0, instead of a version of .NET Standard. The .deps.json file is generated based on which framework the project targets, and since many .NET Standard-compatible packages ship reference assemblies for building against .NET Standard and implementation assemblies for specific runtimes, the .deps.json may not correctly see implementation assemblies, or it may grab the .NET Standard version of an assembly instead of the .NET Core version you expect." [emphasis mine]
This tells us that instead of using .NET Standard, we should use .NET Core libraries in order to ensure we get the right dependencies for the application. Since we have a .NET Core 3.1 application, the libraries should also be .NET Core.

The change to the libraries is as easy as updating the "TargetFramework" property in the project file.
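In the project file, that property looks like this:

```xml
<PropertyGroup>
  <TargetFramework>netcoreapp3.1</TargetFramework>
</PropertyGroup>
```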


Setting the target framework to "netcoreapp3.1" will generate a .NET Core 3.1 library.

Outputting Dependencies
We want to get the data reader assembly and all of its dependencies into the "ReaderAssemblies" folder that we saw above. One problem is that the default project settings do not do that.

Our project currently has 2 dependencies:
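In the project file, those dependencies look something like this (a sketch; the package version is an assumption):

```xml
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="12.0.3" />
</ItemGroup>

<ItemGroup>
  <ProjectReference Include="..\PersonReader.Interface\PersonReader.Interface.csproj" />
</ItemGroup>
```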


The first is a package reference to Newtonsoft.Json; this is a NuGet package that was added to the project.

The second is a project reference to PersonReader.Interface. This is a separate class library in the solution that contains the "IPersonReader" interface -- the abstraction that the application and data readers use.

If we build the class library and look at the output, we find that the package dependencies are not there.


This shows that we have the .dlls for the data reader (PersonReader.Service.dll) and the interface project (PersonReader.Interface.dll). But we do not have Newtonsoft.Json. This means that if we copy this to the "ReaderAssemblies" folder, we would have a problem: a dependency is missing.

We can add a setting to the project file that will change this. In the "PropertyGroup" section, we can add "CopyLocalLockFileAssemblies" and set it to true. Here's what that looks like (also in the PersonReader.Service.csproj file):
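The property group then looks like this:

```xml
<PropertyGroup>
  <TargetFramework>netcoreapp3.1</TargetFramework>
  <CopyLocalLockFileAssemblies>true</CopyLocalLockFileAssemblies>
</PropertyGroup>
```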


By setting this property to true and rebuilding, we can see that all of the dependencies are included:


Newtonsoft.Json.dll is now included in the output. This change is more pronounced for the SQL data reader since the output includes the required Entity Framework Core and SQLite assemblies.

Excluding Shared Dependencies
We do have a danger in our current output: the PersonReader.Interface.dll. The problem with this is that we also have this file in the main application (the WPF client). If we load up the assembly multiple times, there is a danger that the application will see the interface (IPersonReader) as two different types. And that would cause all sorts of problems.

The solution to this is to exclude the "PersonReader.Interface.dll" from the output. We can do this in 2 ways (but the results are the same).

Option 1:
In Visual Studio, we can set the "Copy Local" property on the dependency to "No". To do this, find the project reference in Visual Studio:


Right-click on the dependency and choose "Properties". This will open up the Properties window:


Then change the "Copy Local" option to "No". This will keep the files from being copied to the output folder.

Option 2:
The other option is to manually edit the .csproj file. Notice the "ItemGroup" for the project reference (from the PersonReader.Service.csproj file):
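The project reference then looks like this:

```xml
<ItemGroup>
  <ProjectReference Include="..\PersonReader.Interface\PersonReader.Interface.csproj">
    <Private>False</Private>
  </ProjectReference>
</ItemGroup>
```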


This has a "Private" property as part of the project reference. When this is set to "False" (as it is here), it means that this project reference is shared, so we do not need a separate copy of it for ourselves.

Note: if you have the project file open in the editor and make changes to the "Copy Local" property in the Visual Studio Properties window, you will see that setting the property sets the "Private" value in the project file.

For more information on this setting, go to the plugin tutorial and do a search for "Private".

The result is that the "PersonReader.Interface.dll" file is no longer included in the output:


This gives us what we want. We have the data reader assembly as well as the package dependencies, but we do not have the shared assembly.

The last step is to copy these files to an accessible location.

Copying Files to a Shared Location
The last change to the data readers is to copy the output files to a shared location. The plugin tutorial shows a brittle-looking search in relative paths from one project output folder to another project output folder.

I have opted to copy the output files to a "ReaderAssemblies" folder at the solution level. This folder can then be copied to the output folder for the WPF application (this is also what the sample from the prior article does).

The data reader projects have a PostBuild step to copy the output files. Here is the section from the project file (from the PersonReader.Service.csproj file):
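That section looks something like this (a sketch; the exact copy command and flags are assumptions):

```xml
<Target Name="PostBuild" AfterTargets="PostBuildEvent">
  <Exec Command="xcopy &quot;$(TargetDir)*.*&quot; &quot;$(ProjectDir)..\ReaderAssemblies\&quot; /Y" />
</Target>
```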


Because of the escaped quotes, this is a bit easier to read in the Visual Studio Project Dialog. To get there, right-click on the project in Visual Studio, choose "Properties" and then the "Build Events" tab.


This copies all files from the output folder (TargetDir) to the "ReaderAssemblies" folder that is one level up from the project folder (ProjectDir). This is a sibling folder to the data reader projects.


As a side note: I use "ProjectDir" as a reference instead of "SolutionDir" because the solution directory is not available when doing a "dotnet build" of an individual project on the command line.

If you want more information on build events, take a look at Using Build Events in Visual Studio to Make Life Easier.

Copying Files to the PeopleViewer Application
The last piece of the puzzle is to copy the files from the "ReaderAssemblies" folder that we just populated to the output folder for the application.

This is also a post-build step, but it is on the PeopleViewer project. Here is the event (as viewed in Visual Studio):
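The post-build event amounts to two copy commands (a sketch; the exact flags are assumptions):

```
xcopy "$(ProjectDir)..\AdditionalFiles\*.*" "$(TargetDir)" /Y
xcopy "$(ProjectDir)..\ReaderAssemblies\*.*" "$(TargetDir)ReaderAssemblies\" /Y
```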


This has 2 steps. The first copies the contents of the "AdditionalFiles" folder to the output folder of the application. This folder contains the actual data files: (1) People.txt -- the text file used by the CSV data reader, and (2) People.db -- the database file used by the SQL data reader.

The next step copies the contents of the "ReaderAssemblies" folder to the "ReaderAssemblies" folder in the output. The result is that we have a folder in the output with all of the data reader assemblies:


This is the folder that we reference in the data reader factory method, and it becomes a search folder for the assembly load context that we created.

Build Order
The last step in bringing things together is to control the build order. We want to make sure that all of the data reader projects are built (and their files copied to the shared location) before the PeopleViewer application is built (and the files are copied to the output location).

For this, we can set dependencies at the Solution level. In Visual Studio, right-click on the solution, and select "Properties". Then choose "Project Dependencies".


For the "PeopleViewer" project, I have added dependencies to the 3 data reader projects. There are no compile time references, so I had to add these manually. With this setting, all of the data reader projects will be built before the PeopleViewer project.

This will ensure that we have the right versions of files in the right places.

If we are doing individual project builds from the command line, this setting will have no effect, so we would need to make sure that we build the data readers first.
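For example, building in dependency order from the command line would look something like this (the project folder names here are illustrative -- use the actual project folders in the solution):

```
dotnet build ./PersonReader.CSV
dotnet build ./PersonReader.SQL
dotnet build ./PersonReader.Service
dotnet build ./PeopleViewer
```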

Working Solution
The goal of this type of solution is to be able to deploy just what the client needs.

In this scenario, if the client is using a web service, we would deploy the PeopleViewer executable (and its dependencies) along with a "ReaderAssemblies" folder that contains only the assembly and dependencies for the service data reader.

If we added a new client with different data needs, we would create the data reader and give them the configuration and "ReaderAssemblies" files for their environment.

There is no need to recompile the base application. This makes builds much easier to manage and issues much easier to handle.

Pros and Cons
So what are the pros and cons when we compare this to the previously shown solution? Let's review both solutions.

Other Solution (Default Assembly Load Context)
Here is the "GetReader" method from the other solution (from the ReaderFactory.cs file in other solution):


This code is very similar to the .NET Framework code. Someone who has used reflection in .NET Framework will be comfortable with it.

In addition, this works with .NET Standard libraries.

There is a risk of conflicting assemblies (i.e. different versions of the same assembly loaded). For this particular application, the risk is minimal due to the lack of complexity in the application. The core application and data readers are unlikely to need the same dependencies. But this is something to consider.

Pro: Familiar Code
Pro: Works with .NET Standard
Con: Risk of Conflicting Assemblies

This Solution (Custom Assembly Load Context)
Here is the "GetReader" method from this solution (from the ReaderFactory.cs file for this solution):


This solution is quite a bit different from what would be done in .NET Framework. The assembly loading bits are new, even for someone with existing experience with reflection.

This solution requires a move to .NET Core libraries. In the future, this won't be an issue since .NET Standard is most useful during migration to .NET Core. But it is a little painful during a step-by-step migration where libraries need to work in both .NET Framework and .NET Core environments.

This solution is a lot safer when it comes to assembly versions. Since the data reader assemblies (and the dependencies for those assemblies) are isolated in their own assembly load context, there is less concern about conflicting versions.

Con: Steeper Learning Curve
Con: Does Not Work with .NET Standard (but won't matter soon)
Pro: Minimal Risk of Conflicting Assemblies

We can use these to weigh the risks for our particular solution.

If I am building something for the outside world, then I would definitely lean toward this solution (custom assembly load context). The assembly safety is worth the extra learning.

If I am building an internal application, then I may opt for the other solution since it is easier to mitigate the risk of conflicting assemblies.

If I am migrating internal applications from .NET Framework to .NET Core, I might use the other solution so that I can maintain one set of .NET Standard libraries until the migration is complete.

To summarize:
• For greenfield applications, I am likely to use the custom assembly load context that we see here.
• For migrating applications, I am likely to use the default assembly load context that we see in the other solution.
• Once migration is complete, I would look at how easy/difficult it would be to move to a custom assembly load context to mitigate issues in the future.

There may even be a hybrid solution out there to be explored. Things are always interesting in programming.

Wrap Up
Using a custom assembly load context is something I didn't think I would need to do. I am a bit sad to see that doing things the recommended way requires quite a bit more learning. Reflection and dynamic loading are a lot to handle on their own; when we have to do our own assembly resolution, things get a bit harder to approach.

There is one thing I need to point out. Both of these solutions currently have a problem running from the command line (with "dotnet run") or with the VS Code debugger. It looks like a bug in the command-line tools. But I need to do a bit more research on that (and my next article will describe the behavior that I've found).

Keep exploring, and feel free to leave any comments or questions you may have.

Happy Coding!