Monday, January 24, 2011

Using Dynamic Data with EF Code First and NuGet

Note: this post is a bit outdated. Check out this other post for more up-to-date information on this topic.

Dynamic Data works out of the box with Entity Framework, but it takes a small trick to get it working with the latest EF Code First bits (known as CTP5).

Here is a quick walkthrough of what you need to do.

As a first step, create a new ASP.NET Dynamic Data Entities Web Application. Then, let’s use NuGet to add EF Code First to your project (I never miss a chance to pitch my new product!). We’ll use it with SQL Compact, and also bring in a sample to get started.

Right click on References and choose ‘Add Library Package Reference’ to bring in the NuGet dialog. Go to the Online tab and type ‘efc’ (for EFCodeFirst) in the search box. Then install the EFCodeFirst.SqlServerCompact and EFCodeFirst.Sample packages:


Now we need to register our context with Dynamic Data, which is the part that requires special handling. The reason it doesn’t work the ‘usual’ way is that when using Code First, your context extends DbContext instead of ObjectContext, and Dynamic Data doesn’t know about DbContext (as it didn’t exist at the time).

I will show you two different approaches. The first is simpler but doesn’t work quite as well. The second works better but requires using a new library.

Approach #1: dig the ObjectContext out of the DbContext

The workaround is quite simple. In your RegisterRoutes method in global.asax, just add the following code (you’ll need to import System.Data.Entity.Infrastructure and the namespace where your context lives):

public static void RegisterRoutes(RouteCollection routes) {
    DefaultModel.RegisterContext(() => {
        // Dig the ObjectContext out of our DbContext-based context
        return ((IObjectContextAdapter)new BlogContext()).ObjectContext;
    }, new ContextConfiguration() { ScaffoldAllTables = true });
}

What this does differently is provide a lambda that digs the ObjectContext out of your DbContext, instead of just passing the context type directly.

And that’s it, your app is ready to run!


One small glitch you’ll notice is that you get this EdmMetadatas entry in the list. This is a table that EF creates in the database to keep track of schema versions, but since we told Dynamic Data to Scaffold All Tables, it shows up. You can get rid of it by turning off ScaffoldAllTables, and adding a [ScaffoldTable(true)] attribute to the entity classes that you do want to see in there.
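For instance, assuming the standard Dynamic Data global.asax (where DefaultModel is defined by the project template) and an illustrative Post entity, that might look like this:

using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure;

// In RegisterRoutes: register with blanket scaffolding turned off...
DefaultModel.RegisterContext(() => {
    return ((IObjectContextAdapter)new BlogContext()).ObjectContext;
}, new ContextConfiguration() { ScaffoldAllTables = false });

// ...and opt in each entity class that you do want to see.
[ScaffoldTable(true)]
public class Post {
    public int Id { get; set; }
    public string Title { get; set; }
}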

Another issue is that this approach doesn’t work when you need to register multiple models, due to the way the default provider uses the ObjectContext type as a key. Since we don’t actually extend ObjectContext, all contexts end up claiming the same key.

Approach #2: use the DynamicData.EFCodeFirstProvider library

This approach is simple to use; it just requires getting a library with a custom provider. If you don’t already have NuGet, get it from here.

Then install the DynamicData.EFCodeFirstProvider package in your project:

PM> Install-Package DynamicData.EFCodeFirstProvider
'EFCodeFirst 0.8' already installed.
Successfully installed 'DynamicData.EFCodeFirstProvider 0.1.0.0'.
WebApplicationDDEFCodeFirst already has a reference to 'EFCodeFirst 0.8'.
Successfully added 'DynamicData.EFCodeFirstProvider 0.1.0.0' to WebApplicationDDEFCodeFirst.

After that, this is what you would write to register the context in your global.asax:

DefaultModel.RegisterContext(
   new EFCodeFirstDataModelProvider(() => new BlogContext()),
   new ContextConfiguration() { ScaffoldAllTables = true });

And that’s it! This approach allows registering multiple contexts, and also fixes the issue mentioned above where EdmMetadatas shows up in the table list.

Friday, January 21, 2011

NuGet.exe is now self-updatable

Yesterday, I blogged about how the NuGet command line tool can now be used to bring down packages without using VS.

Another cool new trick that it just gained is the ability to update itself. What that means is that after you get the tool on your machine (e.g. get the latest from here), keeping it up to date becomes super easy.

I’ll demonstrate how it works by example. First, let’s run nuget.exe with no params just to see what version we have:

D:\>nuget
NuGet Version: 1.1.2120.136
usage: NuGet <command> [args] [options]
Type 'NuGet help <command>' for help on a specific command.
etc...

We’re running 1.1.2120.136. Now let’s check for updates:

D:\>nuget update
Checking for updates from http://go.microsoft.com/fwlink/?LinkID=206669.
Currently running NuGet.exe v1.1.2120.136.
Updating NuGet.exe to 1.1.2121.140.
Update successful.

And now let’s make sure we’re running the new one:

D:\>nuget
NuGet Version: 1.1.2121.140
usage: NuGet <command> [args] [options]
Type 'NuGet help <command>' for help on a specific command.
etc...

And just like that, we’re now running the newer build!

How is the update performed?

NuGet being a package manager, it’s pretty natural for it to be able to do that: NuGet.exe is itself a package in its own feed! The package is named NuGet.CommandLine.

To perform the in-place update, nuget.exe simply renames itself to nuget.exe.old, and downloads the new one as nuget.exe. The old file can then be deleted, or if for whatever reason you’re not happy with the newer build, you can simply delete it and rename nuget.exe.old back into nuget.exe.
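To illustrate, here is a minimal sketch of that rename trick (this is not NuGet’s actual code, and the download URL is made up):

using System;
using System.IO;
using System.Net;
using System.Reflection;

class SelfUpdateSketch
{
    static void Main()
    {
        string exePath = Assembly.GetExecutingAssembly().Location;
        string oldPath = exePath + ".old";

        // Windows won't let a running exe be overwritten, but it can be renamed.
        if (File.Exists(oldPath))
            File.Delete(oldPath);        // clean up any previous update
        File.Move(exePath, oldPath);     // the running process keeps executing from the renamed file

        // Download the new build to the original path (illustrative URL).
        using (var client = new WebClient())
        {
            client.DownloadFile("http://example.com/nuget.exe", exePath);
        }

        Console.WriteLine("Update successful.");
    }
}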

What about updates to the NuGet Visual Studio add-in?

Just a final note in case you’re wondering why update is done this way for nuget.exe, but not for the NuGet VS integration. Since the VS tooling is a standard extension, it gets an update story ‘for free’ via the VS Extension Manager. In VS, just go into Tools / Extension Manager and go to the Updates tab, which will tell you if there are updates available to any of the extensions you have installed.

Thursday, January 20, 2011

Installing NuGet packages directly from the command line

Most of the coverage around NuGet revolves around its clean integration with Visual Studio, which makes adding references to packages as easy as adding references to local assemblies. While this is indeed a key scenario, it is important to note that the core of NuGet is completely decoupled from Visual Studio, and was designed with that goal from day 1.

If we look back at the early days of NuGet, it was in many ways inspired by the ‘Nu’ project (whose members have since joined NuGet). What Nu had was a solid command-line-driven experience for bringing .NET bits down to your machine. In their case, it was based on Ruby Gems, but that is an implementation detail. Take a look at Rob Reynolds’s original screencast to see what the Nu experience was about.

While we had been planning all along to provide the same experience with NuGet (in addition to the VS experience, of course), it had somewhat fallen off the radar and just had not been done. This was unfortunate, because we already had all the plumbing to make it happen; all it took was about 10 lines of code to expose it!

So I’m happy to say that we have now filled this little hole by implementing a new ‘install’ command in our NuGet.exe command line tool. Using it couldn’t be any easier, and I’ll walk you through an example.

Where do I get NuGet.exe?

You first need to get NuGet.exe. This is the same tool that package authors have been using to create packages and upload them to the http://nuget.org gallery.

The easiest way to get it is to download it from CodePlex.

You can also obtain it via NuGet itself by installing the package named NuGet.CommandLine (using Visual Studio).
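From the Package Manager Console, that looks like this:

PM> Install-Package NuGet.CommandLine

This drops nuget.exe under the package’s tools folder in your packages directory.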

How do I run it?

The best way to demonstrate it is to just show a sample session.

D:\>md \Test

D:\>cd \Test

D:\Test>nuget list nhi
FluentNHibernate 1.1.0.694
FluentNHibernate 1.1.1.694
NHibernate 2.1.2.4000
NHibernate 3.0.0.2001
NHibernate 3.0.0.3001
NHibernate 3.0.0.4000
NHibernate.Linq 1.0
NHWebConsole 0.2
SolrNet.NHibernate 0.3.0

D:\Test>nuget install NHibernate
'Iesi.Collections (≥ 1.0.1)' not installed. Attempting to retrieve dependency from source...
Done.
'Antlr (≥ 3.1.3.42154)' not installed. Attempting to retrieve dependency from source...
Done.
'Castle.Core (≥ 2.5.1)' not installed. Attempting to retrieve dependency from source...
Done.
Successfully installed 'Iesi.Collections 1.0.1'.
Successfully installed 'Antlr 3.1.3.42154'.
Successfully installed 'Castle.Core 2.5.2'.
Successfully installed 'NHibernate 3.0.0.4000'.

D:\Test>tree
Folder PATH listing
Volume serial number is 26FF-2C8A
D:.
├───Antlr.3.1.3.42154
│   └───lib
├───Castle.Core.2.5.2
│   └───lib
│       ├───NET35
│       ├───NET40ClientProfile
│       ├───SL3
│       └───SL4
├───Iesi.Collections.1.0.1
│   └───lib
└───NHibernate.3.0.0.4000
    └───lib

D:\Test>dir Antlr.3.1.3.42154\lib
Volume in drive D has no label.
Volume Serial Number is 26FF-2C8A

Directory of D:\Test\Antlr.3.1.3.42154\lib

01/20/2011  05:06 PM           117,760 Antlr3.Runtime.dll

Why would you want to use this instead of the Visual Studio integration?

For most users, the Visual Studio integration will be the right choice. But suppose you want to work much more ‘manually’ and not deal with VS or even with a .csproj file, e.g. all you want is to bring down nhibernate.dll so you can write some code against it and compile it manually using ‘csc /r:nhibernate.dll MyCode.cs’.

In this scenario, you just want NuGet to download the assemblies for you and leave the rest to you. It still saves you a lot of time by letting you easily download the bits and all their dependencies, but it doesn’t force you into a development model that may not be what you want.
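For example, continuing the session above, compiling against the NHibernate assembly that was just downloaded might look something like this (the exact path under lib depends on the package layout):

D:\Test>csc /r:NHibernate.3.0.0.4000\lib\NHibernate.dll MyCode.cs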

So I don’t think it’s a feature that the majority of users will use, but it is important to have it for those who need it.

Tuesday, January 11, 2011

Introducing the NuGet gallery

Back in December, I blogged about how poor the NuGet package submission process was. You had to clone a HUGE repository that contained all the other packages, add your package files to it, and submit a pull request. It was a process we had meant to last a couple of weeks, and it ended up lasting a few months, way overstaying its welcome.

The good news is that that process is now obsolete! Instead, we now have a brand new gallery site that lets authors publish packages very easily.

And the site is… drum roll… http://nuget.org!

Who is this site for?

To set expectations, please note that this site is not feature complete yet, and is still rough in many ways. Eventually, it will be a place for both package authors and consumers, but in the short term, it’s primarily useful to package authors.

So basically, this site provides a complete (and much better) replacement for the old package submission process, and at this point that is its main focus.

So if you are using NuGet from Visual Studio to install packages, you can probably ignore this site for now. It will become interesting later, but it isn’t yet. You’re certainly welcome to browse around it, but there is no point in creating an account now unless you have packages to submit.

Getting started with the site

Here is what you need to do if you’re a package author wanting to submit a package.

  • Go to http://nuget.org, click Sign In, and Register Now
  • After registering, you’ll get an email with a link you need to click (check your junk mail, it’s probably there!).
  • An admin then needs to approve your account (see below for the reason behind that)
  • Once you’re approved, you can just go to the Contribute tab and click Add New Package to upload your .nupkg files. They will be live on the feed instantly, though it may take a couple of minutes for them to show up on the site itself.

If you submitted packages with the old process

If you previously submitted packages using the old process, you will need to be given ownership of those packages in the new gallery before you can upload new versions.

Just ping @davidebbo on twitter and I can take care of that.

Why the approval process?

Eventually, there won’t be any approval process. The reason we chose to have one initially is that the site is still very young and we want to take it step by step. We will generally approve anyone that wants to get packages up there.

If you registered and don’t see your account getting approved, please just ping me or Phil Haack on twitter (@davidebbo, @haacked).

What, no Live ID or OpenID?

I know, this is really something we should be supporting, and we will later.  We had to make that cut in order to get the gallery out sooner, as the current situation with package submission was simply not sustainable.

Built on Orchard

The NuGet gallery was built using Orchard, which itself is still very young (1.0 release is around the corner). This could be one of the first real sites built using it, so it is both a great learning experience for the Orchard team, and a good showcase of the technology.

There were certainly a number of Orchard issues during development, but since the team is just down the hall from us, they took good care of things!

Open Source

Just like NuGet itself, the gallery site was built as open source. Most of the development was done by NimblePros.

If you want to look through the sources, there are two CodePlex projects, one for the Orchard gallery site and one for the backend.

The gallery is at http://orchardgallery.codeplex.com, and the backend is at http://galleryserver.codeplex.com/.

How to report issues and give feedback

If you run into issues or want to give feedback about the site, feel free to start a discussion or file a bug on http://orchardgallery.codeplex.com.

Wednesday, January 5, 2011

NuGet versioning Part 3: unification via binding redirects

This is part 3 of the series on NuGet versioning.

  1. NuGet versioning Part 1: taking on DLL Hell
  2. NuGet versioning Part 2: the core algorithm
  3. NuGet versioning Part 3: unification via binding redirects

In part 1 & 2, we described the DLL hell problem, as well as the algorithm NuGet uses to achieve the best possible results.

Let’s now look at how we can achieve runtime unification of assemblies using binding redirects.

Strong names and binding redirects

Another important part of the story that we haven’t yet discussed is assembly strong naming. When an assembly has a strong name, the binding to that assembly becomes very strict. That is, if assembly C depends on assembly X 2.0, it’s not going to be happy with any other version of X, even if it’s X 2.0.0.1.

Note: I’m now talking about assembly versions rather than package versions, but let’s assume that they match, as will usually be the case.

Going back to our earlier sample in Part 2, what this means is that the package-level unification that we performed would have ended up breaking the app!

Recall that we had:

  • A depends on X 1.1 (as in ‘>= 1.1’)
  • B depends on X 1.2
  • C depends on X 2.0

And then (with the NuGet 1.1 twist), we ended up installing X 2.0.1.5, which doesn’t match what A, B or C are looking for! If you then try to use A at runtime, and it in turn tries to use X, you’ll get a nasty error that looks like:

Could not load file or assembly 'X, Version=1.1.0.0, Culture=neutral, PublicKeyToken=032d34d3e998f237' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.

While this looks scary, it’s really just saying things the way they are: A was looking for X 1.1 and X 1.1 was nowhere to be found (since we have X 2.0.1.5).

The answer to this problem is to use binding redirects, which provide a simple and effective way of telling the runtime to bind to a different version than what an assembly was built against. e.g. in this case, you would add this to your web.config (or app.config):

<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="X"
          publicKeyToken="032d34d3e998f237" culture="neutral" />
      <bindingRedirect
          oldVersion="0.0.0.0-2.0.1.5" newVersion="2.0.1.5" />
    </dependentAssembly>
  </assemblyBinding>
</runtime>

This is basically telling the runtime: “hey, if anyone asks you to load any version of X that’s less than 2.0.1.5, please go ahead and load 2.0.1.5 instead”.

Once you do that, our A, B and C friends will all happily work against X 2.0.1.5. We have now achieved assembly-level unification to go along with our earlier package-level unification.

Need some help writing those binding redirects?

You may be thinking that writing those binding redirects by hand is not as simple as I make it sound, and that it would be nice if you didn’t have to worry about the details.

If so, you’re in luck because NuGet comes with an Add-BindingRedirect command which will generate all the binding redirects for you!

It doesn’t take any parameters. All you have to do is run it, and then check out your config file to see what was added (if you’re curious). It’s pretty smart about only adding things that are needed, so in simple situations that don’t call for it, it will not make any changes.
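For example, from the Package Manager Console (output omitted, as it depends on your project):

PM> Add-BindingRedirect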

Note: make sure that you build your app before running this command. We’ll try to not make this required in the future, but for now, please remember to build first or it won’t work correctly.

We are also considering ways to automate things even further, such that you don’t need to run any commands at all and the binding redirects get managed for you as you add packages. Not sure when we’ll get there, but it should be feasible. In the meantime, just running Add-BindingRedirect is still a big help over hand-writing those sections.

Alternative to binding redirects

OpenWrap uses a different approach to solve that same issue: it modifies all the assemblies at install time to strip the strong name from them, hence allowing them to work with each other regardless of versions. Though it is a viable technique, we chose not to do this for a number of reasons.

The first is that it doesn’t feel right to take an assembly that has been strongly named and signed by its author and turn it into something with no trace of the original signature. It may also have implications in some environments that only allow signed assemblies to run.

The second reason is that it violates one of our key design goals for NuGet, which is to make it easy to do things that you could otherwise have done yourself without NuGet. i.e. once NuGet has done its thing, it stays out of the way of your app, and you very much have a ‘normal’ app, not much distinguishable from what you would have if you had put it together without NuGet. But with the rewriting approach, you would end up with something that is quite different, making the installation process feel a little too ‘magical’ and not transparent enough.

The third reason may be the most important one: the rewriting approach locks you into using OW for everything. e.g. suppose that assembly A uses assembly X, and that you get both via OW. Now suppose that you get some other assembly B through some other means (because there is no package for it yet), and that assembly also references X. You drop B.dll in bin and expect it to just work. But if X had been stripped of its strong name, B.dll would be broken (as it would fail to load the strong-named X.dll). On the other hand, with the binding redirect approach, everything just works naturally, since no assemblies have been modified.

In the end, it comes down to NuGet and OW having rather different design goals, even though at a high level they share some similarities.

Update: it turns out that OW does not *yet* do this but it is planned for the next version. See Sebastien Lambla's new post on the topic for details.

Conclusion

This completes this 3-part series on NuGet versioning. While there are still many areas that NuGet has not yet tackled, it has a solid core approach to versioning and dependency management that it can build on.

Tuesday, January 4, 2011

NuGet versioning Part 2: the core algorithm

This is part 2 of the series on NuGet versioning.

  1. NuGet versioning Part 1: taking on DLL Hell
  2. NuGet versioning Part 2: the core algorithm
  3. NuGet versioning Part 3: unification via binding redirects

In part 1, we described the two sides of DLL hell, as well as how assembly Unification is superior to Side by Side.

Let’s now dive into the algorithm that NuGet uses to deal with versioning.

Package vs. Assembly

It should be noted that at the top level, NuGet deals with Packages rather than assemblies. Those packages in turn can bring in zero or more assemblies. The assembly versions may or may not match the package version, though in most cases they do.

The following discussion on versioning is referring primarily to Package versions, though the reasoning applies equally well to DLL versions (and essentially falls out of it).

How NuGet specifies dependency versions

The NuGet syntax for specifying package dependency versions borrows from the Maven specification, which itself borrows from mathematical intervals. e.g. when component A depends on component X, it can specify the version of X that it needs in two different ways (in the .nuspec file):

  1. A range, which can look like [1.0,3.0), meaning 1.0 or greater, but strictly less than 3.0 (so up to 2.*). See the spec above for more examples.
  2. A simple version string, like “1.0”: this means “1.0 or greater”
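As a purely hypothetical illustration, here is what both forms might look like in a .nuspec dependencies section (the package ids are made up):

<dependencies>
  <!-- Range: 1.0 <= version < 3.0 -->
  <dependency id="X" version="[1.0,3.0)" />
  <!-- Simple string: version >= 1.0 -->
  <dependency id="Y" version="1.0" />
</dependencies>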

Your first reaction may be that #2 is counter-intuitive, and should instead mean “exactly 1.0”. The reason it means “greater or equal” is that, as things turn out, this is what should be used most of the time in order to get the best behavior, i.e. in order to avoid both extremes of DLL hell mentioned in Part 1. The reason will soon become clear.

The version selection algorithm

Having a version range is only half of the puzzle. The other half is to be able to pick the best version among all the candidates that are available.

Let’s look at a simple example to illustrate this:

  • A depends on X 1.1 (meaning ‘>= 1.1’ as discussed above)
  • B depends on X 1.2
  • C depends on X 2.0
  • X has versions 1.0, 1.1, 1.2, 2.0, 3.0 and 4.0 available

The version resolution used by NuGet is to always pick the lowest version of a dependency that fits in the range (a small exception to this is mentioned further down). So let’s see what will happen in various scenarios:

  • If you just install A, you’ll get X 1.1
  • If you just install B, you’ll get X 1.2
  • If you just install C, you’ll get X 2.0
  • If you first install A, then B then C
    • You’ll initially get X 1.1 when you install A
    • X will be updated to 1.2 when you install B
    • X will be updated to 2.0 when you install C

The crucial point here is that even though A and B state that they can use any version of X, they are not getting forced into using anything higher than necessary.

It may very well be that A does not work with much higher versions of X like 3.0 and 4.0, and in that sense you can say that the specified range is ‘wrong’. But that is simply not relevant unless a different component in the same app forces you onto those higher versions.

If we had instead specified exact versions, we would not have allowed anything to work together, even though the components may very well be backward compatible up to a point. That is one of the extremes of DLL hell discussed in Part 1: inability to find a version that everyone can work with.

Likewise, if the algorithm had picked the highest version in range, we would have ended up with X 4.0 in all scenarios. That is the other extreme of DLL hell: a newly released component breaks scenarios that were working before.

The simple algorithm NuGet uses does a great job of walking the fine line between those two extremes, always doing the safest thing that it can while not artificially disallowing scenarios. As an aside, that is essentially the same as what Maven does (in the Java world), and this has worked well for them.
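To make the rule concrete, here is a minimal sketch of ‘lowest version that fits’, assuming System.Version values and an optional exclusive upper bound (an illustration only, not NuGet’s actual code):

using System;
using System.Collections.Generic;
using System.Linq;

static class VersionPicker
{
    // Pick the lowest available version that satisfies [min, max).
    // A null max means 'no upper bound', i.e. the common '>= min' case.
    public static Version Resolve(IEnumerable<Version> available, Version min, Version max = null)
    {
        return available
            .Where(v => v >= min && (max == null || v < max))
            .OrderBy(v => v)
            .FirstOrDefault();
    }
}

With the versions listed above, Resolve(available, new Version("1.1")) returns 1.1, not 4.0.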

When an upper bound makes sense

In most cases, simply specifying a minimum version is the way to go, as illustrated above. This does not imply that upper bounds shouldn’t be specified in some cases.

In fact, an upper bound should be specified whenever a component is known not to work past a certain version of a dependency.

e.g. in our example, suppose that A is known not to work with X 2.0 or greater. It would then be fine to specify the range as [1.1,2.0). And in the scenario above, when you try to install C after installing A, you’d get a failure to install. i.e. A and C simply cannot be used in the same app. Clearly, this is a bit better than allowing the install to happen and then having things break at runtime.

But the key thing here is that the incompatibility has to be known before such a range is used. e.g. if at the time A is written, X 2.0 doesn’t even exist, it would be wrong to set a range of [1.1,2.0).

I know, it may feel like the right defensive move to disallow running against something that doesn’t yet exist, but doing so creates many more issues than it solves in the long run.

The rule of thumb here is that a dependency version is “innocent until proven guilty”, and not the other way around.

Backward compatibility is in the eye of the consumer

A subtle yet very important point is that simply knowing that version 2.0 of X has some breaking changes over version 1.2 doesn’t mean all that much.

e.g. you may be tempted to say that if B uses X 1.2 and X 2.0 has some breaking changes over 1.2, then B should never use 2.0. But in reality, doing so is too conservative, and causes the second form of DLL hell (inability to use some components together, and a general lack of flexibility).

The more important question to ask is whether X 2.0 has breaking changes that affect B. B may very well be using a small subset of the APIs and be unaffected by the breaking changes. So in this situation, you should not jump to the conclusion that you need a [1.2,2.0) range.

Again, “innocent until proven guilty”. Or maybe I should say “give (DLL) peace a chance”, or “if it ain’t broke, don’t prevent it”. Or maybe I should stop there ;)

Credits to Louis DeJardin on convincing me of this key point.

NuGet 1.1 twist

Earlier, I mentioned that NuGet’s algorithm was to “always pick the lowest version of a dependency that fits in the range”. That is true of NuGet 1.0, but in 1.1 and later we added a small twist: always move up to the highest build/revision. Confused? An example will make it clear.

Let’s take our example above, but now say that X’s available versions are 1.0, 1.1, 1.2, 2.0, 2.0.0.1, 2.0.1.0, 2.0.1.5, 3.0, 3.0.1 and 4.0.

When installing A, B and C, with NuGet 1.0 we would end up with X 2.0. But with 1.1, we’d get version 2.0.1.5. The reason this is important is that the last two numbers are typically non-breaking bug fixes, and the assumption is that you are always better off picking them over an older build with the same Major/Minor version (i.e. the same first two numbers).
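Again purely as an illustration (not the real implementation), the earlier sketch would change along these lines to add the twist:

// Pick the lowest acceptable Major.Minor, then the highest
// Build/Revision available within that Major.Minor.
public static Version ResolveWithTwist(IEnumerable<Version> available, Version min, Version max = null)
{
    var inRange = available.Where(v => v >= min && (max == null || v < max)).ToList();
    var lowest = inRange.OrderBy(v => v).FirstOrDefault();
    if (lowest == null)
        return null;
    return inRange
        .Where(v => v.Major == lowest.Major && v.Minor == lowest.Minor)
        .Max();
}

With C’s ‘>= 2.0’ constraint and the versions above, this returns 2.0.1.5 rather than 2.0.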

A few words on Semantic Versioning

Semantic Versioning (SemVer) describes a way for authors to define versions in a way that they have a consistent semantic. In a nutshell, semantic versions look like X.Y.Z (Major.Minor.Patch), such that:

  • A change in X is a breaking change
  • A change in Y adds functionality but is non-breaking
  • A change in Z represents a bug fix

This versioning scheme is not widely adopted today, but I think it would be beneficial if component authors (and NuGet package authors) followed it more.

Currently, the only case where NuGet makes some use of SemVer is with the “1.1 twist” described above, which causes it to move up to a slightly newer version that has ‘bug fixes’.

Technically, if all components actually honored SemVer, we could always safely move from 1.0 to 1.1, as it would be guaranteed to be a non-breaking upgrade. But in practice, this would not work well today given how a change in Minor version (Y) does often contain breaking changes.

It is also worth noting that the NuGet algorithm described above makes this mostly unnecessary, because there is no reason to use 1.1 if the component asks for 1.0. Unless of course some other component needs 1.1, in which case we would use it.

In part 3, we will discuss how NuGet makes use of CLR binding redirects to achieve assembly unification.

Monday, January 3, 2011

NuGet versioning Part 1: taking on DLL Hell

NuGet makes it easier than ever to get all kinds of libraries into your .NET apps. While that is its most obvious benefit, NuGet also helps tremendously with managing dependencies and versioning, which can normally be a complicated process.

In this multipart series, I will cover the following topics:

  1. NuGet versioning Part 1: taking on DLL Hell
  2. NuGet versioning Part 2: the core algorithm
  3. NuGet versioning Part 3: unification via binding redirects

Before going too deep into the NuGet behavior, let’s step back and look at the problem we're tackling: the infamous DLL hell.

The two extremes of DLL hell

I have seen the term ‘DLL hell’ used to describe situations happening at both ends of the spectrum.

The more common usage refers to what occurs when the versioning policy is too loose (or non-existent). This is the classic old 16-bit Windows issue where a DLL gets updated system wide, and everything on the system that was using the old version is now using the newer one, sometimes causing breaks due to incompatibilities.

At the other end of the spectrum, there is what happens when the versioning policy is too tight, as is often the case with the GAC. Here, you can lose the ability to use components A and B at the same time, because they each want to use a different version of component X, and the system won’t let those be unified.

So in one case we get in trouble because apps get broken, while in the other we get in trouble because apps can’t use the components that they need.

BIN deployment limits the scope of the issue

To begin with, it’s worth emphasizing that NuGet never installs assemblies machine wide. In particular, it will never put anything into your GAC. Instead, all assemblies are bin deployed, which means they only affect the current app. This reduces the scope of the issue by moving the concern from a potential machine-wide DLL hell to a potential application-wide DLL hell. So even if things are not done correctly, bad things can still happen within an app, but you’ll never mess up a different app.

So NuGet’s focus is on doing the right thing within this one app.

Unification versus Side by Side: a clear choice

When dealing with situations where two different versions of a DLL appear to be needed, there are two possible approaches.

The first approach is unification, which consists of picking one version of that DLL and using it for the entire app. This can require the use of Binding Redirects, as we will discuss later in the series.

The second approach is to allow both versions of the DLL to run Side by Side. e.g. A could be using X v1.0 while at the same time B could be using X v1.1.

NuGet always uses unification, as Side by Side is evil for several reasons. First, using Side by Side is difficult for a practical reason: the bin folder can only contain one file named X.dll. But even if you get around this (e.g. by going crazy with AssemblyResolve events), it is likely to get you in trouble because many assemblies don’t expect this. e.g. an error logging component expects to be the only one there, and would end up fighting with its evil twin if they ever came to run in the same app domain at the same time.

So let’s settle this one: two versions of an assembly should never be loaded in the same app. I’m not saying that there aren’t any situations where it may legitimately arise, but for the most part, it’s best not to go there.

In part 2, we will discuss how NuGet deals with package and assembly versioning.