AutoMapper: Missing type map configuration

While trying out AutoMapper I stumbled on this generic error:

Message: AutoMapper.AutoMapperMappingException : Missing type map configuration or unsupported mapping.

Below is the initial Stack Overflow question I wrote, after struggling for at least 25 minutes with this problem. The solution, however, was shamefully simple: if you call Mapper.Initialize twice, the second call overwrites the first.

Full Description

So why am I writing an entire post about this? Simple: to ingrain this solution into my brain, may I never make the same mistake again.

Basically, I was trying to understand a more specific version of this generic question on AutoMapperMappingException, getting the same kind of error message:

Message: AutoMapper.AutoMapperMappingException : Missing type map configuration or unsupported mapping.

Here’s a way to repro my scenario:

  1. Using VS2017, create a new “xUnit Test Project (.NET Core)” project (gets xUnit 2.2 for me, targets .NETCoreApp 1.1)
  2. Run `Install-Package AutoMapper -Version 6.0.2`
  3. Add the test code (a reconstruction is sketched below this list)
  4. Build
  4. Build
  5. Run all tests
  • Expected result: green test.
  • Actual result: error message:

    Message: AutoMapper.AutoMapperMappingException : Missing type map configuration or unsupported mapping.

    Mapping types:
    FooEntity -> FooViewModel
    XUnitTestProject3.FooEntity -> XUnitTestProject3.FooViewModel
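
The question's original code isn't reproduced here, but a minimal sketch of the kind of test that triggers this looks roughly as follows. FooEntity, FooViewModel, and the namespace come from the error message above; BarEntity, BarViewModel, the members, and the test name are made up for illustration:

```csharp
using AutoMapper;
using Xunit;

namespace XUnitTestProject3
{
    public class FooEntity { public string Name { get; set; } }
    public class FooViewModel { public string Name { get; set; } }

    public class BarEntity { public string Name { get; set; } }
    public class BarViewModel { public string Name { get; set; } }

    public class MappingTests
    {
        [Fact]
        public void Can_map_foo_entity_to_viewmodel()
        {
            // First initialization: registers the Foo map.
            Mapper.Initialize(cfg => cfg.CreateMap<FooEntity, FooViewModel>());

            // Second initialization: silently replaces the configuration above.
            Mapper.Initialize(cfg =>
            {
                // cfg.CreateMap<FooEntity, FooViewModel>(); // Culprit!
                cfg.CreateMap<BarEntity, BarViewModel>();
            });

            var result = Mapper.Map<FooViewModel>(new FooEntity { Name = "Foo" });

            Assert.Equal("Foo", result.Name);
        }
    }
}
```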

If I uncomment the line marked as “culprit” the test turns green. I fail to see why.

I also placed a Mapper.Configuration.AssertConfigurationIsValid() call right before the Map call, but that runs without error.

As far as I can tell, the other question (specifically its top answer) talks about forgetting the initialization, but that initialization is explicitly there in my code. I’ve also looked through the other answers, but none of them helped me.

The top answer to another question about this same problem tells me to add ReverseMap(), but that’s not applicable to my scenario.

Solution

Only after writing the entire above question on Stack Overflow, specifically while perfecting the minimal repro, did I realize what was causing the error: the line marked as “Culprit!”. Then, buried deep in Google’s search results (okay, okay: on page 2; but who looks on page 2 of search results?!) I found this answer that has the solution. Multiple initializations should be done like this:
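
The code from that answer isn't repeated here; the gist is to register everything through one single Mapper.Initialize call, for example via Profiles. A sketch of that idea (FooProfile, BarProfile, and MappingConfig are made-up names; the Foo/Bar types are those from the earlier sketch):

```csharp
using AutoMapper;

public class FooProfile : Profile
{
    public FooProfile()
    {
        CreateMap<FooEntity, FooViewModel>();
    }
}

public class BarProfile : Profile
{
    public BarProfile()
    {
        CreateMap<BarEntity, BarViewModel>();
    }
}

public static class MappingConfig
{
    public static void Configure()
    {
        // One single Initialize call registers all profiles;
        // a second Initialize call would throw this configuration away again.
        Mapper.Initialize(cfg =>
        {
            cfg.AddProfile<FooProfile>();
            cfg.AddProfile<BarProfile>();
        });
    }
}
```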

I guess that’s what I get for disregarding the advice to use Profiles for proper AutoMapper configuration.

Entity Framework: Cascading Delete of Optional Related Entity

After several years of NHibernate, and a couple of years of Dapper and NoSQL, I’m now working on a project that uses Entity Framework as its ORM. It’s a mature ORM at this point, but it does give me a headache, as I’m struggling to find ways to do certain things. I’ll admit that I’m trying to dive in without sitting down and doing a few hours of learning first; for sure I’ll be on Pluralsight some time soon.

Fair warning: what comes next is a highly specific, rather technical, and possibly stupidly uninformed dump of a problem I ran into. Feel free to skip this one: no hard feelings, and I’ll see you on my next post!

The main problem: I’ve got a repro, but it’s extremely similar to various other questions on Stack Overflow. Except… my code already contains the code from the accepted (and often highly upvoted) answers, yet it still doesn’t work as advertised!

What I’m trying to do is make sure that EF will delete a “Child” property (i.e. the database row) automatically when its “Parent” is explicitly deleted via the DbContext. Note that the Child is optional in my scenario.

Here’s the repro. Create a new class library and install EntityFramework (I used 6.1.3) and NUnit (I used 3.6.1). Then drop these entities into one namespace:
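
The original classes aren't shown here, but for an optional one-to-one Parent/Child relationship they would look roughly like this (class and property names are assumptions based on the description):

```csharp
public class Parent
{
    public int Id { get; set; }

    // Optional: a Parent may or may not have a Child.
    public virtual Child Child { get; set; }
}

public class Child
{
    public int Id { get; set; }

    public virtual Parent Parent { get; set; }
}
```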

And then this DbContext:
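
Again a sketch rather than the original: a context pointing at the “(LocalDb)\cascades” instance, with the kind of fluent mapping the upvoted answers recommend (WillCascadeOnDelete(true) on the optional relationship):

```csharp
using System.Data.Entity;

public class TestDbContext : DbContext
{
    public TestDbContext()
        : base(@"Data Source=(LocalDb)\cascades;Initial Catalog=TestDb;Integrated Security=True")
    {
    }

    public DbSet<Parent> Parents { get; set; }
    public DbSet<Child> Children { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Parent is the principal, Child the optional dependent;
        // WillCascadeOnDelete(true) is what the accepted answers suggest.
        modelBuilder.Entity<Parent>()
            .HasOptional(p => p.Child)
            .WithOptionalPrincipal(c => c.Parent)
            .WillCascadeOnDelete(true);
    }
}
```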

And finally this TestFixture:
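
And a sketch of the test itself; the assertion messages mirror the failure output quoted below:

```csharp
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class CascadeTests
{
    [Test]
    public void Delete_will_cascade()
    {
        // Arrange: store a Parent together with its (optional) Child.
        using (var db = new TestDbContext())
        {
            db.Parents.Add(new Parent { Child = new Child() });
            db.SaveChanges();
        }

        // Act: delete only the Parent, from a fresh context.
        using (var db = new TestDbContext())
        {
            db.Parents.Remove(db.Parents.Single());
            db.SaveChanges();
        }

        // Assert: the Child row should be gone along with its Parent.
        using (var db = new TestDbContext())
        {
            Assert.That(db.Parents.Count(), Is.EqualTo(0), "Parent");
            Assert.That(db.Children.Count(), Is.EqualTo(0), "Child");
        }
    }
}
```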

Make sure the “TestDb” database is available on your “(LocalDb)\cascades” instance (or use another SQL Server instance and database). Then run the tests, and you’ll get:

Test Failed – Delete_will_cascade.
Message: “Child”.
Expected: 0, but was: 1.

All I want is for a “Child” to be deleted along with its “Parent” when that Parent is deleted. I know I can use an ON DELETE CASCADE in SQL, and I know I could manually remove the Child from the context, but I want EF to handle this automatically, damnit!

For sure I’ve missed the obvious solution. I was somewhat hoping it would come to me while writing this. But perhaps I just need a good night’s rest.



Pet project status report

This post assumes you’ve read my previous post on this project. It’s going to be a very short status report.

In the aforementioned post I tried to break through my analysis paralysis by listing all the things I had to think about (it helped!). Let me reiterate, update, and complete that list to once again gather my thoughts:

  1.  Project and Namespace structure. KISS, so I went with just a single project (plus one for tests) for now.
  2.  Folder structure. Following the lead of popular C# projects: just some files in the root and a src folder for the projects.
  3.  Initializing git. This was actually the biggest mental blockade. I just burst through it: screw TFS history, screw being “optimal”, just move. I copy-pasted all the code, cleaned everything up carefully, and initialized the git repo with an already decently sized project.
  4.  License. MIT.
  5.  GitHub setup. Let’s start simple. Project created under our organization’s GitHub account. Pushing straight there for now, using my own personal profile. Will think about working with forks and pull requests later.
  6. NuGet packaging. Haven’t started this yet.
  7. Re-including the open sourced bits in my closed source solution. Have postponed optimizing this. The not-so-optimal solution for now is that the projects are gone from TFS, and there’s a “lib” folder instead with compiled DLLs from the open source project.
  8.  Choosing a name. Chosen, but not ready to disclose yet, even though it’s very easy to sherlock this bit.
  9. .NET Core. We’ll cross that bridge when we get there.
  10.  What am I forgetting? Looking at some of the “top” C# GitHub projects (Restsharp, NodaTime, dapper-dot-net, AutoMapper) was extremely helpful. Note to self: in-depth code reviews of those projects will be extremely educational.
  11.  Minimum quality. I actually chose to make this project an exercise in being as “clean” as possible (within reason, though close to the edge of it). But one bullet at a time, so e.g. crafting a great readme is a sub-exercise left for later.
  12. Early feedback. Still on my list, but want to get to some kind of “alpha” stage before I send out review requests.
  13. Logo. Was a great excuse to get started with the recently released GitHub Projects feature.
  14.  XML Documentation. Probably way over the top, but a nice personal exercise, so worth it after all.
  15. CI. Have not started with this yet, but know I have to at some point.
  16. GitHub Wiki. Probably way over the top, but would be a nice personal exercise to create one.
  17. Domain name. Not sure how that plays together with the license, the fact that the repo was initialized under my organization’s account, or any trademark stuff. Will have to figure that out some time soon.
  18.  .NET Framework Versions. Related to the .NET Core bullet I guess, but slightly more important. I found that the popular repos I looked at for guidance have some kind of setup with projects duplicated several times over; not sure how that works. For now I’ll have to stick with an (unfortunately slightly older) version, 4.5.1, because that is the highest version I can use in the project that is dogfooding this one.

Okay, now I can stop that hurricane of thoughts, and get back to this pet project, and tick some more things off the list!

Extracting and Publishing closed bits as Open Source

There’s a project at work that’s got all the traits to serve as a great vehicle for dabbling some more with open source:

  • Small-but-not-too-small;
  • Self-contained;
  • Not too specific (e.g. to our domain);
  • Keeping the IP safe isn’t a concern since it’s just a small tool unrelated to the core business (i.e. we’ve got “the okay” to make it open source);
  • It might be of some use to someone else, and failing that it’s still fun to open source it;

While trying to start the process of extraction and open-sourcing, a million things seem to come up that need thinking about. I want to gather my thoughts (by writing this post) and untangle that mental mess, so that I can get to a clear plan of attack. Perhaps this post will even be of use to someone else, somewhere.

Context

Currently, the bits I want to make open source are only slightly coupled to the bits I want to keep closed. That is: everything is in one solution right now, but the split into projects already clearly demarcates what would be open and what would remain closed. To put this in a tabular overview (where “Foo” is the closed bit, and “Bar” is the open bit):

Current Situation (all Closed Source)

  /Foo.App
  /Foo.App.Tests
  /Foo.Core
  /Foo.Core.Tests
  /Foo.MyInstances
  /Foo.ConsoleWrapper
  /Foo.ServiceWrapper

Target Situation

  Open Source:

    /src/Bar.App ???
    /src/Bar.App.Tests ???
    /src/Bar.Core
    /src/Bar.Core.Tests
    /src/Bar.ConsoleWrapper
    /src/Bar.ServiceWrapper

  Closed Source:

    /packages/Bar ???
    /Foo.MyInstances
    /Foo.WindowsService ???
    /Foo.Console

There are some question marks in that overview. Elaborating on them a bit:

  • I’m not sure if the distinction between “Core” and “App” should remain a DLL/project split, as opposed to possibly just a namespace split.
  • I’m not sure how the Open Source solution will be included in the closed source app (via NuGet, or as external source or binaries, or…?).
  • I’m not sure if I can easily distribute a generic Windows service wrapper, or if I need to create it in the closed source solution after all.

In addition to these source-related questions, there are other things to think about as well. So next up:

What things to think about beforehand?

Let’s summarize all the things I can currently come up with that might be important in this process, in no particular order, merely numbered to be able to reference them easily:

  1. Project, Namespace, and DLL structure: what’s wise, what’s neat, what’s useful?
  2. Folder structure: how will the repository be structured? What’s future proof here?
  3. Initializing git: currently, the project history is in our corporate TFS. So not even sure if/how to keep the history intact, or if that’s even feasible.
  4. License: okay, that’s not too difficult, but have to choose one nonetheless.
  5. GitHub setup. What’s a good setup? Should I make the organization main owner of the repo, with a personal fork from which I do pull requests?
  6. NuGet packaging: how does this even work? As an application developer I had never had the need to learn how any of this works.
  7. Re-including the open sourced bits in my closed source solution: via NuGet, as external source, or as external binaries?
  8. Choosing a name. One that is not already in use for some other software, for which a domain with a “normal” extension is still available, whose name is not yet taken on GitHub, etc.
  9. .NET Core. From my RSS feeds I gather that it can be quite a fuss to make open source projects .NET Core “proof”, and it would be yet another thing to tackle while starting this whole thing off. Perhaps something for later? Or is that a grave mistake?
  10. What am I forgetting? I’ve been looking at some of the “top” C# GitHub projects (Restsharp, NodaTime, dapper-dot-net, AutoMapper) and their git history to see how they have evolved, to find out if I’m forgetting anything.
  11. Decent starting point, code-wise: I might be setting unreasonably (and unnecessarily) high standards for the initial commit, but still something to consider. What is the minimum quality I’m requiring before opening the source?
  12. Early feedback: it might be useful to get friends and colleagues to review the setup and get the first version(s) of the repository just right.
  13. Logo: yeah, this project needs the best. logo. ever!

In addition, there’s a practical point: once I split off bits to an open source project, I’ll effectively have two (source-control-wise unrelated) forks of the same code-base, until I’m ready to make my closed source solution depend on the open source bits again.

In Conclusion

After writing the above: phew! No wonder I was not getting anywhere: my mind was just wandering in circles through all of the above points. And I guess “Perfect is the enemy of Good”, or some such. Time to tick off some of the above items.

Nested Elastic Explorations – Part 2

Reusing a properly modelled domain for storing data in Elasticsearch does not work well out of the box. Let’s examine a problem scenario. Consider this mini-domain:

Mini Bieb Domain

This ties in with my last post, where I mentioned that loops are a pain when serializing to JSON. Here’s the loop, visualized:

Mini Bieb Domain Loop

The problem is that Newtonsoft.Json (used under the hood by Nest) will start serializing “The Greatest Book” and recurse through all its properties. In the end it’ll try to serialize “The Greatest Book” again as part of “Richard Roe”‘s AuthoredBooks property.

Breaking this serialization loop is actually pretty simple with Newtonsoft.Json, and for a while now you’ve been able to inject the appropriate serializer setting into Nest as well. Something like this:
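
A sketch of the idea. The Newtonsoft part (ReferenceLoopHandling.Ignore) is standard; the exact Nest hook for injecting the settings differs per version, so SetJsonSerializerSettingsModifier below is an assumption based on the NEST 1.x-era API:

```csharp
using System;
using Nest;
using Newtonsoft.Json;

// Tell Json.NET to silently skip reference loops instead of throwing.
// Note: the settings-modifier method name varies across NEST versions.
var connectionSettings = new ConnectionSettings(new Uri("http://localhost:9200"), "bieb")
    .SetJsonSerializerSettingsModifier(jsonSettings =>
        jsonSettings.ReferenceLoopHandling = ReferenceLoopHandling.Ignore);

var client = new ElasticClient(connectionSettings);
```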

Problem solved, right? Not so much. Here’s why. Suppose I use the ReferenceLoopHandling “fix” and load up the mini-domain with this integration test:

This will create a document in Elasticsearch of a whopping 71 KB / 1364 lines, see this example JSON file. Not so good.

The simple solution, which would do for now, would be to index only Book items along with all related people (authors, editors, translators), but not those people’s Books (AuthoredBooks, etc.). We somehow need to let Nest and Elasticsearch know that we want to stop recursion right there. The question is how to be explicit about how they should map my domain objects to documents. I see two courses of action I like:

  1. Declarative mapping, with Attributes. This would (to my taste) require separate DTO classes to represent the documents in Elasticsearch, and an explicit transformation between those DTOs and my Domain objects. (I wouldn’t like to litter my domain object classes with persistence-specific attributes; a rough sketch of such a DTO follows below this list.)
  2. Mapping by code. This would seemingly allow me to keep using domain object classes for persistence, having the “Mappings” in code as a strategy for the transformation in separate files. At this point though I’m unsure if this approach will “hold up” once you start adding more complex properties and logic to domain objects.
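
To illustrate option 1, here’s a hypothetical flattened document type for a “books” index (BookDocument and PersonSummary are made-up names; the point is simply that the summary type has no AuthoredBooks, so the recursion stops after one level):

```csharp
using System.Collections.Generic;

// Document stored in the "books" index.
public class BookDocument
{
    public int Id { get; set; }
    public string Title { get; set; }
    public List<PersonSummary> Authors { get; set; }
    public List<PersonSummary> Editors { get; set; }
    public List<PersonSummary> Translators { get; set; }
}

// Deliberately has no AuthoredBooks collection: this is where the loop is broken.
public class PersonSummary
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```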

I lean towards option 1, even though it feels like it’ll be more work. Guess there’s only one way to find out…

Nested Elastic Explorations – Part 1

After my previous post on what to explore next I’ve dabbled with option 4 “Linux” (but got frustrated, so I set it aside) and worked some on option 2 “More Gulp!” (but that was mostly at work). However, I think I’ve settled for now on putting some effort into learning Elasticsearch and Nest. So let’s start a series of posts on this venture. (No promises though, this “Part 1” post might end up being the only part…)

About this venture…

Elasticsearch can be seen as a NoSQL database, more specifically as a document database. I’ll be using my Bieb pet project (or at least its domain) for testing its features. Bieb’s domain should be familiar to everyone: books!

Specifically I’m writing this post because I quickly ran into a fundamental challenge with document databases, and I hope to gather my thoughts on the matter by writing about it. To set the stage, I’ll be talking about this part of the domain:

  • A Book has multiple Authors (of type Person).
  • A Person has authored multiple Books.

The classic many-to-many relationship example. The challenge then obviously is how to express this in the various tools.

Setting the coding stage…

Before I dive into the Elasticsearch bit, let me first describe how I got this going in C#, SQL, and NHibernate. First, the SQL bit:

Run-of-the-mill stuff, with a many-to-many table. Obviously, we don’t want to see BookAuthor show up as a class in our code. That is, we want this kind of C#:
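
The original classes aren't reproduced in this post, but a minimal sketch of that kind of domain (class and property names are assumptions) could look like this:

```csharp
using System.Collections.Generic;

public class Person
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }

    // The "inverse" side of the many-to-many relationship.
    public virtual ISet<Book> AuthoredBooks { get; set; }
}

public class Book
{
    public virtual int Id { get; set; }
    public virtual string Title { get; set; }

    // Book is the "boss" of the relationship with its authors.
    public virtual ISet<Person> Authors { get; set; }
}
```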

As you can see, the Book is the “main” entity and the “boss” of the relationship with authors. Apart from the virtual keyword everywhere, little to no SQL or NHibernate know-how leaked into this domain.

The NHibernate mapping looks like this:

The important part, and the link to Elasticsearch, is the inverse="true" bit. With NHibernate, you have to indicate which entity is “in charge” of updating the many-to-many table. With document databases you have to make a similar decision.

Moving to Nest & Elasticsearch…

Diving head first into throwing these entities into Elasticsearch, I tried to run the following Nest integration test:
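
The original test isn't shown here, but it was roughly something like the following sketch (the ConnectionSettings constructor and Index call follow the NEST 1.x-style API; the index name and data are made up, and the Book/Person types are those from the earlier sketch):

```csharp
using System;
using System.Collections.Generic;
using Nest;
using NUnit.Framework;

[TestFixture]
public class IndexingTests
{
    [Test]
    public void Can_index_a_book_with_authors()
    {
        var client = new ElasticClient(
            new ConnectionSettings(new Uri("http://localhost:9200"), "bieb"));

        // Build the many-to-many loop: Book -> Person -> Book.
        var book = new Book { Title = "The Greatest Book", Authors = new HashSet<Person>() };
        var author = new Person { Name = "Richard Roe", AuthoredBooks = new HashSet<Book>() };
        book.Authors.Add(author);
        author.AuthoredBooks.Add(book);

        var response = client.Index(book);

        Assert.That(response.IsValid);
    }
}
```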

It’ll fail, because Nest internally uses JSON.Net to serialize the Book, which means you’ll get this:

Newtonsoft.Json.JsonSerializationException : Self referencing loop detected with type ‘Bieb.Domain.Entities.Book’. Path ‘allAuthors[0].authoredBooks’.

A problem that was to be expected.

With SQL, self-referencing loops are just fine: they’re handled by the many-to-many table. In NHibernate it’s a minor nuisance, simply handled by the inverse="true" bit. However, with document databases this is somewhat less trivial, in my opinion. In fact, it may be the hardest part of using them: how do you design your documents?

What I’d like to do…

The solution obviously is to break the self-referencing loop somewhere. Simply not serializing a Person’s authored books should do the trick. (On a side note, it occurs to me that graph databases would not need this trick, and would be great for a scenario with endless Book-Person-Book-… traversal, but that’s for another time.)

However, I don’t want to hard-code the loop-break into my domain object, e.g. by marking AuthoredBooks as not-serializable. The first and foremost reason is that this would mean my storage layer leaks into my domain layer. The second reason is that I may want to break the loop at different places depending on the Elasticsearch index I’m targeting. That is, I’ll probably have an index for Authors as well as Books, and want to break the loop in different places on either occasion.

Questions…

So some questions remain. How could we do this with JSON.Net without altering the domain layer? Or is there a way to do this with Elasticsearch + Nest? Or should we bite the bullet and have a separate set of DTOs to/from our domain entities to represent how they’re persisted in Elasticsearch? Or do we need a different approach altogether?

Good questions. Time to go and find out!

1995: It was a very good year…

Apparently, 1995 was a very good year. But wait, I get ahead of myself!

Recently I had a conversation with a friend about CMSes. We were confused, disagreeing on whether “Django” was a CMS or a web application framework. Neither of us was entirely sure, both of us being developers in the Microsoft stack.

Looking for (meta-)comparisons and coming up with (sometimes far-fetched or even disrespectful) parallels is a “secret” hobby of mine. So why not attempt to insult as many technology zealots in one go as I can by comparing popular technologies? At the very least I’d find out if Django is a CMS or a web framework.

Turns out it is both.

Of course, finding parallels is hard and at times nearly impossible. Platforms and technologies differ and overlap at times, making a tabular comparison quite a challenge. Here’s my (first) attempt at a comparison:

C# (and VB.NET)
  Current: 5.0, Originated: 2000, Microsoft
  • Web app framework(s): ASP.NET: WebForms & MVC
  • Example CMS(es): DotNetNuke, Orchard
  • Package manager(s): NuGet
  • IDEs: Visual Studio, MonoDevelop, SharpDevelop

Java
  Current: 8, Originated: 1995, Oracle
  • Web app framework(s): Spring, JSF, Struts, Google Web Toolkit, etc.
  • Example CMS(es): Liferay, Hippo, Open CMS, Pulse
  • Package manager(s): Maven
  • IDEs: Eclipse, IntelliJ, NetBeans

PHP
  Current: 5.6.0, Originated: 1995, The PHP Group
  • Web app framework(s): Zend 2, CakePHP, CodeIgniter, Symfony
  • Example CMS(es): WordPress, Drupal, Joomla!
  • Package manager(s): Composer
  • IDEs: PhpStorm, PHPEclipse, Zend Studio, NetBeans, Sublime Text

Ruby
  Current: 2.1.2, Originated: 1995
  • Web app framework(s): Ruby on Rails
  • Example CMS(es): Radiant, Refinery, BrowserCMS
  • Package manager(s): RubyGems
  • IDEs: n/a

Python
  Current: 3.4.1, Originated: 1991, Python Software Foundation
  • Web app framework(s): Django
  • Example CMS(es): Django-CMS, Plone
  • Package manager(s): Pip, PyPM
  • IDEs: n/a

JavaScript (Node)
  Current: 5, Originated: 1995, Joyent
  • Web app framework(s): Node.js
  • Example CMS(es): Ghost, KeystoneJS, Hatch.js, Calipso
  • Package manager(s): npm
  • IDEs: n/a

Perl 5
  Current: 5, Originated: 1987, The Perl Foundation
  • Web app framework(s): Catalyst, Dancer, Mojolicious
  • Example CMS(es): Bricolage
  • Package manager(s): CPAN, PPM
  • IDEs: n/a

There were two main takeaways from this comparison for me. First, apparently “Django” is a Python web application framework, and “Django-CMS” is, well: a CMS. Second, 1995 was a very good year: Wikipedia lists 4 out of 7 languages as originating in 1995.

Guess it was a very good year.