Finishing Bieb

I’m calling it. Bieb is now officially wrapped up.

I’ve accomplished the bare minimum I set out to do with Bieb, and learned a lot of extra things along the way. But motivation to continue any further has dropped to near zero. And I’m okay with that.

Fair warning: this is a long post. Consider the above to be the TLDR version, and whatever’s below as a peek into my personal notes and thoughts. Read on at your own risk!

Still here? Alright, here goes.

Conclusion

So let’s start with a preliminary conclusion that doubles as a table of contents:

  • Bieb was a fun project. Now it’s “wrapped up” / abandoned.
  • I disliked working with Azure for a hobby project. Great tech, but that learning curve…
  • Proper errors in ASP.NET MVC are hard.
  • ASP.NET MVC is a nice framework in general though.
  • Database integration tests for NHibernate are useful and easy to set up.
  • Bieb is a successful TDD experiment.
  • Razor is great for static content. You need nearly zero JavaScript for views.
  • Designing (the domain logic for) an entire application is fun!

You want more details you say!? You shall have them! Moving on…

Elevator Pitch

First up, the elevator pitch. There actually was something of the kind (though not in typical pitch-format) on the project home page:

Website project based on ASP.NET MVC for managing and displaying your personal book collection on the web.

At the bottom of the page, the (equally important) secondary objectives are mentioned:

  • Finding out more about Codeplex
  • Experimentation sandbox for MSSQL, ASP.NET MVC, NHibernate, Ninject, html5, css3, jQuery, and Modernizr
  • Experimentation sandbox for trying out competing and additional frameworks
  • Having this website project available for anyone who’s interested
  • Real-life events such as BBQ’s and drinks to “discuss the project”

And that was actually pretty close to what it turned out to be.

Note that Bieb is not solving a new problem, nor is it solving an existing problem in new ways. There’s GoodReads, which is a great site that does 90% of what Bieb does and more. Bieb was not meant for us to get rich with a “next big thing”; it was meant as a fun project to toy with technology.

The Negatives

Before I get to the positive things about this project, first some negatives.

Azure is no friend of mine

Bieb had to be a replacement for a version of this app I’d hacked together in PHP. With PHP, however, it’s very easy and cheap to get some hosting, and very easy to set up a deployment. With .NET, there is no such luxury. Private / shared hosting of .NET sites is expensive. The best option seemed to be Azure (as I had an MSDN subscription that comes with some Azure credits), but Azure has a very steep learning curve compared to getting a PHP site up and running.

Put another way: Azure feels solid, but it also feels big and “enterprisey”. There are just so many buttons and settings and “thingies” that it feels impossible to get started. Watching tutorials doesn’t help much, because your choices are (a) a very specific 20-minute tutorial that doesn’t cover all your needs or (b) a 6-hour Pluralsight course that still only covers 80% of the way. Add to that the fact that Azure is still under heavy development (which is mostly a good thing, but doesn’t make the learning curve any better), and you’re set up for frustration.

One particular anecdote worth mentioning is as follows:

  1. I tweeted that I found it confusing that one Azure portal redirects me to yet another Azure portal.
  2. Azure support tweeted back, redirecting me to the MSDN forums.
  3. In the forum thread, I got redirected to yet another forum where I should post my findings.

I think I would now like to redirect Azure to a place where the sun don’t shine.

Don’t get me wrong though: I love .NET development and Windows hosting in general. Both Azure and hosting it yourself are fine options for business-type projects. However, I’m deeply disappointed in using Azure and .NET for web hobby projects. It’s probably the main reason I’m wrapping up Bieb, and the main reason I’ll be focusing on other tech for hobby projects in the near future.

Great error pages with MVC are hard

A very specific “negative”, but one that annoyed me to no end. I think I’ve tried to attack this problem 5 times over, and failed every single time.

This is what I wanted to accomplish:

  • No YSODs. Ever. Making sure that if your app fails it does so elegantly is very important IMHO.
  • Proper HTTP status codes. That means 404s for non-existent static resources, MVC routes that can’t be resolved, but also “semantic” 404s for when a domain item (e.g. a “Book” or an “Author”) was not found.
  • A nice error page. That means a proper MVC page when possible, one that gives options (like a Search partial view) and relevant info. Failing that, a nice, styled static html page.
  • Proper error logging. The logging and error handling code should be unobtrusive to the business logic. If a request fails (i.e. a 500 error) it should be caught, logged, and the user should be directed to a meaningful, useful page with proper content but (again) also a proper HTTP status code.
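For the logging-plus-500 part specifically, what I was after is roughly an exception filter. Below is a minimal sketch of that idea, not Bieb’s actual code; the ILogger interface is a hypothetical stand-in for whatever logging abstraction you use. It only covers exceptions inside the MVC pipeline; the 404s for static files and unresolved routes additionally need customErrors/httpErrors configuration in web.config, which is exactly where things got hairy for me.

    using System;
    using System.Web.Mvc;

    // Hypothetical logging abstraction; Bieb's actual logging looked different.
    public interface ILogger
    {
        void Error(Exception exception);
    }

    // Logs unhandled exceptions and serves the Error view with a proper 500 status code.
    public class LoggingExceptionFilter : IExceptionFilter
    {
        private readonly ILogger logger;

        public LoggingExceptionFilter(ILogger logger)
        {
            this.logger = logger;
        }

        public void OnException(ExceptionContext filterContext)
        {
            if (filterContext.ExceptionHandled)
            {
                return;
            }

            logger.Error(filterContext.Exception);

            filterContext.Result = new ViewResult { ViewName = "Error" };
            filterContext.HttpContext.Response.StatusCode = 500;
            filterContext.HttpContext.Response.TrySkipIisCustomErrors = true;
            filterContext.ExceptionHandled = true;
        }
    }

Such a filter would be registered as a global filter, e.g. GlobalFilters.Filters.Add(new LoggingExceptionFilter(logger)) during application start.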

Whenever I tried to tackle these requirements I would fail, and clicking through the online resources on the subject I would inevitably end up at Ben Foster’s blog post about it. That excellent post notwithstanding, I have never gotten this to work on IIS or IIS Express.

And having failed to get this working locally, I dread the thought of having to get this to work on Azure…

The Positives

There were also many positive aspects about fiddling with Bieb. Here are the main ones.

NHibernate Database Integration Tests

I continued to work on a way to run database integration tests with a unit testing framework, based on an approach we took at my previous job. Here’s what the base test fixture looks like:
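Roughly along these lines; this is a minimal sketch rather than the literal Bieb fixture, assuming NUnit 2 and a hypothetical SessionFactoryFactory:

    using NHibernate;
    using NUnit.Framework;

    public abstract class DatabaseIntegrationTest
    {
        protected ISessionFactory SessionFactory { get; private set; }
        protected ISession Session { get; private set; }

        [TestFixtureSetUp] // OneTimeSetUp in NUnit 3
        public void CreateSessionFactory()
        {
            // Hypothetical factory that also recreates the database schema from scratch.
            SessionFactory = SessionFactoryFactory.CreateWithFreshSchema();
        }

        [SetUp]
        public void OpenSession()
        {
            Session = SessionFactory.OpenSession();
        }

        [TearDown]
        public void DisposeSession()
        {
            Session.Dispose();
        }
    }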

The Factory has some more bootstrapping code for recreating the database schema from scratch. All this setup allows you to write this kind of test:
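For example (again a sketch rather than the literal Bieb test, assuming a Book entity with an Id and a Title): persist an entity, evict it from the session cache, and verify it round-trips through the real database.

    [TestFixture]
    public class BookMappingTests : DatabaseIntegrationTest
    {
        [Test]
        public void Can_Save_And_Retrieve_Book()
        {
            var book = new Book { Title = "The Hobbit" };

            Session.Save(book);
            Session.Flush();
            Session.Evict(book);

            // Fetch a fresh copy from the database, bypassing the first-level cache.
            var retrieved = Session.Get<Book>(book.Id);

            Assert.That(retrieved.Title, Is.EqualTo("The Hobbit"));
        }
    }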

Which is a nice way to fix a database issue in a test-driven manner.

Currently my approach has one big flaw though: the tests are not isolated. Each test has to account for previous tests possibly having left data in the database. For the tests written so far that doesn’t matter, but it’s an accident waiting to happen nonetheless.

I see basically two solutions for this:

  1. Drop and recreate the database before each test. But that’s probably slow.
  2. Wrap each test in a transaction, and roll it back at the end. But that excludes the option of testing things that require actually committing transactions.

Truth be told, I wrote most database integration tests because I was unsure of how NHibernate would function and/or handle my mappings. And for that (i.e. learning NHibernate) the current setup worked just fine. If I were to continue with Bieb there would most likely come a time when I’d go for option 2. I think Jimmy Bogard typically advocates a similar approach.
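A minimal sketch of what option 2 could look like, building on the hypothetical base fixture above:

    using NHibernate;
    using NUnit.Framework;

    public abstract class TransactionalDatabaseTest : DatabaseIntegrationTest
    {
        protected ITransaction Transaction { get; private set; }

        [SetUp]
        public void BeginTransaction()
        {
            // Runs after the base class SetUp has opened the Session.
            Transaction = Session.BeginTransaction();
        }

        [TearDown]
        public void RollBackTransaction()
        {
            // Runs before the base class TearDown disposes the Session,
            // so no test can leak data into the next one.
            Transaction.Rollback();
            Transaction.Dispose();
        }
    }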

PS. For what it’s worth, I’ve written a GitHub Gist with a minimal setup for creating NUnit + NHibernate tests, specifically geared to creating minimal repros to share with colleagues or on Stack Overflow.

ASP.NET MVC is a very nice framework

I quite like the kind of code you have to write to get pages to the user with MVC. Here’s an example of a Controller action in Bieb:
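Roughly along these lines; the IRepository and IMapper abstractions below are stand-ins for the actual Bieb interfaces, so treat this as a sketch rather than the literal code:

    using System.Web.Mvc;

    public class BooksController : Controller
    {
        private readonly IRepository<Book> repository;
        private readonly IMapper mapper;

        public BooksController(IRepository<Book> repository, IMapper mapper)
        {
            this.repository = repository;
            this.mapper = mapper;
        }

        public ActionResult Details(int id)
        {
            var book = repository.GetById(id);

            if (book == null)
            {
                // The "semantic" 404 case: the route resolves, but the Book does not exist.
                return HttpNotFound();
            }

            return View(mapper.ModelToViewModel(book));
        }
    }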

It’s short and to the point as it delegates the other responsibilities to (injected) dependencies like the Mapper and Repository. In addition, Razor views are also pretty easy to write. At the least, they are a breath of fresh air after WebForms.

So, if you want to create web sites with a big C# component, ASP.NET MVC is a fine choice. However, I do wonder whether a more SPA-like approach (Web API or similar plus an MV* client-side library) or a full-stack JavaScript solution would be a better choice for new web development projects…

Having said that, I’m very pleased with how Bieb was a great way to learn ASP.NET MVC.

Test-Driven Development

Bieb Code Distribution

I went to great lengths to stick to TDD, even when things got ugly. In particular, I’ve had to learn how to deal with several awful dependencies:

  • Random
  • HttpContext
  • HtmlHelper (and all the things it drags along)

To show the lengths I went to, here’s a summary of the Visual Studio code analysis LoC metrics:

  • 49% (1,564 lines of code) for unit tests
  • 8% (253 lines of code) for database tests
  • 43% (1,371 lines of code) for all the rest

Even though Lines of Code is hardly ever a good metric (it would be easy to inflate these numbers one way or the other), in this case it does reflect the actual state of the code base. Note that this does not include view code (obviously), nor does it include client-side code (there’s hardly any; more on that later).

Looking back at the code, I’m also pretty pleased with how the tests turned out. There are very simple, early tests like this one:
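For instance (a sketch rather than the literal Bieb test, assuming the Book entity initializes its Authors collection to empty):

    using NUnit.Framework;

    [TestFixture]
    public class BookTests
    {
        [Test]
        public void New_Book_Has_No_Authors()
        {
            var book = new Book();

            Assert.That(book.Authors, Is.Empty);
        }
    }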

As well as tests that border on not being unit tests anymore, but that IMHO serve a great purpose:
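For example, a convention test that reflects over all controllers; the sketch below assumes the controllers live in the same assembly as BooksController, and checks that every [HttpPost] action also validates the anti-forgery token:

    using System.Linq;
    using System.Web.Mvc;
    using NUnit.Framework;

    [TestFixture]
    public class ControllerConventionTests
    {
        [Test]
        public void All_Post_Actions_Validate_The_AntiForgeryToken()
        {
            // Find every action method marked [HttpPost] in the web assembly.
            var postActions = typeof(BooksController).Assembly
                .GetTypes()
                .Where(type => typeof(Controller).IsAssignableFrom(type))
                .SelectMany(type => type.GetMethods())
                .Where(method => method.GetCustomAttributes(typeof(HttpPostAttribute), false).Any());

            foreach (var action in postActions)
            {
                var validatesToken = action
                    .GetCustomAttributes(typeof(ValidateAntiForgeryTokenAttribute), false)
                    .Any();

                Assert.That(validatesToken, Is.True, "Missing [ValidateAntiForgeryToken] on " + action.Name);
            }
        }
    }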

These check that you haven’t forgotten any attributes like the following:
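That is, attribute pairs such as these, shown here on a hypothetical POST action rather than an actual Bieb one:

    using System.Web.Mvc;

    public class BookAdminController : Controller
    {
        [HttpPost]
        [ValidateAntiForgeryToken]
        public ActionResult Edit(int id, FormCollection form)
        {
            // ... persist the changes, then redirect.
            return RedirectToAction("Details", new { id });
        }
    }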

To me, that is worth the fact that it’s not a “clean” AAA-style unit test.

Where’s the client side code!?

But wait a second: where is the JavaScript? The answer: there isn’t (nearly) any! All the custom JavaScript for Bieb is this inline bit of code:

This may seem silly in 2015, and truth be told: it is!

Bieb was a vehicle for me to learn mostly server-side tech. Most of my hobby projects, as well as the larger part of my day job, include client-side programming. I’m confident I could improve Bieb (a lot) with rich client interaction, but I haven’t found the need to do so (yet). I was aiming to get a 1.0 release out with mostly MVC code, and work on improved client-side components after that.

One important thing to note: I was very pleased with how far you could get with just Razor in creating html. I was also pleased with how much fun it was to generate static content, which Bieb is mostly about.

The admin views of Bieb suffered most from the lack of JavaScript. The UX is bad, at times even terrible. However, with me wrapping up this project that will likely never change.

How are things wrapped up?

So, how are things wrapped up? Well:

  • You can check out bieb.azurewebsites.net, at least as long as my free Azure credits last.
  • The CodePlex Project will go into hibernation, but will remain available for as long as the wise folks at Microsoft keep it up and running.
  • Some screenshots can be found at the bottom of this post.

And that’s that. Which brings me to say something…

In conclusion

Bieb was a great learning experience. There were things (apart from aforementioned struggle with Azure) I did not really enjoy coding, including:

  • Setting up Logging. A real website needs this, but setting it up is a chore. Setting it up so that it’s unobtrusive can even make it a tricky chore.
  • Setting up a Dependency Injection framework. DI itself can be a great help (I find that it kept my code clean and testable by default), but choosing and learning a specific DI container didn’t feel particularly interesting. A sketch of the kind of wiring involved follows below this list.
  • A proper Unit of Work pattern. It’s necessary, but certainly not my favorite bit of coding. And it probably shows in my codebase, too.
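To give an idea of what that DI setup amounts to, here is a minimal sketch of a Ninject module along these lines; the exact bindings are assumptions, not Bieb’s actual module:

    using NHibernate;
    using NHibernate.Cfg;
    using Ninject;
    using Ninject.Modules;
    using Ninject.Web.Common;

    public class BiebBindingsModule : NinjectModule
    {
        public override void Load()
        {
            // One session factory for the whole application...
            Bind<ISessionFactory>()
                .ToMethod(ctx => new Configuration().Configure().BuildSessionFactory())
                .InSingletonScope();

            // ...and one NHibernate session per web request.
            Bind<ISession>()
                .ToMethod(ctx => ctx.Kernel.Get<ISessionFactory>().OpenSession())
                .InRequestScope();
        }
    }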

But this is vastly outweighed by the things I did enjoy coding:

  • Razor Views were fun to write.
  • Designing the Domain and its logic was fun, even though unfortunately the underlying persistence layer sometimes leaks through.
  • Project and Solution structure: fun things to think about, even though Bieb is just a small application.
  • Routing: it feels good to have nice, pretty URLs.
  • Controllers: because I did spend some time on the things I did not like, the controllers did end up looking pretty good.
  • Database structure, expressed through NHibernate mappings that are actually used to generate the database from code.
  • Design. Truth be told, the design is at most 49% mine, but that doesn’t make me any less proud of the end result.

Bieb was a fun project to do. I learned a lot. And even though I had many more plans, and even the start of a big backlog, it’s time to move on.

Farewell Bieb, slumber in peace!


Bieb - Home - LG View

Bieb - Book Index - LG View

Bieb - Book Details - LG View

Bieb - Book Details - XS View

Bieb - Book Edit - LG View

Bieb - Person Index - LG View

Bieb - Person Details - LG View

Bieb - Person Details - XS View

Bieb - Series Index - LG View

Bieb - Series Details - LG View

Bieb - Publisher Index - LG View

Bieb - Publisher Details - LG View

Bieb - Search - LG View

Finishing Things – Intermezzo part 2 of 2

So, like I wrote, Google is “bidding farewell to Google Code”. So my recently “wrapped up” projects needed some post-wrapping wrap-up (you still with me?!). I’m using this as a good excuse to partially move from Mercurial to Git, because I suspect Google Code’s export to GitHub is the migration path they’ve put the most effort into.

So, here’s my first (active) day at GitHub:

GitHub Contributions

I’m a little afraid that -given GitHub’s popularity- many will see this and consider it representative of my overall activity on code hosting services (disregarding that commits to those repos were done over a larger time span, and disregarding activity on CodePlex). However -given GitHub’s popularity- I guess my GitHub activity will increase anyway over the coming years.

As a related side note: moving projects from Google Code to GitHub was extremely easy. The web-based tool required nothing besides a link to my Google Code project, and me authorizing Google Code in GitHub through OAuth. The only downsides I personally found during this automated conversion:

  1. My old commits in the migrated GitHub repos don’t get attributed to my GitHub account, because the user details were different. Also, apparently TortoiseHg “helped” me commit unwittingly as “jeroenheijmans <jeroenheijmans@localhost>”. I guess if I’d paid attention I (sh/w)ould’ve at least used a valid e-mail address. (Yes, given that probably no one has cloned my repos (yet) I could probably rewrite history to change this, but it’s not that big a deal I guess…)
  2. My repository includes some Mercurial-specific files, e.g. a .hgignore file. I might’ve stripped those if I had done the Hg-Git conversion manually.
  3. The project home page from Google Code was not automatically added as a README.md file in the GitHub repo. I think that would’ve been nice. Then again, this omission gave me a chance to check out Git(Hub).

And so my move to Git(Hub) begins. I’ve already enjoyed reading through other people’s thoughts on (Git) commit messages, had my first dealings with Atlassian SourceTree, as well as a combination of the two. You’d almost wonder how I got the actual migration done at all.

I feel excited about getting to know more about SourceTree, Git, and GitHub. However, I’m also still very determined to finish Bieb, which will remain on CodePlex. We’ll see how that plays out in the coming weeks / months. Stay tuned.

Finishing Things – Intermezzo part 1 of 2

In the beginning of 2015 I declared I was wrapping up my open projects. I’ve since quickly wrapped up BattleTop, TimeLine, and DotaGrid. After that I’ve started to work on my somewhat bigger project: Bieb. I haven’t had as much time to put into that project as I wanted, but there has been a thin stream of commits.

However, all this has been brutally interrupted by Google: they’re “bidding farewell to Google Code”.

Google Code was very barebones, but in a sense that was its charm. I like BitBucket and CodePlex too, but for small projects Google Code was just fine, especially now that I’ve shut those projects down. However, now I need to make a choice: do I put effort into moving my discontinued projects? And if so, do I move them to another Hg provider, or do I bite the bullet and convert them to Git?

Guess I’ll bite the bullet. Let me do so right now, and get back in a fresh post with the results…

Finishing DotaGrid

Let’s start off similarly to Finishing BattleTop, with the elevator pitch that I would have created for DotaGrid at the start:

I’m hacking a web app together to quickly create a well-aligned grid of heroes for the picking-stage at the start of a Dota 2 match, so I have something to use while dota2layout.com is down.

There’s a few important things to note here:

  • “hacking”: Yes, this is throwaway code, even though it’s open source. Heck, the main code files are called myMonolithicApp.js and tempHeroesJson.js.
  • “well-aligned”: I prefer my hero layout to be aligned to a grid with some very minor spacing between items. I.e. there’s no option to turn off “snap to grid”.
  • “Dota 2”: Yes, this entire pitch and the tool itself assume you know and play Dota 2. Or that you know me personally and are willing to struggle through the Dota 2 specifics of this post.
  • “so I have something”: The key word being “I”. Even though I’ve shared the app and its source, it was mainly something I created to satisfy my own need.

That last point is really rather important here, and it greatly affects the way I want to “wrap up” this project. But before I talk about that, let’s first look at something that seems to be in big contrast with this point.

Reddit

Even though my first ever Reddit post was well-received, Reddit has always felt like a moderately hostile environment. Depending on the particular subreddit, folks can be rather harsh and direct, and not always in a well-founded fashion. But there was only one way to test that assumption and face my public-reddit-humiliation fear: by posting something. I felt my DotaGrid tool was decent enough to be of some use to others as well, so I decided to post it on Reddit. Here’s a screenshot of my post:

DotaGrid Reddit post

This got me some decent votes and upvote percentage…

DotaGrid post Reddit vote count

…some friendly remarks…

DotaGrid friendly Reddit comment

…some suggestions…

DotaGrid add techies comment

…and some low-on-details bug reports…

DotaGrid bug report without info

…but in any case, overall comments felt friendly. So even though I wrote the tool for my personal use, deciding to share it seems like a good idea in hindsight.

The Code

Looking at the code now, a few months after I wrote the tool, I must say I’m not too disappointed. Sure, it’s “hacked together”, and according to Feathers’ definition it’s legacy code, as it doesn’t contain unit tests. However, the code’s structured well enough to still add tests and work from there. For example, have a look at this typical view model property:

Not quite great code, especially that final (rather frail) line of code, but at least it’s concise, clear, and potentially unit testable.

If I were to rewrite the tool I do think I’d need to separate things a bit more. In particular, the internal grid model (which hero sits where) and the rendering bit should be decoupled. This would also be necessary if I were to create a more responsive version that possibly doesn’t use a table for laying out the grid (but either a canvas or a div instead).

Then again, I’m not rewriting the tool. Instead, I think I’m going to wrap it up by:

  • Putting a disclaimer on the Google Code project, the app itself, and the Reddit post;
  • Annotating a few bits of the code, should I or anyone else care to continue the thing;
  • Keeping it up to date with new heroes as they come out, but only for as long as I feel like it (which highly depends on the amount of Dota I’ll play).

That last point does remind me to make one confession: I did not anticipate too well what would be needed to add heroes to the app. I feel rather dumb about that, since my tool is a direct response to the fact that dota2layout.com broke precisely for that reason. But oh well, you can’t win ’em all.

In conclusion

So in any case, that’s how I’m going to wrap it up, effectively “closing down” this project. Which clears the way for me to reboot and start finishing my final “open” project…

Finishing TimeLine

Oh my, that code… where to begin!?

Before you read on, perhaps you should have a 30 sec look at what I’m talking about. I’ll wait here, go ahead!

Current thoughts about this project

TimeLine’s at prototype status, but hopefully it’s clear what the app was meant for: visualizing experience over time, using grouped, parallel timelines.

I look back at this mini-project, and immediately notice a few things. First and foremost: I still like the idea of such an app! Visualizing work experience is great for a résumé I think; it allows the reader to instantly get a “feel” for what you’ve been working on. Much better than dreary text.

The second thing I realize is that I clearly had no idea and/or time to quickly scaffold an editor (yes, I resorted to using a textarea with JSON; so sue me!). Today I’d use KnockoutJS for this, even in a mockup. Or, barring KO, I’d use some other plugin for it. But then again: “first make it work, then make it work better”, so I’m happy I chose something and moved on.

The final thing I realized: it’s a decent prototype, and to get there I sacrificed a lot of code quality. With 140 SLOC it’s not much code, but what’s there is rather bad.

Finishing things

This leaves me a bit in a tight spot. What am I to do with this project?

First I thought about deleting the thing entirely, praying no one would ever find out about the JavaScrippaghetti I had unleashed on the world. Then again, that would be silly: everyone needs to write a fair share of bad code to get better at writing code.

My second thought was to rewrite the thing from scratch: right here, right now. Nearly started working on it too. Guess I wasn’t lying when I said I still like the idea. Then again, that would be silly: I still have two other projects left to “wrap up”, one of which is much more deserving of my time.

My third and final thought was to wrap up by doing a loose “code review” of my own code. In addition it feels good to “stamp” it with a disclaimer and message that it’s “wrapped up”. And that actually seems like the best idea.

Code review

The html is not really worth going through. It’s a plain html-reset-based file that uses semantic markup (header, nav, article, etc), and has a placeholder div where Raphaël will draw the timelines. The html contains a link to timeline.js, and some basic bootstrapping code to call timeline jQuery-plugin-style on the placeholder div.

Let’s dive into that one code file. It starts off with some good signs:

Apparently the code’s being linted with a more sane version of Crockford’s jslint, and the code declares ES5 “strict mode”. Guess the author forgot to run the linter recently, because currently it still seems to raise a few errors. Anyways, moving on we see:

Not exactly globals, but a definite hint the author’s struggling with JavaScript closures. Except for paperPadding perhaps, which hints at a feature for settings that’s just not implemented yet.

Moving on, the code follows a typical jQuery plugin template, first with all the plugin’s methods defined within a separate closure, then at the end of the file exported in typical fashion:

What you can already see here though is that the plugin relies on one single god method: init. This method in itself isn’t all that interesting. It just loops through the grouped timelines, and renders them one by one. To find some things that are of note I went through a mix of Dutch and English comments (not a good thing for something intended to be public). Here’s one that’s particularly interesting:

Phew, at least the author realized the code could not go on like this for much longer. This is further clarified by the presence of this kind of code (abbreviated):

Yikes! The break statement is IMHO often a code smell (it smells of goto); surely there’s an OO approach to the problem being solved here.

Let’s finish on a positive note though, having a look at some of the variable names in use:

  • categoryHighway
  • lane
  • segment

This makes me smile. Having the UI of the app in mind I instantly know what these mean. A segment is one piece of timeline, a lane is a horizontal area where segments can be placed, and a category highway is a set of lanes for segments from one category. Close-reading the code I find this is in fact mostly correct. And with that I feel there’s a great OO (or more prototypical) way to reboot the code for this app.

In Conclusion

In conclusion, or perhaps more appropriately “in summary”, I can say this is a fine and above-all inspiring prototype. The code’s short and bad, but not even that bad for a quick prototype.

With a proper disclaimer and “wrap up” message I feel I can safely leave TimeLine be for now. Perhaps one day I’ll revisit it.

Finishing BattleTop

Okay, the title is about as uninspired as topping a vanilla ice cream dessert with chocolate sauce. But it’s to the point! Also, with slight OCD, it’s nice that my archives will show nicely aligned “Finishing …” titles. This is important for many reasons!

BattleTop Thumbnail

Anyhow, as I announced recently, I had already wrapped up one project: BattleTop. Have a peek at the Source, or check out the Live Version.

Elevator Pitch

What would’ve been the Elevator Pitch if I had made one when I started? Something like this:

I’m developing an app for keeping combat initiative so tabletop RPG groups can skip the tedious parts and share the initiative state easily, using (mobile) devices that are present in sessions anyways.

Writing elevator pitches is far from my specialty; perhaps I should’ve said something about “the competition” too: I actually looked for existing apps, and tried a few from the Android Play Store. Only a few existed at the time, and they were very unsatisfactory.

What’s it built with?

I’ll be honest: from the subset of suitable technologies, I didn’t choose based on “best for the job”. Instead, I chose two technologies that I wanted to learn more about, and stayed in my comfort zone when choosing the rest.

The initial prototype was built with:

  • HTML5, because at the time I loved tinkering with the new semantic tags. Also, I intended to tinker with LocalStorage for saving state between (possibly accidental) page refreshes.
  • Custom CSS, because it’s “good enough” for a prototype, and switching to SASS or LESS later on is easy.
  • jQuery, because I thought I wanted to learn how to write jQuery plugins.

Only two days / 11 commits after I had started did I branch off to rewrite things MVVM style with KnockoutJS. I’ve not looked back since. It also paved the way for easy unit testing, allowing me to experiment with QUnit.

How’s it wrapped up?

The frequency with which our group did RPG sessions had dropped dramatically. This removes the need to have a tool, the motivation to keep developing it, and the ability to test it. And this is okay with me. I learned some cool new things, and heck: I even got it to a functional beta version.

To wrap it up I just reviewed all of my code. Unit tests had already been done in August, and even though the code isn’t the fanciest ever, it didn’t have any obvious loose ends or idiotic bugs. So after some minor changes I just added a “discontinued” warning to the code, the live version, and the project site, and that’s that: closure!

Any future for BattleTop?

Not very likely. I think I prefer to focus effort on one certain other project, and after that probably even start new projects before picking up BattleTop again.

But like I said: I’m okay with that.

Software Licences

Software licenses are confusing. I thought making a summary would require some useful research and would be a great way to learn more! Also, I was inspired by this WTFPL comic about licenses.

So I sat down and started my research.

Then I broke down and cried.

Why is this shit so hard? The opensource.org site is decent, and even dares annotate pieces. However, for a summary of the popular licenses you’d have to construct something yourself from the bare license texts.

Anyways, I stopped crying. Grabbed both a beer and a hot cup of tea, and went in for round two. Surely Wikipedia has a good starting point? A zoomed out screenshot of the Comparison of free and open-source software licenses looks like this:

Open Source Comparison

So I started crying again.

Why is this shit so hard? I guess it’s in part tiredness and beer (or even the tea? :O) talking, as the comparison table is actually quite informative and clean (after blinking once or twice). However, this would not help me summarize the licenses that I would consider using for a project.

In any case, I stopped crying. Poured a glass of whiskey after downing a cup of coffee. Round three.

Time to remember why I was doing this in the first place. Or perhaps not why, but more what triggered this post in the first place (besides the WTFPL comic).

The reason’s actually simple: it was the Stack Overflow license. Specifically this meta question. User “Stefan” asks how to get permission to use a function from a SO post in GPL-licensed software. I had never thought about that: people may have to worry about all sorts of things when using entire snippets from SO posts. And heck: I am also one of those “people”…

Interestingly, the solution from a code-providing point of view is simple. Posting on Stack Overflow means you release your code under “cc-by-sa 3.0 with attribution required”. However, you can license it in other ways too if you so desire (and have the (copy)right to do so). As user Pëkka puts it in this post:

Stack Overflow Copyright flowchart

So I broke down in tears again.

Why was this shit so easy for Pëkka to describe? How can I ever hope to create a text or picture about licenses equally eloquent and explanatory?

I think I can’t. So it’s probably best if I don’t write a post about licences.

CSS syntax naming conventions – REDUX

I’ve blogged about CSS naming conventions before. The Stack Exchange question I referred to then has since been closed (for understandable reasons). However, it recently also started gathering “delete” votes. Given that I don’t have enough reputation to see deleted posts on Programmers.SE, I intend to salvage up front whatever info was in that post and its answers here.

So, here’s the redux version of my post, along with the answers. If anything, this’ll be a good exercise in following the cc-by-sa license from Stack Overflow.


Question: what are the practical considerations for the syntax in class and id values?

Note that I’m not asking about the semantics, i.e. the actual words that are being used. There are a lot of resources on that side of naming conventions already, in fact obscuring my search for practical information on the various syntactical bits: casing, use of punctuation (specifically the - dash), specific characters to use or avoid, etc.

To sum up the reasons I’m asking this question:

  • The naming restrictions on id and class don’t naturally lead to any conventions
  • The abundance of resources on the semantic side of naming conventions obscure searches on the syntactic considerations
  • I couldn’t find any authoritative source on this
  • There wasn’t any question on SE Programmers yet on this topic :)

Some of the conventions I’ve considered using:

  1. UpperCamelCase, mainly as a cross-over habit from server side coding
  2. lowerCamelCase, for consistency with JavaScript naming conventions
  3. css-style-classes, which is consistent with the naming of css properties (but can be annoying for Ctrl+Shift+ArrowKey selection of text)
  4. with_under_scores, which I personally haven’t seen used much
  5. alllowercase, simple to remember but can be hard to read for longer names
  6. UPPERCASEFTW, as a great way to annoy your fellow programmers (perhaps combined with option 4 for readability)

And probably I’ve left out some important options or combinations as well. So: what considerations are there for naming conventions, and to which convention do they lead?


Amos M. Carpenter answered:

Bounty or not, to some extent the choice will always be a “matter of preference” – after all, how would you feel if the W3C recommended (or even imposed) a certain convention that you didn’t feel was right?

Having said that, though, I personally prefer the lowerCamelCase convention, and I’ll give the reasons and practical considerations I’ve used to make up my mind – I’ll do so by a process of elimination, using the numbering from your question:

(5.) justnoteasilyreadablebecauseyoudontknowwherewordsstartandend.

(6.) ASABOVEPLUSITSANNOYINGLIKESOMEONESHOUTING.

(4.) historical_incompatibility_plus_see: Mozilla Dev Documentation.

(3.) a-bit-trickier-to-explain… as you mention, selectability in text editors is one issue (as with underscores, depending on the editor), but for me it’s also the fact that it reminds me of the syntax reserved for vendor-specific keywords, even if those start with a hyphen as well as having words separated by them.

So this leaves your (1.) and (2.), UpperCamelCase and lowerCamelCase, respectively. Despite the mental link to Java classes (which are, by a more clearly defined convention, UpperCamelCase), CSS class names seem, to me, to be better off starting with a lowercase letter. Perhaps that is because of XHTML element and attribute names, but I guess you could also make the case that having CSS classes use UpperCamelCase would help to set them apart. If you need another reason, lowerCamelCase is what the W3C uses in examples for good class names (though the URL itself, annoyingly, disagrees with me).

I would advise against (4.), (5.) and (6.), for the reasons stated above, but suppose that arguments could be made for either of the other three.

Whether or not you (or anyone else for that matter) agree with me on this matter is up to you though. The fact that you haven’t got a definite answer quoting authoritative sources by now can be taken as a hint that there isn’t such a thing as a definite standard on this issue (else we’d all be using it). I’m not sure that’s necessarily a bad thing.


Emanuil Rusev answered:

Words in CSS class names should be separated with dashes (class-name), as that’s how words in CSS properties and pseudo-classes are separated and their syntax is defined by the CSS specs.

Words in ID names also should be separated with dashes, to match the syntactic style of class names and because ID names are often used in URLs and the dash is the original and most common word separator in URLs.


tdammers answered:

It’s mostly a matter of preference; there is no established standard, let alone an authoritative source, on the matter. Use whatever you feel most comfortable with; just be consistent.

Personally, I use css-style-with-dashes, but I try to avoid multi-word class names and use multiple classes wherever possible (so button important default rather than button-important-default). From my experience, this also seems to be the most popular choice among high-quality web sites and frameworks.

Lowercase with dashes is also easier to type than the other options (excluding the hard-to-read nowordseparatorswhatsoever convention), at least on US keyboards, because it doesn’t require using the Shift key.

For id’s, there is the additional practical consideration that if you want to reference elements by their ID directly in javascript (e.g. document.forms[0].btn_ok), dashes won’t work so well – but then, if you’re using jQuery, you’re probably going to use them through $() anyway, so then you can just have $('#btn-ok'), which makes this point mostly moot.

For the record, another convention I come across regularly uses Hungarian warts in ID’s to indicate the element type, especially for form controls – so you’d have #lblUsername, #tbUsername, #valUsername for the username label, input, and validator.


asfallows answered:

I strongly believe the thing that matters most is consistency.

There are two ways to look at this:

  1. A good argument can be made for alllowercase or css-style-clauses (probably the better choice) because they will be the most consistent with the code they’ll be in. It will lend a more natural flow to the code overall and nothing will be jarring or out of place.
  2. An equally good argument can be made for a style that is distinct from HTML tag names or CSS clauses, if it will differentiate IDs and classes in a way that aids readability. For example, if you used UpperCamelCase for IDs and classes, and didn’t use it for any other construct or purpose, you would know you had hit on one every time you saw a token in that format. One restriction this might impose is that it would be most effective if every ID or class were a 2+ word name, but that’s reasonable in many cases.

In writing this answer out I came to find that I’m much more inclined toward the second choice, but I will leave both because I think both cases have merit.

CSS Kata “The Lord of the Rings”

After the last kata I’d really had it with CSS-ing “true semantic” markup. So this time I went all out: bend the markup backwards as far as it’d go. And though the html-gods may strike me down, they won’t do so before I’ve created this monster:

The Lord of the Rings - Comparison

I’ve spent 1 minute on a background to match the “feel” of the original poster, and a good 2 hours fiddling on the markup and CSS. The result felt moderately pleasing. I only felt like doing the easy bits, so I left the things I didn’t instantly know a solution to for what they were (the 3D effect on the letters, choosing a better font, etc.).

For reference, here is the monster we’re talking about:

So many spans, my eyes! Here’s the corresponding CSS:

See it in action on JSBIN.

This concludes my self-imposed challenge of CSS katas. Even though these katas (or the fact that I insisted on publicizing them) were probably not “lightweight” enough, the basic principle of doing katas was enjoyable. Perhaps I should secretly start another series…

All roads lead to Excel, even those from SQL

My first employer provided me with some valuable insight:

Microsoft Excel is the main competition for any piece of software.

Over the years this statement has proven true an alarming number of times. And it makes sense too. Everyone knows how to use Excel, and it’s extremely flexible, especially when you’re working with any kind of (tabular) data. In other words: all roads lead to Excel.

Roads from SQL also often lead to Excel, even though they’re not always pretty. Sure, if you’re on foot or horseback, with a limited amount of luggage, the road will be fine. However, here’s a particular scenario that obscures the path.

Scenario

These are the basic constraints:

  • Available tools: MSSQL 2012, SSRS, SSIS, Visual Studio 2012 & .NET 4.5.
  • Excel versions: either XLS (2003 and below) or XLSX (2007 and up, slightly preferred) will do.
  • Form of data: combination of normalized and denormalized data (see below).
  • Amount of data: at most 250,000 rows (times 20 when unpivoted).
  • Required response time: live exports that should run within seconds.
  • Databases: many instances each with the exact same schema but different data.

So there’s access to the latest and greatest Microsoft tools, and the option to include custom components. Free and open source components are preferred, but buying tools and components is also an option.

Data

Here’s a simplified version of how the data is modeled:

  • Person is a “flat” table, containing some columns that have “fixed” personal details.
  • Property and Value allow for custom (normalized) Person fields.

Here’s a visual of this simplified model:

Database model for Person, Property, and Value

 

You can view the SqlFiddle with some sample data. A typical query to get the data that’s going to be our starting point:
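The exact query is in the SqlFiddle; a sketch of the kind of query meant here (the PersonValue link table and the precise column names are assumptions based on the model above) would be:

    SELECT per.Id, per.FirstName, per.Surname,
           prop.Id, prop.Name AS PropName,
           val.Id, val.Name AS ValName,
           pv.CustomValue
    FROM Person per
    JOIN PersonValue pv ON pv.PersonId = per.Id
    JOIN [Value] val ON val.Id = pv.ValueId
    JOIN Property prop ON prop.Id = val.PropertyId
    ORDER BY per.Id, prop.Id, val.Id;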

This will give output similar to this:

Id  FirstName  Surname  Id  PropName   Id  ValName      CustomValue
1   John       Doe      1   Trait      2   Bold         NULL
1   John       Doe      1   Trait      3   Easygoing    NULL
1   John       Doe      2   Eye color  4   Green        NULL
1   John       Doe      3   Pet name   7   Placeholder  Fluffybunny
2   Mark       Roe      1   Trait      3   Easygoing    NULL
3   Mary       Fisch    2   Eye color  6   Other…       Red-brown-ish
3   Mary       Fisch    3   Pet name   7   Placeholder  Ravi

Note that in reality I’m dealing with these figures:

  • About 30 columns on the Person table;
  • About 20 different Properties with about 6 possible Values on average;
  • Anywhere between 100 and 250,000 Persons;
  • Usually between 0 and 2 Values per Person per Property;

For one, this means that the normal output of the mentioned query contains a lot of redundant information (i.e. the 30-ish Person columns).

Target Output

The business requirement here when moving this data to Excel should be obvious: the data should be pivoted. Each “Property” should become a group of columns, with one column per “Value”. A table says more than a thousand words; this is the requested output:

                         Trait            Eye Color             Pet name
Id  FirstName  Surname   Bold  Easygoing  Green  Other…         Placeholder
1   John       Doe       x     x          x                     Fluffybunny
2   Mark       Roe             x
3   Mary       Fisch                             Red-brown-ish  Ravi

Something along these lines is what the business users would like to see.

Bonus Objectives

Getting the target output in itself is a challenge. I’m not done yet though; here are some bonus objectives (with MoSCoW indications):

  • Properties and Values both have ordering, the order of columns Should respect that.
  • Any solution Should allow for some styling (fonts, borders, backgrounds). It’d be Nice to have further control, for example enabling a theme, alternate row coloring, etc.
  • I Would like to have a place for metadata (date exported, etc) somewhere in the generated file.
  • Localization of the column headers (where applicable) would be Nice to have.
  • It’d be Nice to be able to reuse much of the solution in generating XML files instead of Excel sheets.
  • Any solution Must be solid and maintainable.
  • Any solution Must run on moderate hardware without hogging resources.

Current Solution

Right now, the above is accomplished using Reporting Services. This works decently well for datasets containing no more than a few thousand Person rows with related Property Values.

However, beyond about 3000 records performance quickly starts to degrade. This isn’t entirely unexpected, because Reporting Services isn’t really meant for this task (it’s much better at showing aggregates than at exporting large volumes of data).

Possible Solutions

There are many possible solutions. I’m currently considering a subset of them (some solutions merely for “benchmark” purposes):

  • SSIS packages. The tool seems meant for the job. I do hold a grudge against the tool, but maybe it’s time to get over that.
  • Dynamic SQL + generated RDLs. Use DynSQL to do the pivoting. This requires generated RDL files because the fields of a query must be known up front to SSRS.
  • Dynamic SQL + OPENROWSET + OleDB. Use DynSQL to do the pivoting, and export it straight to Excel using OleDB.
  • FOR XML queries into OpenXML. The basic idea: fast FOR XML queries, possibly an XSLT transform, to generate OpenXML data and plug it into a basic XLSX.
  • ORM or ADO.NET into an OpenXML using an XmlWriter. Something along these lines.
  • BCP directly into Excel files. Haven’t looked into this yet, but it may be an option.
  • SQL CLR. Not sure how this would work (if at all), but there might be options here.

Now it’s time for me to try some of them, see which one fits the requirements best.