CSS Kata “Malcolm X”

Last post I set up five related CSS katas. The challenge is to recreate a movie poster with just CSS and minimalistic HTML. The idea is not to come up with an ultimate, production-ready solution. The idea is to practice, and to think outside the box.

Turns out it takes quite some effort to set aside perfectionism while trying these katas. Turns out it takes even more effort to publicly post my imperfect attempts. Nonetheless, here goes!

I’ve tried two approaches in reproducing the “Malcolm X” movie poster. Here’s a comparison:

Malcolm X Kata Attempts Overview

What I noticed immediately while practicing: it does in fact challenge you to think about some basic meta stuff. This particular kata had me stumped with one very basic question: “Is the big ‘X’ semantic, or not?”. That is: is it a drawing shaped like an ‘X’, or is it an actual letter ‘X’? So first thing I considered was changing the actual markup to something like this:
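Reconstructed (the exact snippet may have differed slightly), that markup would be along these lines:

```html
<div class="poster">
  <h1>Malcolm X</h1>
  X
</div>
```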

Notice the extra “X” as the main div content.

It’s an option, but it’s a bit too much for me. It would be as if the poster said “Malcolm X, X”. However, you could say that “:after” the h1 tag comes a big “X” as additional graphical content. This led to Version A:
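The gist of Version A, sketched with illustrative values (selectors, sizes, and transforms are placeholders, not the originals):

```css
/* Version A sketch: the big "X" as generated content. */
h1:after {
  content: "X";
  display: block;
  font-family: "Courier New", monospace; /* closest widely available face */
  font-size: 300px;
  line-height: 1;
  transform: scale(1.4, 1); /* condense/stretch toward the poster's density */
}
```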

There are several things I notice about this:

  • The big X has to be in a certain font. The only widely available font that comes anywhere near the original “X” is “Courier New”, and even that still looks quite different.
  • It’s pretty tough to control the width of the lines that make up the X. I’m not at all happy about how the top of the “M” isn’t aligned with the top of the big “X”.
  • Turns out the “Malcolm X” in the original poster is quite dense. I resorted to CSS transforms, but I think there are also CSS modules for font stretching (I just haven’t looked into those yet).
  • It’s extremely tough to make everything scale nicely. That is, all the various dimensions, font-sizes, paddings, etc. are in absolute pixels. Changing one would mean tweaking all others. I just decided that’s okay for now, allowing me to focus on the other parts of this kata.
  • The screenshot was made in Chrome. In other browsers it’s not right at all. I just chose not to focus on browser differences for now.

My other attempt trades semantics for more pixel-perfectness, or at least explores whether that’s a viable path. Here’s the code for Version B:
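The shape of Version B, again with illustrative values rather than the originals: the “X” is drawn with two rotated bars.

```css
/* Version B sketch: two rotated bars forming the "X". */
.poster {
  position: relative;
}
.bar {
  position: absolute;
  top: 0;
  left: 50%;
  width: 40px;
  height: 400px;
  background: #000;
}
.bar.left  { transform: rotate(-20deg); }
.bar.right { transform: rotate(20deg); }
```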

Here are my notes:

  • It looks a lot more like the original than Version A does. Far from perfect, but this direction looks promising.
  • This version is equally hard-coded: exact dimensions everywhere, all interdependent.
  • There are some peculiarities about the rotation transforms I haven’t quite grasped yet. For example, the two diagonal bars don’t both start equally far from the outer borders.
  • After 15 years of CSS, z-indexes still make my head hurt. I need to practice those separately some time.

All in all I’m quite pleased. Both my solutions look like shit, but hey: they made me practice! I’ll try to charge up, and may even publicize my epic failures for the second kata as well.

CSS Katas

In one of the books I read recently I came across Dave Thomas‘ excellent idea of “Code Katas“. These are in essence nothing more than a way to train your programming skills: small exercises with no real consequence, allowing you to focus on the programming instead of the result.

This idea really stuck with me, even though I don’t do them as often as I’d like. I’ve honed my TDD skills with the first 20 Project Euler problems, and sometimes answering a Stack Overflow question may qualify. However, I don’t quite “exercise regularly”.

Regardless of whether it will get me exercising again, I thought it’d be fun to come up with a set of katas myself. So here’s a basic challenge for you (= me?): recreate the titles on these movie posters in CSS3. Just the texts of course.

Before we get to the five posters, let’s first lay down the ground rules:

  • CSS3, so modern browsers (IE10+, latest Chrome, Firefox, Safari). Great if it works in one of them, bonus points if it looks alike across browsers.
  • Fonts are a tricky bit. Google Fonts is okay I guess, but bonus points for sticking with Arial, Verdana, Georgia, Times New Roman, etc.
  • Images are out of the question of course.

In addition, or rather to extend on these bullets: there are no real “hard and fast” rules. It’s not a competition (even though I personally love treating it as such); it’s an exercise.


1. Malcolm X

The biopic of the controversial and influential Black Nationalist leader.

Malcolm X movie poster

Here’s the suggested markup to style:
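Something as minimal as this would do (illustrative; tweak as you see fit):

```html
<h1>Malcolm <span>X</span></h1>
```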

2. Citizen Kane

Following the death of a publishing tycoon, news reporters scramble to discover the meaning of his final utterance.

Citizen Kane movie poster

Here’s the suggested markup to style:
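A minimal sketch, for illustration:

```html
<h1>Citizen Kane</h1>
```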

3. Face/Off

“Famous actors shoot at each other.” Or something. Fun title for this exercise though!

Face Off movie poster

Here’s the suggested markup to style:
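Perhaps something along these lines; the slash probably deserves its own element for styling:

```html
<h1>Face<span>/</span>Off</h1>
```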

4. Fight Club

“An insomniac office worker looking for a way to change his life crosses paths with a devil-may-care soap maker and they form an underground fight club that evolves into something much, much more…”

Fight Club movie poster

Here’s the suggested markup to style:
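A minimal starting point (illustrative):

```html
<h1>Fight Club</h1>
```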

5. The Two Towers

Second movie in the Lord of the Rings trilogy.

The Lord of the Rings: The Two Towers

Here’s the suggested markup to style:
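One possible take, given there’s a title and a series name to fit in (illustrative):

```html
<hgroup>
  <h2>The Lord of the Rings</h2>
  <h1>The Two Towers</h1>
</hgroup>
```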

However, I can imagine different “correct” kinds of markup for that data, feel free to tweak accordingly.

All roads lead to Excel, even those from SQL

My first employer provided me with some valuable insight:

Microsoft Excel is the main competition for any piece of software.

Over the years this statement has proven true an alarming number of times. And it makes sense too. Everyone knows how to use Excel, and it’s extremely flexible, especially when you’re working with any kind of (tabular) data. In other words: all roads lead to Excel.

Roads from SQL also often lead to Excel, even though they’re not always pretty. Sure, if you’re on foot or horseback, with a limited amount of luggage, the road will be fine. However, here’s a particular scenario that obscures the path.


These are the basic constraints:

  • Available tools: MSSQL 2012, SSRS, SSIS, Visual Studio 2012 & .NET 4.5.
  • Excel versions: either XLS (2003 and below) or XLSX (2007 and up, slightly preferred) will do.
  • Form of data: combination of normalized and denormalized data (see below).
  • Amount of data: tops 250,000 rows (times 20 when unpivoted).
  • Required response time: live exports that should run within seconds.
  • Databases: many instances each with the exact same schema but different data.

So there’s access to the latest and greatest Microsoft tools, and the option to include custom components. Free and open source components are preferred, but buying tools and components is also an option.


Here’s a simplified version of how the data is modeled:

  • Person is a “flat” table, containing some columns that have “fixed” personal details.
  • Property and Value allow for custom (normalized) Person fields.

Here’s a visual of this simplified model:

Database model for Person, Property, and Value
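Sketched as T-SQL DDL, the simplified model could look like this (names are illustrative and the tables heavily trimmed; the real Person table has ~30 columns):

```sql
CREATE TABLE Person (
  Id        INT PRIMARY KEY,
  FirstName NVARCHAR(100),
  Surname   NVARCHAR(100)
  -- plus many more "fixed" personal-detail columns
);

CREATE TABLE Property (
  Id   INT PRIMARY KEY,
  Name NVARCHAR(100)
);

CREATE TABLE [Value] (
  Id         INT PRIMARY KEY,
  PropertyId INT REFERENCES Property(Id),
  Name       NVARCHAR(100)
);

-- Link table tying Persons to Values, with an optional free-text override.
CREATE TABLE PersonValue (
  PersonId    INT REFERENCES Person(Id),
  ValueId     INT REFERENCES [Value](Id),
  CustomValue NVARCHAR(255) NULL
);
```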


You can view the SqlFiddle with some sample data. A typical query to get the data that’s going to be our starting point:
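For illustration, a query along these lines (assuming a link table between Person and Value; your actual names may differ):

```sql
SELECT p.Id, p.FirstName, p.Surname,
       pr.Id, pr.Name AS PropName,
       v.Id,  v.Name  AS ValName,
       pv.CustomValue
FROM Person p
JOIN PersonValue pv ON pv.PersonId = p.Id
JOIN [Value] v      ON v.Id = pv.ValueId
JOIN Property pr    ON pr.Id = v.PropertyId
ORDER BY p.Id, pr.Id, v.Id;
```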

This will give output similar to this:

Id  FirstName  Surname  Id  PropName   Id  ValName      CustomValue
1   John       Doe      1   Trait      2   Bold         NULL
1   John       Doe      1   Trait      3   Easygoing    NULL
1   John       Doe      2   Eye color  4   Green        NULL
1   John       Doe      3   Pet name   7   Placeholder  Fluffybunny
2   Mark       Roe      1   Trait      3   Easygoing    NULL
3   Mary       Fisch    2   Eye color  6   Other…       Red-brown-ish
3   Mary       Fisch    3   Pet name   7   Placeholder  Ravi

Note that in reality I’m dealing with these figures:

  • About 30 columns on the Person table;
  • About 20 different Properties with about 6 possible Values on average;
  • Anywhere between 100 and 250,000 Persons;
  • Usually between 0 and 2 Values per Person per Property;

For one, this means that the normal output of the aforementioned query has a lot of redundant information (i.e. the 30-ish Person columns).

Target Output

The business requirement here when moving this data to Excel should be obvious: the data should be pivoted. Each “Property” should become a group of columns, with one column per “Value”. A table says more than a thousand words; this is the requested output:

                         Trait            Eye color              Pet name
Id  FirstName  Surname   Bold  Easygoing  Green  Other…          Placeholder
1   John       Doe       x     x          x                      Fluffybunny
2   Mark       Roe             x
3   Mary       Fisch                             Red-brown-ish   Ravi

Something along these lines is what the business users would like to see.

Bonus Objectives

Getting the target output is a challenge in itself. I’m not done yet though; here are some bonus objectives (with MoSCoW indications):

  • Properties and Values both have ordering, the order of columns Should respect that.
  • Any solution Should allow for some styling (fonts, borders, backgrounds). It’d be Nice to have further control, for example enabling a theme, alternate row coloring, etc.
  • I Would like to have a place for metadata (date exported, etc) somewhere in the generated file.
  • Localization of the column headers (where applicable) would be Nice to have.
  • It’d be Nice to be able to reuse much of the solution in generating XML files instead of Excel sheets.
  • Any solution Must be solid and maintainable.
  • Any solution Must run on moderate hardware without hogging resources.

Current Solution

Right now, the above is accomplished using Reporting Services. This works decently well for datasets containing no more than a few thousand Person rows with related Property Values.

However, beyond about 3000 records performance quickly starts to degrade. This isn’t entirely unexpected, because Reporting Services isn’t really meant for this task (it’s much better at showing aggregates than at exporting large volumes of data).

Possible Solutions

There are many possible solutions. I’m currently considering a subset of them (some solutions merely for “benchmark” purposes):

  • SSIS packages. The tool seems meant for the job. I do hold a grudge against the tool, but maybe it’s time to get over that.
  • Dynamic SQL + generated RDLs. Use DynSQL to do the pivoting. This requires generated RDL files, because the fields of a query must be known up front to SSRS.
  • Dynamic SQL + OPENROWSET + OleDB. Use DynSQL to do the pivoting, and export it straight to Excel using OleDB.
  • FOR XML queries into OpenXML. The basic idea: fast FOR XML queries, possibly an XSLT, generating OpenXML data, and plug it in a basic XLSX.
  • ORM or ADO.NET into an OpenXML using an XmlWriter. Something along these lines.
  • BCP directly into Excel files. Haven’t looked into this yet, but it may be an option.
  • SQL CLR. Not sure how this would work (if at all), but there might be options here.

Now it’s time for me to try some of them, see which one fits the requirements best.

70-480 (HTML5 with JavaScript and CSS3) Study Plan

Looking for my exam summary? Jump straight to it! Want to know more? Keep reading…

Last year I published my study plan for the 70-513 Microsoft exam (“study log” would’ve been more appropriate), which helped me pass the exam (without cheating by using brain dumps) with just under 200 hours of study time. This time I’m going for the 70-480 exam (html5, JavaScript, and css3), and I’ve decided to use an actual plan. When I started rolling up this plan, there were a few premises.

First up: the date for my exam is already set. I will probably not be able to spend a full 200 hours on the subject matter before the exam date. The plan will have to scale with however much time I turn out to have available. Anyways, given that the exam was free of charge, it’s not as big a deal if I don’t pass it.

Secondly, the subject matter for 70-480 is not entirely new to me, as was the case with the 70-513 (WCF) exam. So my plan should be okay even if it omits more general prose about the subjects and sticks to the details of the objectives. I’m not sure how this will affect my chances. I’m hoping my existing knowledge makes things easier, but at the same time I’m afraid that it won’t…

Finally, and most importantly, I figured that reviewing the exam objectives very carefully should be enough. Because the exam is brand new, there is no official study guide, nor are there any books specifically aimed at the exam. I was hoping to piggyback on someone else’s summary, but since I couldn’t find a decent one, I decided to create my own.

The Plan

So here are the steps I’m taking to study for this exam:

  1. Create an overview of all exam objectives.
  2. Find links for all objectives (at least one, preferably two or more per objective), from various sources:
    1. The relevant spec.
    2. The W3 wiki, if applicable.
    3. MDN (my personal favorite source for most web development topics).
    4. MSDN (it’s a Microsoft exam, after all).
    5. jQuery (Microsoft’s betting hard on this library, it seems).
    6. Miscellaneous other sources (DiveIntoHtml5, Stack Overflow, etc)
  3. Review all objectives by reading through the linked pages.

So far I’m pretty happy with the result. If you want to piggyback on my 70-480 summary: be my guest! It looks something like this:

70-480 Study Guide

Creating the list and digging up all the links took a few nights’ work, all done over the course of the previous two weeks. Next up: two weeks of reviewing these objectives, followed by the exam itself.

Wish me luck!


Exactly one month ago I wrote about Google Code hosting. At the time I wasn’t ready to divulge the project I was using to test it, but today’s different. Today, I have decided to put BattleTop in Beta!

BattleTop is a responsive single-page web application to assist in keeping track of things like characters, initiative, hit points, conditions, et cetera, during D20-based tabletop RPG sessions.

BattleTop Beta Logo

You can view it, clone it, and send pull requests for it at Google Code. Mind you, it is still in beta, meaning there are many rough edges and bugs to be found. I have been and will be dogfooding it during our own tabletop RPG sessions, so I’ll be sharing your frustrations about bugs, and time permitting I will be fixing things.

The current list of features:

  • Track characters. Add and remove monsters, NPCs, PCs, and environment initiatives. You can also reset the list to the party, or to a blank, new list.
  • Track initiative. Keep initiative and initiative modifiers, sort the list by initiative, keep track of ready and delay actions.
  • Track conditions. Each character has its own list of conditions, which you can add, remove, or change, each with a number of turns attached (so they wear off automatically).
  • Track hit points. Each character optionally has a number of hit points. You can deal damage or apply healing to change the hit point amount.
  • Save/Load. Using the LocalStorage API, BattleTop saves your state every 5 seconds. If you navigate away, or re-open the page at your next play session, the old state will still be there.

Note: because BattleTop extensively uses many modern features (HTML5 semantic markup, CSS3 features, modern JS APIs such as localStorage), only modern browsers are supported.
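For illustration, the save/load mechanism boils down to something like this. It's a sketch with made-up names, not the actual BattleTop code; the in-memory fallback only exists to keep the sketch runnable outside a browser.

```javascript
// Use localStorage where available, otherwise a tiny in-memory stand-in.
var storage = (typeof localStorage !== "undefined")
  ? localStorage
  : (function () {
      var data = {};
      return {
        setItem: function (k, v) { data[k] = String(v); },
        getItem: function (k) { return (k in data) ? data[k] : null; }
      };
    })();

// Serialize the whole app state under a single key.
function saveState(state) {
  storage.setItem("battletop-state", JSON.stringify(state));
}

// Restore it (or null when nothing was saved yet).
function loadState() {
  var raw = storage.getItem("battletop-state");
  return raw === null ? null : JSON.parse(raw);
}

// Persist every 5 seconds, as described above:
// setInterval(function () { saveState(currentState); }, 5000);

saveState({ round: 3, characters: [{ name: "Ravi", hp: 12 }] });
console.log(loadState().characters[0].name); // "Ravi"
```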

Here’s a general view of what it currently looks like:


BattleTop 0.9.0 Setup Mode


BattleTop 0.9.0 Update Hit Points

The horrifying state of free Android initiative-keeping apps was what triggered me to create BattleTop. I decided on an HTML5 app as opposed to a native app because (a) it would be easier to create something in a short amount of time, and (b) it keeps things portable across devices and operating systems with little effort. Hopefully BattleTop will help or inspire others as well.

Google Code Hg Hosting

A few months back I wrote a short comparison of Hg Hosting Providers, followed by a post about CodePlex Hg Hosting. In the meantime I’ve also tested creating a Bitbucket project, but I’ve neglected to write a post about the experience. I may get back to that some time, but first I’d like to share my more recent experience with Google Code.

Note that I’m not ready (read: “proud enough”) to share the actual project I’m hosting, as it’s still in (and may never leave) prototype status. Once it’s worth viewing, I’ll probably be doing a short blog post about the project itself.

Without further ado, let’s jump in!

Sign Up and Project Creation

In this case I’m not writing much about sign up. You need a Google account, which most folks will probably already have. If, like me, you are already signed in to your account, nothing more is needed here.

On to project creation, which was surprisingly simple. The process is one step total:

Creating a Google Code Project

Very similar to the CodePlex project creation. To note the differences:

  • It’s not entirely clear in Google Code how the project name would become part of the project URL.
  • Source control at Google Code (understandably) excludes TFS.
  • Google Code asks you to label (tag?) your project at creation time.
  • Most importantly: Google Code requires you to pick a license at project creation time.

That last one is the most significant difference from CodePlex, and a big one at that! With CodePlex you get a one-month “Setup Mode”, which means your project stays private while you work out the details (including picking an appropriate license). This is a big plus for CodePlex, especially if (like me at the time) you’re creating your first open source project.

Importing the Existing Repository

Where I needed some cautionary steps to import my existing repository the previous time around, this time there was only one, making a total of four steps to get my existing repo pushed to Google Code:

  1. Create a temporary “copy-paste-backup” of my (existing) repo folder.
  2. Clone the Google Code (empty) repo locally.
  3. Copy the existing repo into the empty repo.
  4. Push the repo.

Step one turned out to be superfluous. So, this is in fact easy as pie.

Google Code Features Overview

Here’s my first impression of the various features of a Google Code project:

  • Project home is what you see when you land on the URL of your project (which is in the format: http://code.google.com/p/project-name)
  • Downloads is where you can host releases you’ve built and packaged. Haven’t tried this feature yet, but it looks very straightforward.
  • Wiki feels a little odd, because you start at a table-based listing of all pages. Beyond that it’s just a basic wiki (with yet another syntax), that’ll do the job.
  • Issues are for tracking bugs and tasks. Haven’t tried this yet, but it looks both straightforward and barebones.
  • Source provides a way to browse the files, view a history of the changes, and check on any clones there may be. Not as fancy as, say, GitHub, but it gets the job done.
  • Administer allows you to change just about everything you see in your project, and works quite okay.

The theme should be clear: everything in Google Code is no-nonsense. All important features for code hosting are there though.


This hosting provider is pushing the KISS principle to the max. Google Code gets the job done, but it is very barebones, at times even downright ugly. Great hosting, but it doesn’t leave me “WOWed”. With the same functionality but a more pleasant experience, I think I would prefer CodePlex.

Pivot vs Unpivot

When mnemonics and learning by heart fail: write a blog post! So here is my own personal “which is which” reference post about the Pivot and Unpivot operations. First things first:


Example of the pivot and unpivot operation


Using the great Scipio Africanus and Hannibal as puppets, the above example shows the two operations in their most basic form. The pivot operation would be along these lines in T-SQL:
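For illustration, a basic PIVOT in that spirit (table and column names are made up for the example):

```sql
SELECT General, [Trebia], [Cannae], [Zama]
FROM (
  SELECT General, Battle, Outcome
  FROM Battles
) AS src
PIVOT (
  MAX(Outcome)
  FOR Battle IN ([Trebia], [Cannae], [Zama])
) AS pvt;
```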

The query above is also available as a SQL Fiddle, to toy around with it some more, as there are many more complex queries possible with these operations.


Sassy Styles

In my previous post I ranted about the way the design community seems to violate the DRY principle. Let’s revisit the code (and Repeat the code, I know, I know):
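The snippet, paraphrased (the exact selector and values don’t matter here):

```css
h1 {
  font-size: 0.666em; /* 16px / 24px */
}
```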

What’s going on here? Well, to get to our beloved Em measurements, we apparently need a calculation based on our body’s font-size (24px) and the h1 target font-size (16px). This “would make future adjustments much, much easier”.

It’s not that I dislike this really, but more that I despise (having to do) this. We’re saying “1.5em” here, only we’re doing it twice.

Last week I tried one of the solutions to this: SASS. And let me say, it feels like it could be love at first sight! With SASS, the above snippet quickly transforms into the following:
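Roughly (paraphrasing, same illustrative values):

```scss
h1 {
  font-size: (16px / 24px) * 1em;
}
```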

Much better, no? We’ve now only stated once what the font-size should be: a certain fraction × 1em. I’m a little bit disappointed about needing the “* 1em” there, but hey: it’s a great reason to ask another Stack Overflow question.

Anyways, SASS doesn’t stop here. It will add more improvements, one of particular importance to the above snippet. Consider this:
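Something like this (paraphrasing, variable names are my own):

```scss
$body-font-size: 24px;
$h1-font-size: 16px;

h1 {
  font-size: ($h1-font-size / $body-font-size) * 1em;
}
```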

What’s up with the additional lines of code? Isn’t that extra code bloat? Well no, those lines help us achieve two very important goals:

  1. Our calculation is now much more meaningful, and will now truly “make future adjustments much, much easier”.
  2. We can reuse those variables in our style sheets. In the somewhat contrived example above it doesn’t really shine, but you can surely imagine this is a great benefit to the entire style sheet.

For my current pet project I’ve tried SASS in a feature branch, but I’ve already closed that branch: it was merged into the main branch after only a few hours. With this I’m indeed implying there’s a very friendly learning curve for SASS.

And yes: I’m also implying that you should try it for yourself! There are many more nice features I haven’t even mentioned yet. And if I haven’t convinced you, perhaps the two-page tutorial will!

Measurements in Responsive Design

My wife called me out for looking at CSS through a pair of Programmer’s Glasses™. She hastily added this could well be a Good Thing, and I suppose I’ll just interpret it as a compliment. In fact, I must say I agree, feeling more like a “developer” than a “designer”.

Let’s first look at how we ended up at this name-calling. Here’s a code snippet from Ethan Marcotte‘s book Responsive Web Design from the A Book Apart series:
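It was along these lines (paraphrased from memory; the exact values aren’t the point):

```css
h1 {
  font-size: 0.666em; /* 16px / 24px */
}
```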

He then goes on to state that:

I usually put the math behind my measurements in a comment to the right-hand side of the line, which makes future adjustments much, much easier for me to make.

At first glance, this makes sense. However, in the long run, this feels really weird to me: it smells like code duplication. The actual result and the calculation in the comment both express the “what“, just in different form. It reminded me of Clean Code, where a whole chapter is dedicated to comments, and I’d think this would fall under the Bad Comments section (“Noisy Comment”, perhaps?).

Now I’m thinking I must be wrong: everyone who is anyone in responsive design writes this type of sample code. Maybe it’s because I should read this kind of code like:
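That is, as two forms of the same statement (my paraphrase):

```css
/* Form 1: the valid CSS */
font-size: 0.666em;

/* Form 2: the calculation behind it */
font-size: 16px / 24px;
```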

Both forms describe what the font-size should be. Form 2 is probably “best” from a Clean Code point of view (as it’s most descriptive), but unfortunately only form 1 is valid (plain) CSS. As a compromise both forms are kept.

So, what are the options for improving things? There’s at least a few I can currently see:

  1. Combine “Form 1 and 2”. Accept that you’ll need discipline to keep the measurement and comment in sync. This is what the Responsive Design community leaders seem to practice.
  2. Just use “Form 1”. You’ll lose (or never have) the benefit of understanding your measurement.
  3. Use a CSS pre-processor. There are Sass and LESS, the most well-known ways to introduce (among others) calculations in stylesheets.
  4. CSS3 modules. The CSS3 Calculations module introduces calculations, and the Variables module may even take this one step further (as far as preventing code duplication is concerned).

To be honest, this list is currently my reverse order of preference. I’d love for option 4 (the CSS3 modules) to become a success. Until then, I’m bound to investigate the CSS pre-processors, because option 1 and 2 are both crappy, in my opinion.

MCTS 70-513 (WCF) Study Plan (part 2 of 2)

This is part 2 in a series on the MCTS 70-513 exam. The first part was an introduction; this second part describes the study plan I followed to pass the exam.

There were two questions I wanted answered when I set out to study for the exam:

  1. How much time is reasonably needed to pass the exam?
  2. What materials are recommended and/or needed to pass the exam?

The first question was impossible to answer or to find an answer to. The second one was a bit easier, even though I had to filter out the many suggestions to use braindumps. Here are my answers to both.


Let’s kick off with an overview of the materials used, and provide an answer to question one:

Material                                                         Hours spent
MSDN Getting Started With WCF Tutorial                                     2
Book: Programming WCF Services, 3rd Edition                               35
Book: Windows Communication Foundation 4 Step by Step                     47
Book: Microsoft .NET Framework 3.5 WCF Self-paced Training Kit            34
Stack Overflow: reading and answering questions on [WCF]                  30
Memorizing web.config’s system.serviceModel section                        6
Reviewing all 70-513 exam objectives                                      36
Grand Total                                                              190

If you’re going to create a study plan for yourself based on this, you should note the following:

  • These are pure study hours; breaks and such are not included.
  • This is what worked for me; no warranty given or implied!
  • I spent these hours mostly in the order above.
  • Before these hours I had zero experience with WCF, a wee bit experience with ASMX services, and quite a bit of experience with ASP.NET.

The great thing about these just-under-200 hours: this is approximately what I estimated up front I’d need to get a passing grade.

Materials Used

Here are some miscellaneous details about the abovementioned materials.

Programming WCF Services, 3rd Edition

This book by Juval Löwy is widely considered the most in-depth book about WCF. It explicitly focuses on the things the author deems important, leaving out (or just barely mentioning) the esoteric or “useless” bits of WCF. When studying for something I prefer to get in over my head, and slowly fill in the gaps. If you prefer to build up your knowledge steadily instead of doing a deep-dive: save this book for last.

Windows Communication Foundation 4, Step by Step

This book was probably the worst but most complete tour you could get of WCF. It takes you along most of the features with practical exercises, but is horrible at explaining things (even misinforming at times). The worst part about this tour is that you have to follow the tour guide to the letter; otherwise you’ll get lost without any clue how to get back on track. I’m glad I read the other book first, which allowed me to understand what this tour guide was showing me.

WCF 3.5 Self-paced Training Kit

There is no .NET 4 version of the WCF training kit (makes you wonder how invested Microsoft is in this technology, huh?). Instead, most fora recommend just grabbing the 3.5 version. This book is very decent: topics are chopped into nice small chunks, offering a mix of theory, exercises, and training questions. It also complemented the book by Juval Löwy very well.

Stack Overflow

Answering questions on the various Stack Exchange sites is one of my current hobbies. It made sense to practice and test my WCF knowledge by answering questions, and I can highly recommend it as a form of study! It’s in the spirit of the slogan: “If you want to get good at something, start teaching it!“ Words of wisdom.

Memorizing <system.serviceModel>

Okay, maybe not one of the brightest ideas I had. It did act as a general review of WCF, but in the end it didn’t feel worth it. I suggest you spend your time on something else.
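For the curious, the section nests roughly like this (a minimal, illustrative configuration; all service and contract names are made up):

```xml
<system.serviceModel>
  <services>
    <service name="MyNamespace.MyService">
      <endpoint address=""
                binding="basicHttpBinding"
                contract="MyNamespace.IMyService" />
    </service>
  </services>
</system.serviceModel>
```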

Reviewing the Exam Objectives

This was crucial to passing the exam. I printed all the objectives and went through them letter by letter, making sure I knew exactly what each and every item was about. I reviewed every topic by at least looking it up in one of the books I had, and often enough I also went on to read more about it on MSDN. This exercise was useful not in the least because some objectives on the list weren’t even mentioned in any of the books.

In Conclusion

There’s a lot about the 70-513 exam that understandably lures folks to using those brain dumps. The most important things to mention:

  • WCF is a vast topic, boring in many aspects, given that you’ll only ever use a small subset of it.
  • Questions on the exam are horrible, with things like “In which namespace does class X reside?“.
  • The exam itself tests factual knowledge and not in the slightest your practical proficiency.

Yet still I’m glad I finished and passed without resorting to “cheating”. The above study plan is what did the trick for me. Maybe it’ll help someone else too.