TomatoVPN and OpenVPN for Secure Internet Access


I use a combination of TomatoVPN and OpenVPN to connect to the Internet while using unsecured connections at places like coffee shops.  Even when connecting to a network secured by a password, you are still vulnerable to others sharing that connection who know the password.

This combination of technology creates a secure tunnel through the Internet using a shared secret that is never communicated over the connection.  TomatoVPN and OpenVPN are both free, and the routers that run the software tend to be cheap.  I found the Linksys router I installed Tomato on became more stable, as well as gaining the additional features.  This technology provides a great way of using your secure home connection while away from home, in a way that would have been hideously expensive only a few years ago.

When traffic is directed over the tunnel it appears to the outside world that you are communicating from your home location, meaning you can access resources that might be geo-fenced in your current location.  You can check where the Internet thinks you are by using a service like WhoAmI.

The main downside of using a VPN tunnel for all of your Internet traffic is that your download bandwidth is constrained by the upload bandwidth of your home connection.

There are plenty of options for making VPN connections, and I will not go into detail here.

So coming to the point of this post… I did find it a little difficult getting my client OpenVPN file set up just right for my TomatoVPN and Windows 7 configuration.  I’ve copied a slightly altered version of my “.ovpn” file below.  It differs a little from what I have seen elsewhere, but I now see traffic going over the VPN 100% of the time, rather than intermittently appearing to do so while actually using the Internet directly.


# Use the following to have your client computer send all
# traffic through your router (remote gateway)
# client
port 1195
dev tap
secret "C:/Users/Public/Documents/Keys/(my.static).key"
proto udp
# This is default for many routers
redirect-gateway def1
route-method exe
route-delay 2

Edit 1:


This article of mine does seem to get quite a few hits… I’ve come to need to set this up again and found the following blog article useful:

When I get time I will document this again.  If you find any good tutorial articles in the meantime please do post them in the comments :0)

Book: The Visual Display of Quantitative Information – Edward R. Tufte


This article is a few notes from the excellent book “The Visual Display of Quantitative Information” by Edward Tufte.  Flicking through the pages casually, it may seem like quite a dry text on an equally arid subject, but devoting proper time to reading it I have found it enlightening and very interesting.  Though the text concentrates on statistics and their representation in print, there are metaphors and ideas that I believe can be transferred into user interface design for applications and games.

The format of this post follows the sequence in the book and includes quotes from each part together with my own notes.  At the end of the article I identify how some principles have already been applied to applications and how they might be applied to games and tools in the future.

Part 1 – Graphical Practice

Strong opening that is illustrated throughout the book.  Quoting & paraphrasing the principles of graphical excellence ([1] p.13,51), graphics should:

  • Show with clear purpose: description, exploration, tabulation, or decoration.
  • Reveal data by inducing the viewer to think about the substance.
  • Not obscure the data through the technology and methods used to display it.
  • Be truthful about the data and not misleading in how it is presented.
  • Present many numbers in a small space so it can be reviewed at a glance.
  • Encourage the eye to compare the displayed data.
  • Make large data sets coherent.
  • Reveal the data at several levels of detail, allowing the consumer to see the broad picture through to the fine detail.
  • Be closely integrated with the context it was designed for, kept nearby with statistical and verbal descriptions of the data.
  • Use simple methods to display simple data; do not waste real estate drawing graphs if a table would communicate the data in less space.
  • Be a well-designed presentation of interesting data: a harmony of substance and design.
  • Communicate complex ideas clearly, concisely and precisely.
  • Give the viewer the greatest number of ideas in the shortest time, using the least ink in the smallest space.

Geographical maps have historically been able to show a great deal of information in a small area.  Overlaying information onto maps can reveal patterns to the eye that might not be readily recognisable from the large tables of numbers used to create the overlay.

Shades of grey from black to white can show a range of numbers very succinctly without having to decode a legend.  Shades of colour and transitions between colours are harder to decode and can pose accessibility issues.

Graphics are important to convey the “shape of the data” [2] to enable contrast and pick out patterns.  The eye tends to be drawn towards line intersections, slopes, plateaus and clusters of information.

Often time series (snapshots of something over time) require contextual information to be useful; the graphic alone is unlikely to be enough, so good labels and/or legends are important.  They are useful in before-and-after studies (perhaps do and undo).

Some powerful graphics combine time and space with art and statistics – search for images of “A complete year long life cycle of the Japanese beetle” to see different interpretations of [3].  Tufte claims a graphic drawn by Charles Joseph Minard (1781-1870) of the fate of Napoleon’s army in Russia is perhaps the best statistical graphic ever drawn [4].  It combines several layers of data, including a map, temperature, the number of people left in the army and the passage of time.  It cleverly shows the route taken by the men and the toll of the cold climate at various points in the journey.  One subtle colour is used, with much of the data conveyed via line thickness and clearly labelled points of change along that line.  It fulfils the criteria set out by Tufte summarised at the start of this part, so it is easy to see why Tufte thinks so highly of this graphic.  Indeed, at a glance the graphic draws your eye along the important details, and closer inspection yields greater detail.

The “graphical integrity” section covers the use of graphics over the years to deceive a viewer to support a particular agenda.  Eye-opening in many ways, the section does also cover how not to deceive a viewer.  These boil down to:

  • Once a unit of size has been decided in a graphic, keep it consistent.
  • Do not add more dimensions to the graphic than there are in the data.
  • Keep the measure along the horizontal axis consistent.
  • Label accurately.
  • Keep the “Lie Factor” ([1] pg.57) close to 1.  Lie Factor = size of effect shown in graphic ÷ size of effect in data.
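As a worked example with invented numbers (not from the book), a chart whose bars double in height to show a 25% rise in the data has a Lie Factor of 4:

```python
# Lie Factor = size of effect shown in graphic / size of effect in data.
# The numbers below are invented for illustration.

def lie_factor(effect_in_graphic: float, effect_in_data: float) -> float:
    return effect_in_graphic / effect_in_data

# Data grew from 100 to 125: an effect of (125 - 100) / 100 = 0.25.
# Bars grew from 10 mm to 20 mm: an effect of (20 - 10) / 10 = 1.0.
print(lie_factor((20 - 10) / 10, (125 - 100) / 100))  # prints 4.0
```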

Part 2 – Theory of Data Graphics

The concept of “data ink” is discussed throughout this chapter ([1] p.93).  Data ink is the non-erasable core of a graphic that displays the variation in the numbers represented.

  • The data-ink ratio is:
    • data ink ÷ total ink used to print the graphic;
    • the proportion of a graphic’s ink devoted to the non-redundant display of data-information;
    • 1.0 minus the proportion of the graphic that can be erased without loss of data-information.
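These three definitions are equivalent, which a quick sketch with invented ink “budgets” makes concrete:

```python
# Data-ink ratio sketch; the ink quantities are invented arbitrary units.

def data_ink_ratio(data_ink: float, total_ink: float) -> float:
    return data_ink / total_ink

# A graphic spending 25 units of ink on the data itself and 75 on
# grids, borders and decoration:
ratio = data_ink_ratio(25, 100)
print(ratio)        # prints 0.25
print(1.0 - ratio)  # prints 0.75 - the share erasable without data loss
```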

An important idea in this part of the book is that less is more.  Within reason, maximise the data-ink ratio by erasing non-data ink.  This could mean, for example, displaying only part of an axis on a graph to mark the data bounds, or, if the data is displayed on a grid, showing only the parts of the grid that directly add value.

Symmetry in data sometimes means that only half of the graphic need be displayed to convey the whole meaning of the data, as the eye wants to complete the symmetry ([1] p.97).

Redundancy in data can also be useful in situations where the data wraps around, for example a 24-hour clock or a globe ([1] p.98).

Unintentional optical art via moiré effects, creating a disconcerting illusion of movement, should be avoided.  Cross-hatching and fine grids should be avoided – grids can be minimised by drawing them only around the data variation being displayed ([1] p.120).

Complicated colour keys that require memorisation of vocabulary for a particular graphic can prohibit quick understanding of that graphic ([1] p.154).  Subtle use of colours can be used as a means of identification, but darker & lighter shading should be used for scaling values rather than colour blending.

Creating and maintaining a viewing architecture within a graph is important.  Good architecture should enable the eye to follow several different and uncluttered paths through the data ([1] p.159).  Within the framework the viewer should be able to spot patterns in the encoded data set and easily spot unusual parts of that data.

Experimentation and revision are the key to the concise depiction of information.  Trying different methods of expressing the points, axes, grid and tables can help.  Label important events in the data.  Use standardised units for things like money, where unit value is affected over time ([1] p.77).

Sometimes the same ink can be used to serve more than one graphical purpose.  For example, a blot on a map can convey an amount via shading rather than just being a binary indication of black or white ([1] p.139).  Too much complexity in multifunctional ink can lead to an encoding that can only be readily understood by its inventor, as graphic perception is very subjective in the first place ([1] p.56).  Some advocate mobilising the numbers being represented in the graphic themselves.  John Tukey, inventor of the stem-and-leaf plot, wrote “If we are going to make a mark, it may as well be a useful one.  The simplest – and most useful – meaningful mark is a digit” [5].

Data density is the number of entries being displayed divided by the area of the data graphic.  High data density is desirable but should not come at the expense of the legibility of the image.  Many graphics, particularly lower-density graphics, can be shrunk down, enabling comparison of many graphics in a small space.  This can be particularly useful when comparing time-series style images that are in essence snapshots of a data set over time.
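As a sketch with invented figures:

```python
# Data density = number of entries / area of the data graphic.
# The figures below are invented for illustration.

def data_density(entries: int, area_cm2: float) -> float:
    return entries / area_cm2

# A 10 cm x 8 cm graphic displaying 400 data entries:
print(data_density(400, 10 * 8))  # prints 5.0 (entries per square cm)
```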

An aesthetically pleasing graphic is one that follows a simple design that avoids distracting decoration and elegantly displays a complex data set – something that takes little effort to comprehend yet reveals something interesting about the data ([1] p.177).  Attractive statistical graphics:

  • Have a format and design appropriate for their purpose.
  • Use words, numbers and art together.
  • Reflect a balance, a proportion, a sense of relevant scale.
  • Are not cluttered with excessive detail.
  • Have a narrative quality and tell a story about the data.
  • Look professionally and carefully drawn.
  • Avoid content-free information displayed on the graphic.
  • May combine descriptive text, tables and graphics for complex data sets.
  • Are paragraphs about data and should be treated as such ([1] p.181) – arrange graphics in a narrative.

Use tables for simple sets of comparable values that the eye can quickly cross-reference.  The order in which values are presented in a table can also provide a narrative to the numbers.  Tables should be used in preference to pie charts, especially in a series, because pie charts can be quite difficult to compare ([1] p.178).

Super tables can be constructed by clustering rows together within meaningful groups.  Additional narrative can be provided by ordering rows within each cluster, and by ordering the clusters themselves.  This form of reference can be more powerful than many little bar charts representing the same data ([1] p.179).

Words and pictures belong together.  Leonardo da Vinci’s manuscripts integrate illustrations and tables directly into the text explaining his ideas ([1] p.182) [12].  Forcing the viewer to skip around the display looking for a figure or reference elsewhere spoils the flow of the narrative.  Words used in graphics should be data themselves or explain how to read the graphic, rather than what to read.

Accessible Complexity: The Friendly Data Graphic ([1] p.183)

  • Friendly: Abbreviations avoided; mysterious, elaborate encoding avoided.  Unfriendly: Abbreviations everywhere, requiring many trips away from the graphic to decode.
  • Friendly: Words run in the usual direction for the language.  Unfriendly: Words run vertically along the y axis or in other jarring ways.
  • Friendly: Labels explain the data, displayed horizontally.  Unfriendly: Graphic is cryptic, requiring repeated references elsewhere to decode.
  • Friendly: Legends are avoided; labels are used on the graphic itself.  Unfriendly: Heavily encoded using shading and colours, requiring repeated reference to the legend.
  • Friendly: Graphic attracts the viewer and provokes curiosity.  Unfriendly: Graphic is repellent and filled with unnecessary distractions (“chart-junk”).
  • Friendly: Colours are used sensitively for colour-impaired viewers; shading preferred (blue can be distinguished by most colour-deficient people).  Unfriendly: Colour used insensitively; red and green used for essential contrasts.
  • Friendly: Typeface is easy to read; letters are easily differentiated from each other.  Unfriendly: Typeface is difficult to read; letters and numbers can be confused.
  • Friendly: Type is upper and lower case, serif.  Unfriendly: Type is single case, sans serif.

Proportion and balance in the content of a graphic are important for creating attractive graphics.  Musical scores provide a good example of how the thickness of lines and the proportion of symbols can not only look attractive but also carry meaning.  A very lateral example can be found in Jarbas Agnelli’s “Birds on a Wire” [8].  Art deco designs display a similar reverence for the beauty that can be found in strong lines and curves of different strengths [9][10].

Graphics should tend toward the horizontal, greater in length than height.  The human eye is naturally practiced at picking detail out from a horizon, thus sometimes a floating horizontal line can be made clearer by shading down to the axis ([1] p.187).  Tukey wrote [6] “Perhaps the most general guidance we can offer is that smoothly-changing curves can stand being taller than wide, but a wiggly curve needs to be wider than tall”.  Rectangles with proportions between 1.4 and 1.8 (1.618 being the “Golden Section” [7]) tend to be aesthetically pleasing ([1] p.189).

When the nature of the data suggests the shape of the graphic, follow that suggestion.  Otherwise move toward horizontal graphics about 50% wider than tall ([1] p.190).

Applications In Software

The spirit of the book is not to follow rigid principles, but to revise and create something that can communicate the complex simply.  Often the realisation of this is something very desirable and a pleasure to use.

Windows 7 Start Menu

The Windows integrated search acts as an intelligent filter for a table of results and is consistent throughout the Explorer interface.

In the example shown to the right we can see that I have typed “snip”.  Based on the context in which I am typing, it has ordered the results found, starting with Programs and Control Panel, enabling me to launch the snipping tool by just pressing enter.  In an Explorer window this would filter files and folders depending on the view type.


Ringo says: I've got a hole in my pocket

How many games with inventory systems do you think might benefit from taking on this search paradigm?  Especially for games where you have very deep pockets.

Mass Effect was an awesome game, but I personally found the inventory management a chore.  A lot of space (“ink”) was devoted to prettiness rather than content, leading to a lot of navigation through the menus to get at what you want.  There were many button pushes just to turn your unwanted items into the futuristic magic goo (Omni-gel).

RPGs and adventure games have long used maps to show transitions to other levels via “Indy dots” tracing across distances to provide interest while a new section is being loaded.  Mass Effect used elevators to do the covering-up-loading thing, but it also allowed a role-player extra depth accessible via maps, encouraging players to explore to find hidden objects.  Without a doubt, maps can provide a lot of information that is easy for a viewer to understand and drill down into.  Twittervision gives tweets (single-line status updates broadcast via a service called Twitter) extra context by placing them in a speech bubble on a map.  Watching this map update in real time you can see trending conversations for a particular location and across the globe.  I expect we will see more innovative usage of maps as games utilising social networks and reality-augmenting games on smart phones develop.

The Wolfire Editor already employs techniques suggested by this book.  Their excellent article [11] explains what they have taken from Tufte and applied – I highly recommend taking a look.  This article put Tufte’s book on my own reading list.

Having worked on Fable II, I have seen first hand just how many variables and stats go into making a great AAA title.  Designers had to be able to tweak properties of creatures, items, equipment and augments.  Adjusting the price and availability of something creates a ripple effect throughout the entire game.  Make a creature too tough too early and the player might give up; make it too easy and there would be no challenge.  Better visualisation of this sea of numbers and their effect on one another would have made the designers’ lives easier.


I would be interested in comments on good examples of manipulating game data in an intuitive way – both in game and in tools.


Update – Still alive


More entries will materialise on this blog – it is not dead.  I have a couple of articles in draft, one about “soft skills” and another full of notes about Edward Tufte’s “The Visual Display of Quantitative Information”. 

Over the last few months this blog has taken a back seat to other priorities.  In my own time I have been learning WPF, brushing up on some mathematics and prototyping an arcade puzzle game idea in Xna.  These are all things that not only help me personally but have been positively contributing to my day-to-day work at the studio.  When time allows I write blog entries that tend to be an organised collection of my thoughts that hopefully add something useful to others as well as myself. 

Of course I have a life outside of development too; I love my wife, enjoy visiting new places, like taking pictures and have fun gaming with friends.  Perhaps I should write my thoughts about games I have played here, and some other less developer-centric subjects, to keep to a regular update cycle.  I do want to avoid posting just for the sake of it though – I do not think any subscriber would thank me for that.  I think of this post as a “ping” though and I hope you do too :0)

If there are any subjects readers out there would like me to tackle I will take suggestions here as comments on this post.  If I feel confident that I can write an entry based on a suggestion it will no doubt feature in the coming weeks.

Mid-week update


A few things have caught my attention over the past couple of weeks.  First off, congratulations on reaching 100 recorded podcasts.  Good choice for the centenary too – Daniel Cook’s “What is your Game Design Style” article.

“Part Time Gamers” is another good podcast, discussing games from a part-time gamer’s perspective.  The presenters work in the game industry but, like many of us, have many taxes on their spare time.  Decently produced with great conversation – they seem to be off to a good start and are looking for comments and participants.

A Life Well Wasted – another downloadable series about gamers and the industry – is so well done that it wouldn’t sound out of place on BBC Radio.  It is due another episode, but that has been delayed for about a week.  The site is also looking for contributions from people working in the industry to show the human side, kicking off with a photographic article about Eskil Steenberg, creator of the game “Love”.

I was not happy with the draft of the article I was planning to post last weekend, so I will revisit it over this coming weekend.  With any luck the sunshine will continue and I will be able to work on it in the garden. 🙂

A final note; has anyone played “Uno Rush”?  It is not Uno as you know it – it is a fast-paced cross between Snap and Uno with some actual skill to it.  Everyone sees each other’s cards so it can be played locally, which is lots of fun (although draining!)  Anyway, if you like the demo and end up getting it, perhaps I’ll see you online. 🙂

Industry featured my blog!


Many thanks to Ryan for recording one of my articles.  It is very flattering to have been approached, and amazing to be able to download a podcast with my own words read back to me on the subject of Agile Vocabulary.  Perhaps I should read new articles aloud to myself to make sure they are friendly to that format!

I have another article in draft that I started on the Bank Holiday Monday and will try to finish over the weekend.  If anyone has a subject they would like me to cover, suggest it and I will try to do so over the next few weeks.

Reasons to Celebrate


Lionhead Studios wins a BAFTA

Lionhead won best Action & Adventure Game at the BAFTAs!  I get to say I have a credit on a BAFTA winning game!  It makes me even more proud of the team and feel very fortunate to be among their number.

A few days after the awards I organised some of the team together to pose with the BAFTA.  Many thanks to Louise for letting us borrow it for the shoot!  Gave me a chance to use my Nikon D80 SLR for something pretty special.

2009-03-12 DSC_0008 bafta tools team

Someone kindly took a picture of me with it.  You know it’s kind of heavy (in a very good way!)

2009-03-12 DSC_0051 paul e bafta

Unofficial Lionhead Party

I organised a party for Lionhead family, friends and ex-pats back at the end of February.  The night went really well thanks to my wife doing a ton of leg work for it, the Lionhead band organising themselves, and a ton of attendees contributing kit for the evening.  I felt it was important to have an event to thank the friends and family who supported all of us during the latter part of the development of Fable II.  Although they didn’t work directly on the game, their contribution was invaluable.

Everyone was encouraged to bring some food in an American “Pot Luck” style buffet.  Traditional American, Spanish, Greek and English food (including a massive stack of Jaffa Cakes) ended up on a big table.  We had a bar, and a projector shining against a good-sized wall for Rock Band and Lips.  It felt like a fun family get-together rather than just a work party.  Someone else can organise the next one though! 🙂

2009-02-28 DSC_0051 rockband test

The highlight of the evening was without a doubt the Lionhead Band and their great set on stage.  It was their second gig – they were called in at the last moment to provide entertainment for the Christmas party.  Luckily they were already practicing for the unofficial party, and I hear that first performance was great (I was visiting the States at the time of the Christmas party).


2009-02-28 DSC_0200 lionhead band

I’ll end this post with a link to a YouTube recording of the guys rocking out. 😀

Agile Development – RPS Estimation




This estimation technique, based on the simple game of rock, paper, scissors (RPS), is something we have already experimented with to estimate story points for a task.  A story point is a measure of the complexity and size of a task (see Agile – Vocabulary), though it could be used to estimate time too.

Rock Paper Scissor Estimation… For the win?

Planning Poker

First of all, planning poker seems to be far more widely documented than rock paper scissors estimation, and there are tons of resources about it already on the web.  If planning poker is not familiar to you, I would definitely recommend reading up a little about it.  Feel free to comment if you have come across another discussion of rock paper scissors used for estimation in this way.

In planning poker, each player involved in estimation is given cards, each bearing a Fibonacci number representing how many story points a task is worth.  Some decks instead use powers of two – probably more familiar to many software developers.  Cards are played face down, and on reveal the highest and lowest estimators discuss their estimates with the other players.  Play continues until the values converge.  The strength of rock paper scissors is that no props are necessary!
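A single reveal round can be sketched as follows; the player names and estimates are invented for illustration:

```python
# Sketch of one planning-poker reveal: find the highest and lowest
# estimators, who then explain their reasoning to the group.
# Player names and numbers are invented.

def reveal(estimates: dict) -> tuple:
    highest = max(estimates, key=estimates.get)
    lowest = min(estimates, key=estimates.get)
    return highest, lowest

def converged(estimates: dict) -> bool:
    return len(set(estimates.values())) == 1

round_one = {"Alice": 3, "Bob": 13, "Carol": 5}
high, low = reveal(round_one)
print(high, "and", low, "explain their estimates")

# After discussion, everyone estimates again; play stops on convergence.
round_two = {"Alice": 5, "Bob": 5, "Carol": 5}
print(converged(round_two))  # prints True
```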


The rock paper scissors method has roots in the Delphi and Wideband Delphi methods of estimation, but is less formal.


Make sure the people you are playing with have the same idea of what a unit story point is.  It is best if it is a completed simple task you are all familiar with; otherwise you have to start with an agreed duration of time for a task (e.g. half a day).

Nominate one person to marshal the session.  This might be the scrum master (if you are practicing scrum), but the role should switch between teammates.  Rotating the role allows members of the team to grow in confidence talking with their co-workers, encourages joint ownership of the estimation process and helps the team to jell.

A list of tasks should be given to the marshal to familiarise him or herself with.  The marshal then arranges the meeting.


Personally I prefer to play with the binary numeral system because it is really easy to remember, but you may prefer the Fibonacci numbers, which is fine.  Adjust the scale for larger problems or use both hands.  A large number of story points indicates that the task being estimated should be broken down into more manageable chunks.

  • 0 fingers: less than 1 story point
  • 1 finger: 1 story point
  • 2 fingers: 2 story points
  • 3 fingers: 4 story points
  • 4 fingers: 8 story points
  • 5 fingers: 12 story points
  1. The marshal introduces a task to the group.  A brief further definition of the task might be necessary, but the marshal should not allow too much detail to be discussed at this point.  It may sway initial estimates, especially if the perceived senior authority on the subject starts talking about complexity and size too deeply.
  2. The marshal counts to three, at which point everyone presents their hand to the rest of the group at the same time.
  3. The person with the largest estimate explains to the group what made them guess the highest.  This helps to draw out unknown facets of the task.
  4. The person with the lowest estimate explains to the group what made them guess the lowest.  This helps to draw out false assumptions, or perhaps something helpful that could simplify working out the task.
  5. The marshal calls for another round of estimation (back to step 2).  The process repeats until the estimates converge; once they do, return to step 1 and repeat for each task.
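Putting the finger scale and the steps together, one round might be sketched like this (player names are invented, and 0 fingers is treated as half a point):

```python
# Map fingers shown to story points, using the binary scale listed above.
FINGERS_TO_POINTS = {0: 0.5, 1: 1, 2: 2, 3: 4, 4: 8, 5: 12}

def estimate_round(hands: dict) -> tuple:
    """hands maps each player to the fingers they held up on 'three'."""
    points = {player: FINGERS_TO_POINTS[f] for player, f in hands.items()}
    highest = max(points, key=points.get)   # explains what made it big
    lowest = min(points, key=points.get)    # explains false assumptions
    return points, highest, lowest

points, highest, lowest = estimate_round({"Ann": 2, "Ben": 4, "Cat": 2})
print(points)            # {'Ann': 2, 'Ben': 8, 'Cat': 2}
print(highest, lowest)   # Ben Ann
```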

The scale of tasks can be an issue with this form of estimation using the binary numeral system: “is Task X really twice as hard as Task Y?”  Greater accuracy of scale can be achieved by using different combinations of fingers (thumb and index finger could mean “3”, for example).  That has its own problems, because some combinations of fingers are actually pretty hard to carry off for people with less flexible hands.

The strength of this method is that it does not necessarily even need a formal meeting or marshal.  A group of a few people can easily estimate using this method in a corridor, at a desk, etc.


Agile Development – Vocabulary



This is the start of a series of articles about agile development.  The motivation behind writing these articles is to expand my knowledge of the topic by explaining how I currently understand it.  I imagine feedback from readers about their own experiences and understanding of the topics discussed could help my own comprehension of the material.

This entry focuses on some basic vocabulary used in the project management side of agile practices.   The vocabulary presented here concentrates on requirement gathering, describing deliverables and estimation and will be used in later articles in this series.

My Experiences

The game team I am currently on uses scrum (though purists would call it “scrum but”), which is a common methodology adopted for managing time on an agile project.  I have myself used “agile” style development practices like unit testing and continuous integration (using NUnit, CruiseControl.NET and Trac) in my past life outside of the games industry.  I have made sure to make time to read about agile practices for years.

For most of the week beginning 8 February 2009 I took part in a lab called Agile Development in C#, and I had a glimpse of techniques used in teams around the mother-ship.  Spending the week building a project, managing the backlog and taking turns at being the scrum master and team lead was a very educational experience.  “Living it” with guidance in this way was like a crucible of learning (there was an element of competition too).  It was very rewarding yet quite draining!

My current lead developer attended the same course – thus I am more confident about “buy-in” to some agile practices at that level because of the extra common ground it gave us.  I have already had the opportunity to apply some development techniques discussed in that course to the main code base I work with day to day.

Vocabulary: User Stories

User stories are requirements written from the perspective of the customer.  A collection of user stories makes up the “product backlog” and can be prioritised by how important each is to the user.  This allows the developer to concentrate on things that will be immediately useful to the customer when a vertical slice of the product is delivered at the end of the “sprint”.

A user story is an expression of what the customer wants and why, rather than any technical detail of how.

A good example might be: “I would like my lawn to be short so it is pleasant to lie back and read in the garden.”

A poor example might be: “Mow lawn to 2 cm using a rotary petrol mower.”

The user stories concentrate on requirement gathering and intent.  These stories help communicate how the development team understands the requirements.  They also allow quick feedback from the user on any good or bad assumptions made.

Vocabulary: Story Points

Story points were introduced as a way of quantifying an estimate of a user story.  Whereas estimates are usually expressed as a duration, story points are based on an estimation of size and complexity.

For example, a gardener might want to estimate the size of the lawn, see how overgrown it is and how much edge work would be necessary to keep it neat before attempting to estimate a time.  Depending on weather conditions, it may even take slightly longer on some days than others.

The customer expects a regular fee and a regular period of time spent in the garden, rather than taking into account difficulties from one week to the next.  One week the garden may not have grown very much and the whole job will be quicker to complete.  Perhaps the gardener might take the time to add nice little extra touches to the garden in that easier week, when there is ample time left over.

So a story point is not directly translatable to time because of the various unknowns.  The number of story points attached to something tracks how much effort will be required – something with more story points is harder to complete.

Vocabulary: Tasks

Tasks are derived from breaking a user story down into small chunks of functionality.  These tasks are estimated in hours (or story points).  The estimated time remaining is updated daily during the scrum.

A task is only counted as "done" when it is verified.  A consistent definition of "done" is important so work can be signed off for the sprint.  Quality gates can be used to measure whether something is actually complete – this could include passing tests and automated code-quality tools.

Once all the tasks for a user story have been completed, functionally tested and verified, the story can be taken off the product backlog.

For example, tasks for a large garden could be (in the form Task (verification) – duration):

  • Mow lawn (lawn appears visibly shorter.  Regular straight lines can be seen in the grass) – 2 hours
  • Tidy garden and rake lawn (no debris can be seen on the lawn or path, grass cuttings bagged and loaded) – 1 hour
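This structure – tasks with their own verification criteria, and a story that is only "done" when every task passes its gate – can be sketched in a few lines of code.  This is a minimal illustration using hypothetical class and method names, not any real tracking tool:

```python
# Sketch of a user story broken into tasks, each with a quality gate.
# All names here (Task, UserStory, mark_verified, ...) are hypothetical.

class Task:
    def __init__(self, description, quality_gate, hours_remaining):
        self.description = description
        self.quality_gate = quality_gate      # how "done" is verified
        self.hours_remaining = hours_remaining
        self.verified = False

    def update_estimate(self, hours_remaining):
        """Re-estimated daily during the scrum."""
        self.hours_remaining = hours_remaining

    def mark_verified(self):
        """A task only counts as done once its quality gate passes."""
        self.hours_remaining = 0
        self.verified = True


class UserStory:
    def __init__(self, story, tasks):
        self.story = story
        self.tasks = tasks

    def is_done(self):
        # The story leaves the backlog only when every task is verified.
        return all(task.verified for task in self.tasks)


mow = Task("Mow lawn",
           "Lawn visibly shorter; regular straight lines in the grass", 2)
tidy = Task("Tidy garden and rake lawn",
            "No debris on lawn or path; cuttings bagged and loaded", 1)
story = UserStory("I would like my lawn to be short so it is pleasant "
                  "to lie back and read in the garden.", [mow, tidy])

mow.mark_verified()
print(story.is_done())   # False - tidying is not yet verified
tidy.mark_verified()
print(story.is_done())   # True - the story can leave the backlog
```

The point of the sketch is that "done" is a property of the verification, not of the developer's opinion: a story cannot slip off the backlog while any of its tasks still has an unmet quality gate.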

Vocabulary: The Daily Scrum

The Daily Scrum is a short morning stand-up meeting including only the people doing the work itself.  The scrum master role can be taken up by any member of the team.  The scrum master asks three questions:

  1. What did you do?
  2. What will you do today?
  3. What is blocking you?

The estimates on the current tasks are updated, and the scrum master sets about unblocking the team wherever they are blocked.  Blockages can range from something technical to waiting on another team to complete a task.

Continuing the gardening example – if the gardeners notice the lawn mower blades have become blunt and are slowing the team down, it is up to the scrum master to communicate with the tools team and ask them to sharpen the blades.

Vocabulary: Sprints

A selection of user stories is chosen for the sprint.  Stories can be added to or dropped from a sprint, but the deadline for a deliverable does not change.  If the deliverable, demonstrable version of the software is completed ahead of time and the team feels confident, they can take another story from the backlog – breaking it down first, where possible, if it is too big.

The idea of a sprint is to always have something to show for the work everyone has done at a regular interval, to allow the users to feed back.  At the point a sprint begins, the requirements of the stories being worked on are locked.  A sprint can be completely aborted, or a story dropped… but the requirements of an existing story should not change.  The motivation is that the team is not aiming at a moving target and can concentrate on getting the unit of work done.
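The rules above – stories may be added or dropped mid-sprint, but an in-flight story's requirements are locked – can be captured in a small sketch.  The class and method names here are hypothetical, chosen only to illustrate the policy:

```python
# Sketch of sprint scoping rules: scope can flex, requirements cannot.
# Sprint, add_story, drop_story, change_requirement are hypothetical names.

class Sprint:
    def __init__(self, stories):
        # stories: mapping of story name -> requirement text
        self.stories = dict(stories)

    def add_story(self, name, requirement):
        # Taking another story from the backlog is allowed...
        self.stories[name] = requirement

    def drop_story(self, name):
        # ...as is dropping a story entirely.
        self.stories.pop(name, None)

    def change_requirement(self, name, new_requirement):
        # ...but rewording an in-flight story is not: no moving targets.
        raise ValueError("Requirements are locked for the sprint; "
                         "drop the story or wait for the next sprint.")


sprint = Sprint({"front lawn": "Lawn short, with straight lines"})
sprint.add_story("hedge", "Hedge trimmed level")   # scope can grow
sprint.drop_story("hedge")                         # or shrink
try:
    sprint.change_requirement("front lawn", "Lawn cut in circles")
except ValueError as e:
    print("rejected:", e)   # the change waits for the next sprint
```

A customer who decides mid-sprint that they want circles rather than lines is not refused – the change simply goes to the backlog and waits for the next sprint to begin.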

Ideally each sprint delivers a vertical slice to show progress.  In the running gardening example, this might be completing the simpler front garden entirely, to prove the gardeners have understood the requirements well enough to maintain the garden to the customer's expectations.  It also gives the customer an opportunity to give feedback or change their mind: "actually, I want my lawn to have circles rather than lines".

Vocabulary Cheat Sheet

Daily Scrum – Short stand-up meeting.  What did you do?  What will you do today?  What is blocking you?
Product backlog – Prioritised list of user stories.
Quality gate – Checklist of things to verify work against before it is counted as complete.
Scrum master – Leads the daily scrum.  Tracks work completed and remaining, and organises removal of things blocking the team.  Ideally a role rotated between members of the team.
Sprint – A fixed period of time in which chosen user stories are worked on.  External influences are not allowed to change the requirements of the stories in progress, but stories can be postponed or additional ones taken from the backlog.
Story points – Unit of estimation measuring size and complexity.
Task – A user story can be broken down into one or more tasks.  Tasks are estimated daily in hours (or story points) remaining by the developer working on them.
User stories – User requirements expressed in sentences from the customer's perspective.
Vertical slice – A feature that works from start to finish but may be limited in scope.  For example, a rope bridge crossing a chasm is immediately useful and allows people to cross; having it in place can help to build a better bridge later.


Some of my references are company confidential and cannot be shared here – the web is full of useful information though.  The sites below are useful launch pads to further reading.

EDIT: Thought of a few more references:

Please comment with any useful links you have stashed away 🙂

Podcasts – a great way to absorb extra info


Podcasts are audio recordings, usually in MP3 format, that often specialise in a particular niche.  If you commute, work out to music, or are at a loose end during a long build, you might consider listening to one of these shows.

I subscribe to quite a few podcasts via Google Reader (my RSS client of choice) – though the term podcast itself has iPod heritage, so iTunes is also often used as a delivery mechanism.

I have listed a few of my favourite podcasts below.  Please comment and recommend any podcasts you listen to that I might be missing out on. 🙂

Game & Software Development

Industry Broadcast – An excellent idea; the best game developer blog articles read out for your convenience.
Platform Biased – Behind-the-scenes info from Redmond.
Hanselminutes – Technological discussion, usually focusing on .NET related technologies.

Career & Management
Career Tools – Solid advice about how to improve your career prospects.
Manager Tools – Management tips in plain language.  Even if you are not a manager it can make you think about what you are like as an employee to manage 😉
Manager Tools (Basics) – I've been listening to these guys for years and they refer to concepts introduced in older shows… these are the foundation podcasts.

Music
The C64 Take Away – Great remixes and SID tunes from the C64.
The Roadhouse – Signed and unsigned blues artists.
The Raven and the Blues – Signed and unsigned blues and blues-inspired artists.

Xbox 360
Major Nelson – Xbox 360 celebrity who often interviews developers and reports sales figures, amongst other things.
OXM Podcast – A pretty good 360 podcast from some of the guys behind the American edition of Official Xbox Magazine.

Book review: Practices of an Agile Developer


A couple of weeks ago I was in Southampton and the Borders there was having a sale on all computer books… so I picked up "Practices of an Agile Developer" by Venkat Subramaniam and Andy Hunt.  Before I discuss the book I am going to go into a little background.

One of the annual commitments I made back in October 2008 for 2009 was to learn about agile practices… partially motivated by already being enrolled in a course I'm taking next week.  I have been interested in these practices for quite some time, though, so this book is not my first exposure to agile thinking.  Before I left Ds Ltd for Lionhead Studios just over two years ago I had already co-authored an internal standards and practices document that encouraged various agile practices.

During university and my first year as a full-time developer I was far more interested in learning C++ tricks and a then-newfangled language called C# that management were, for whatever reason, excited about ;).  It is very easy as a developer to neglect the soft skills when there are cool new programming things to learn.

At university we were taught things like the Waterfall model, PRINCE and ISO 9000 accreditation – and whereas these things are important, they do create a lot of paperwork.  ISO 9000, for example, can fantastically document a horrific failure.  Things like "requests for change" that had to be signed off by more than one person are there to chain a customer to their original requirements.  Gantt charts created in MS Project made pretty diagrams that ended up being works of fiction because of unexpected bumps along the way.  It felt like setting up ways to assign blame for what goes wrong rather than concentrating on working together for a solution.  You get requirements, develop the software for a while, then hand it back: contract negotiation over customer collaboration.  Knowledge of these management principles is important so that you can make informed choices and have informed discussions later on – but you will find every company has its own twist on the above anyway (the twist at university was that everything was so literal and to the letter).

Other soft skills taught at university, like UML, data flow diagramming, CRC cards and database modelling, are immediately useful in the real world for a graduate developer.  Sometimes these things are taught in a way that divorces them from the code too much: design everything first, code everything using the design later.  This again isn't the best way in practice (though probably a good way to create an assignment marking scheme).  While it is important to design up front, going into too much depth lends itself to over-engineering – for example, classes created where properties would do, patterns adding needless complexity to a solution, another layer of abstraction to solve a problem, "is-a" being taken too literally when applied to inheritance in class designs, and so on.

So what better way to stop yourself wasting time and creating software that doesn’t please your users (or even get finished) than to read about other people’s experience first and not make the same mistakes?

“Practices of an Agile Developer” is a decent book that is all about how to be a positive influence on those around you no matter the position you hold.  It holds extra value for managers, producers and those that are in a position to lead from the top of course – but there are many things you can do on a personal level suggested in the book.  Committing to some of the things it suggests has the potential to make you more effective, and perhaps your work day more pleasant.

The book itself is quite short, but that is reflected in the price and concise style.  Each chapter concentrates on a specific area of agility skills and is broken into sections describing individual techniques.  Each technique is documented with a clear structure: identify a good and a bad practice (represented by an angel and a devil next to a box-out), then a personal experience from the authors, and to finish off, a clear bulleted list of things to do.  The solutions at the end are particularly interesting in that they are always clear that taking the suggested practice too far can often be as harmful as not doing it at all.  Each technique also has a "what it feels like" heading which describes the agility pay-off when it has been applied in a balanced way.

The authors' balanced, non-fanatical standpoint is much easier to digest than other books I have come across.  Ruby is mentioned a few times in a very rosy light, but considering the authors have close ties to the language you can let that pass.  The book does often refer to other books in the Pragmatic Programmers series, which almost seems like advertising – though by not repeating other books it does keep a tight focus on communication and personal practice.

The first book in the Pragmatic Programmers series I bought was "Ship it!: A Practical Guide to Successful Software Projects", back in 2005 – which concentrates more on the practical detail of setting up a good production environment and development pipeline than on team and customer relationships.  It is another insightful book full of practical examples of how to get things done – I have not read it front-to-back for a while, but I may review it in the future.

"Practices of an Agile Developer" is a pragmatic look at practices that have been proven to work in the field.  It is a management book written for developers by developers.  Primarily I would recommend it to students who aren't quite getting the connection between modelling, code, and time management.  For anyone a bit more experienced, it is a good introduction to agile practices that isn't terribly preachy – and does not smell of PowerPoint.


“Practices of an Agile Developer”

by Venkat Subramaniam and Andy Hunt


“Ship it!: A Practical Guide to Successful Software Projects”

by Jared Richardson and William Gwaltney



P.S. Anyone have any requests for a subject you would like me to write about?  Anything sensible that doesn’t break my NDA will be considered 🙂
