The Yes, The No and the Painful: using, and failing to use, estimates for a no-go decision

@AgileKateOneal recently asked for examples of effective estimate use in medium/long-term planning.

Example 1: Back-of-the-envelope Go/No-Go Decisions

Making a no-go decision sprang instantly to mind. Many such decisions are casual and quickly forgotten: a back-of-the-envelope calculation shows that an idea is well beyond what we can afford, and the conversation moves on. But that estimate may have saved you from months of wasted effort.

A NoEstimator might object that one could profitably try out something rather than nothing. That is sometimes true, but creative thinkers in commerce & IT can always generate a hundred more ideas than a team can try out. You can't try out everything.

Example 2: Small UK charity looking at CRM options

In November last year I worked with a small UK charity, www.redinternational.org, who were badly in need of some kind of CRM software to keep in touch with supporters and project partners. They were running largely on spreadsheets built from reports downloaded from virginmoneygiving.com, mydonate.bt.com and the like. They also had an Access database with a fair amount of donor and similar data in it.

Question: Is it better to pay for a CRM solution (typical charity starting price £10,000, rising easily to £100k) or to get someone to do enough work on the Access database to make it a usable solution?
My Answer: I first spent some time discovering and documenting their main use-cases (to clarify: their 'business' use cases, that is, the things the charity had to do whether manually or with IT). I gave that picture to the CRM providers so that they could give us a sensible proposal, and I worked out an estimate for extending and developing the Access database. Based on that, we could see that a CRM consultancy/solution looked like £10k-£20k (five-year cost) and the DIY option about 200-400 developer days.

Even at this level of accuracy, the estimate was good enough to show that DIY should be a no-go. I did not expect this. I thought that the charity's actual requirements were sufficiently small that we could do something useful for a few thousand pounds. But two hours spent going through their use-cases on my estimating spreadsheet showed me that I was wrong. So I recommended the best value CRM option.
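
For flavour, here is a minimal sketch of the kind of arithmetic that estimating spreadsheet did. The use-case names, day ranges and overhead factor below are invented for illustration; they are not the charity's actual figures.

// Illustrative only: use cases, day ranges and overhead are invented, not the charity's real figures.
import java.util.Map;

public class DiyEstimate {
    public static void main(String[] args) {
        // use case -> { optimistic days, pessimistic days }
        Map<String, int[]> useCases = Map.of(
                "Import donations from giving-site reports", new int[]{25, 50},
                "Manage supporter & partner contact details", new int[]{30, 60},
                "Gift Aid claim preparation", new int[]{40, 80},
                "Mailings, segmentation and reporting", new int[]{40, 75});

        int low = 0, high = 0;
        for (int[] range : useCases.values()) {
            low += range[0];
            high += range[1];
        }

        // Allow for what the per-use-case figures miss: testing, data migration, documentation, support.
        double overhead = 1.5;
        System.out.printf("DIY option: roughly %.0f to %.0f developer days%n",
                low * overhead, high * overhead);
    }
}

Even rough per-use-case ranges, summed and then inflated for the unglamorous work around the edges, were enough to put the DIY option in the hundreds of developer days.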

This, I think, is planning 101: a couple of hours working through the detail on paper is a lot cheaper than running the experiment, and can be enough to make a probably-good decision.

Example 3: Provide a system to automate a small team's manual processes for a capped price

This was for a financial services company in 2013. The team were working on PPI claims for an insolvency practitioner (obliged to pursue potential claims that might bring in some money for their clients' accounts) and had about ten thousand potential claims, with hundreds more being added each month. They had been working manually on spreadsheets for over a year.

I spent four days on analysis and listed a set of use-cases that covered the processes end-to-end, and I estimated that a suitable system could be built in about 40 days of development work. The estimate cost about three or four hours on top of the analysis.

The contract to provide the system was capped-price. The customer was not open to a no-estimates approach. And I accepted being bargained down to below my estimate (Doh! I hear you say. Quite so). The actual cost came out close to (but above) my original estimate, and the system could still have used another week's work to make it more user-friendly.

The better course would have been to use the estimate/budget mismatch to declare a no-go rather than accept a reduced budget. This might have resulted in the client agreeing to go ahead anyway (which might in turn have led to a no-estimates approach to the work). Or it might have led to no contract. Either way would have been less painful and more controlled than over-running the budget.

The Known Unknowns Matrix

I.T. is not the only industry to have happily latched onto the former US Secretary of Defense's famous phrase, "the unknown unknowns". It's a good phrase if you must plan or estimate anything, because planning & estimating always involve risk.

But we should really consider the full matrix. There are pitfalls in at least two of the quadrants:

Known knowns: things we know, and we know that we know them.

Unknown knowns: things we know but don't realise we know; tacit knowledge that we take for granted. These become a problem if we are responsible for communicating them to people who don't know, and fail to do so. They are also a problem when we start work in a new context and do not realise that what we 'know' is no longer valid there, so they become unknown unknowns.

Known unknowns: things we know that we don't know. We can record the risk, and estimate a cost for investigation & discovery.

Unknown unknowns: things we don't know that we don't know. This is the quadrant most likely to shipwreck plans.

Concerning the unknown unknowns, my experience of doing novel software is that when budgeting for development you should estimate for development, plus learning time, plus developing the things you learned about, plus solving problems you didn't know you'd have. A rule of thumb for novel systems might be: multiply your estimate by ten to cope with the unknowns. And/or have clear "abandon the project" criteria, even months into the project. Don't be a dupe of the sunk-cost fallacy.
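
To make that rule of thumb concrete with purely illustrative numbers: a feature you would estimate at 10 days if you already knew everything might turn into 10 days of building, plus 20 days of learning the domain and the technology, plus 30 days of building what that learning showed was actually needed, plus 40 days of digging out of problems you didn't know you would have; roughly the ×10 in total.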

Less dramatically, my takeaway from this is to use these quadrants when listing risks and assumptions. Just having a space for the possibility of unknown knowns & unknown unknowns can be an impetus to discuss, "risk-storm" and consult, to help your team discover the as-yet-unknowns.

P.S.

I've just read the brief and brilliant mcfunley.com/choose-boring-technology, which points out that you can't afford too much novelty. He suggests that for any new project you should grant yourself a limit of three novelty chips; when you've spent them, you get no more.

Well, not unless you really can overshoot your budget and timescales by over 1,000%.

Kudos

A SlideShare deck by Danni Mannes on Agile Architecture pointed out to me that all four quadrants are worth some of our time.

Coplien & Bjørnvig: Lean Architecture for Agile Software Development. A Review

Four years after this book came out, Agile Architecture has at last become a Thing. But as the nuance of its title hints, this book is not fad-driven. It is a carefully thought-out exposition of what architects can learn from lean and agile ideas, and what they can do better as a result.

Well. It's partly that. If you are a practising architect, it is actually four must-read books in one.

If you are not, you might dismiss this book for two reasons. The first is that, judging by other reviews and my own experience, the homespun style of the first half does not suit a bullet-pointed, gimme-the-headlines-now generation. The second is that if you have not experienced the pitfalls of architecting and doing software in a real organisation with actual people in it, then Coplien & Bjørnvig's pearls of wisdom may impress you much as the agile manifesto might impress a cattle rancher.

It is not a beginner's book. It is the mature distillation of the sweat-soaked notebooks of a fellow-traveller who has stumbled over rocky terrain, been through the tarpits, and has some hard-bloody-earned (and academically researched) wisdom to share as a result.

I said four must-reads in one. They are:

1) The (literally) decades of experience of a leading practitioner & thinker in the field.

2) A thought-through answer to the question: what can we learn from lean & agile? Whilst value-chains and some techniques feature, the authors' secret conviction is surely that Technology Is All About Human Beings. "Everybody, all together, from early on" is their Lean Secret. "Deferring interaction with stakeholders [users, the business, customers, domain experts, developers], or deferring decisions beyond the responsible moment slows progress, raises cost and increases frustration. A team acts like a team from the start."

3) Which leads to what was, for me as a more techy-focused reader, the marvel of the book. Clements et al in "Software Architecture in Practice" offered attribute-driven design, a partitioning of the system based on a priority ordering of, primarily, technical quality requirements. Coplien & Bjørnvig all but deduce a partitioning based on a priority ordering of people: Users first, Development team next.

"The end-users' mental model" is the refrain on which they start early and never stop hammering. From this they suggest that the first partitioning is What the System Is from What the System Does. I am tempted to paraphrase as, Domain Model from Use Cases. But their point is partly that this also neatly matches the primary partition by rate of change, because the last thing to change in a business is, the business of the business. If you're in retail, you might change what and how and where you sell, but your business is still Selling Stuff. And at lower levels inside the organisation too: an accounting department, though there has been five thousand years of technology change, still does accounting. The users know this. They understand their domain and if the fundamental form of your software system matches the end user mental model then it can survive–nay, enable!–change and stay fundamentally fit for purpose.

Second, don't fight Conway's law. "Organizations which design systems ... are constrained to produce designs which are copies of the communication structures of these organizations". The deduction for agile teams is, partition so as to maximise the long-term autonomy of your self-organising teams. Even when the team divisions are imposed for non-technical reasons–geography, politics, whatever–still allow that fact to trump more 'technical' considerations. This may not match your vision of technical perfection, but it will still be the best way. Ruth Malan recently paraphrased Conway's law as, "if the architecture of the system and the architecture of the organization are at odds, the architecture of the organization wins". Don't be a loser.

These points seem to me obvious in hindsight, yet they turn traditional approaches to high-level design on their head. After considering People, yes, we can consider rates of change, quality attributes and technology areas. But it's always People First.

4) And finally. This is the first book-sized exposition of DCI architecture, which I would describe as an architectural pattern for systems that have users. Having separated What the System Is from What the System Does, DCI provides the design pattern for how What the System Does (the Use Cases) marshals the elements of the What the System Is. There are at least three notable outcomes.

Firstly, use cases are mapped closely to specific pieces of code; in the best case each use case can be encapsulated in its own component.

Secondly, the relationship between Domain elements and Use Cases is expressed as "In Use Case X, Domain element Y plays the Role of Z". This brings significant clarity to both, and is part of the key to 'componentising' use cases; the Roles needed for a use case become, in the code, its public dependencies. In UML-speak, the Required Interfaces for such a UseCaseComponent are the roles needed to 'play' the use case, and those roles are Interfaces which are implemented by Classes in the Domain Model. Coupling is reduced, cohesion is gained, clarity is achieved. (There is a sketch of this shape below, after the third point.)

Thirdly, the mapping from business architecture to code becomes much simpler. Suddenly one can draw simple straight lines between corresponding elements of architecture and code.
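
To make the second and third points concrete, here is a minimal, Java-flavoured sketch of the shape. The names (TransferMoneyContext, SourceAccount and so on) are my own illustration, not the book's, and real DCI implementations typically inject Roles into domain objects more dynamically than plain interfaces allow.

// Roles: the vocabulary one use case needs; in UML-speak, the use-case component's Required Interfaces.
interface SourceAccount { void withdraw(int amount); }
interface SinkAccount { void deposit(int amount); }

// Domain model: What the System Is. It knows nothing about any particular use case.
class Account implements SourceAccount, SinkAccount {
    private int balance;

    Account(int openingBalance) { balance = openingBalance; }

    public void withdraw(int amount) { balance -= amount; }
    public void deposit(int amount) { balance += amount; }

    int balance() { return balance; }
}

// The use case as its own component: What the System Does.
// "In the Transfer Money use case, one Account plays the Role of SourceAccount and another plays SinkAccount."
class TransferMoneyContext {
    private final SourceAccount source;
    private final SinkAccount sink;

    TransferMoneyContext(SourceAccount source, SinkAccount sink) {
        this.source = source;
        this.sink = sink;
    }

    void execute(int amount) {
        source.withdraw(amount);
        sink.deposit(amount);
    }
}

The point is the dependency structure: the use-case component knows only Roles, and the domain classes know nothing of the use case.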

The authors say of their work, "This book is about a Lean approach to domain architecture that lays a foundation for agile software change". To my mind, this hits the agile architecture nail on the head. Agile software development always only ever succeeded at scale because the people doing it either knew, or had given to them, enough architecture to make it work. Software Agility, just like every other software -ility, must either be supported by the architecture or it ain't gonna happen.

But the best thing I got from this book was the proof, before my very eyes, that correct technical design flows from knowing how to put the human beings central.

Where to Buy

UK

ebay (UK): Lean Architecture for Agile Software Development
Amazon (UK): Lean Architecture for Agile Software Development

USA

Amazon (USA): Lean Architecture for Agile Software Development

Introducing: The Semantic Field. Or, The One Truly Correct Usage of Layered Architecture in the World

Bear with me if you already abandoned layered architecture long ago. You may be quite familiar with the thought that layered architectures often fail to apply the Dependency Inversion principle, and often thereby induce tight coupling of un-modular, un-testable layers.

I wish to do two things in this post. First, I propose that the notion "Semantic Field" better captures the one big idea that layered architecture nearly gets right. Second, I will discuss the One Truly Correct Usage of Layered Architecture In the World in order to show why it's the wrong choice for nearly all other usages.

Semantic Field

"Semantic Field" or "Semantic Domain" is a term from linguistics. Words are in the same semantic field if they are related to the same area of reality. (The word domain is pretty much what you'd call it as a DDDer). Orange is in the same semantic domain (let's call it the fruit domain) as Apple. But it's also with Red in the semantic domain of Colour, whereas Apple isn't. That's how natural language rolls.

Kent Beck used the term conceptual symmetry to explain why he didn't like this code snippet:

void process(){
    input();
    count++;
    output();
}

and wants to change count++; into tally();. Somehow the count++ doesn't seem to be on the same level as the method calls. Indeed it isn't. It's the same feeling you have when you see:

void applyToJoin(Customer customer){
    if(eligibilityRules.validate(customer)){
        membershipList.accept(customer);
        htmlBtnUpdate.setEnabled();
    }
}

that a method dealing in business rules and processes should not also know about html buttons. Semantic Field is the notion we want here. The clean code rule is "One Level of Abstraction per Function" and I propose to rename it as "One semantic field per method". In fact, one semantic domain per class, namespace, module, or ... layer.
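
For instance, here is one hedged sketch of how the applyToJoin example above might be split so that each method stays within a single semantic field. The class and interface names (MembershipService, JoinPage and the stubs) are stand-ins I have invented.

// Supporting types, stubbed so the sketch is self-contained.
class Customer { }
interface EligibilityRules { boolean validate(Customer customer); }
interface MembershipList { void accept(Customer customer); }
interface Button { void setEnabled(boolean enabled); }

// Business semantic field: rules, memberships, customers. No HTML buttons here.
class MembershipService {
    private final EligibilityRules eligibilityRules;
    private final MembershipList membershipList;

    MembershipService(EligibilityRules eligibilityRules, MembershipList membershipList) {
        this.eligibilityRules = eligibilityRules;
        this.membershipList = membershipList;
    }

    boolean applyToJoin(Customer customer) {
        if (eligibilityRules.validate(customer)) {
            membershipList.accept(customer);
            return true;
        }
        return false;
    }
}

// UI semantic field: buttons and screens. It reacts to the business outcome without knowing the rules.
class JoinPage {
    private final MembershipService membership;
    private final Button btnUpdate;

    JoinPage(MembershipService membership, Button btnUpdate) {
        this.membership = membership;
        this.btnUpdate = btnUpdate;
    }

    void onJoinClicked(Customer customer) {
        if (membership.applyToJoin(customer)) {
            btnUpdate.setEnabled(true);
        }
    }
}

Each class now draws its vocabulary from one semantic field, and the dependency runs from the UI to the business logic, not the other way round.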

This is what layering gets right: the idea that inside a given layer you work within a specific semantic domain, and don't use vocabulary from the semantic fields of the layers above or below you.

Where layering goes wrong is, well, the layering: the belief that all top-level dependencies in a system can be expressed in one dimension, top to bottom. They just can't. Squeezing your code into one dimension forces contortions that are utterly unhelpful. Strict layering adds a second failure mode: it makes you write pointless pass-through code, which ought to be deleted.

(Layering does get a second thing right: no cyclic dependencies. Code with mutual dependencies will try to morph into a ball-of-mud architecture. I'm sure this is half the reason why layered architecture became wildly popular. It was a vast improvement on ball-of-mud.)

The One Truly Correct Usage of Layered Architecture In The World

The other reason we were entranced by layered architecture for a decade was the ISO OSI 7 layer model for networking. It seemed so obviously, thoroughly, beautifully, correct.
[Figure: the OSI 7-layer architecture]

Each layer is clearly (well, it was clear up to about layer 5; after that it got a bit hazy for some of us) and cleanly independent of the other layers. Each layer is a different semantic domain. The bottom layer deals with physical connectors, with what voltage represents a 1 or a 0, and with how a byte sequence is encoded as an electrical waveform. The next layer deals with packets, complete with a destination and a source. The next layer deals in routes: how to get to this destination from the source. The next layer deals in messages: how to turn them into packets and back again. And so on.
And the layer-cake picture precisely models the dependencies between the layers. At least up to layer 5, each layer relies on and adds value to the layer beneath it.

It was beautiful. It made sense. It was what I wanted my software to look like. It was a siren, luring us all to shipwreck our software on the rock of a beautiful but evil vision of how it should always be.

Why a Layered Architecture is Nearly Always Wrong For Any Other Software System

The bit that isn't wrong

The part of the OSI model that is applicable to 99% of all known software is the separation into semantic fields. This is why we used to say that business logic shouldn't be in the UI layer; html buttons live in a different semantic domain to customers and invoices. (Except: it was the wrong way to put it. The presentation layer does reference business logic because in an interactive system usability is achieved by having the UI reflect the business logic; for instance by hiding options that are not valid for the current user).

The bit that fails miserably

The part of the OSI model that is applicable to very very few systems is the layering. In the OSI architecture the strict layering works because the language of each layer can be defined in terms of layers beneath it. Session, Frame, Bit are in separate semantic domains, but the model allows Frame to be defined in terms of Bit, Session in terms of Frame, and so on.

This is almost never the case in layered business software. The vocabulary of a UI cannot be defined in terms of the vocabulary of commerce and business administration, and the vocabulary of a business cannot be defined in terms of data entities. They just are separate domains. The fact that the second one sometimes works a little bit (you can define a customer, incorrectly, as rows in data tables) is what seduces you into thinking it should work. But it doesn't. You cannot define your business in terms of a data layer.

In particular then, a layered architecture with UI on top is always wrong; and business layer on top of data layer is always, but less obviously and more seductively, wrong. Hexagonal architecture (aka ports and adapters) is a much better model for most systems because it doesn't confine dependencies to a single dimension (in addition to the already well-known fact that it gets your dependencies pointing the right way).
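
By way of illustration, here is a minimal ports-and-adapters sketch; the names are invented for this example. The domain owns the port, and the data-side adapter implements it, so the dependency points into the domain rather than 'down' to a data layer.

// Domain semantic field: business vocabulary, plus the port it needs.
class Customer {
    final String id;
    Customer(String id) { this.id = id; }
}

interface CustomerRepository {      // a "port", owned and defined by the domain
    Customer findById(String id);
}

class InvoicingService {            // business logic depends only on the port
    private final CustomerRepository customers;

    InvoicingService(CustomerRepository customers) { this.customers = customers; }

    void invoice(String customerId) {
        Customer customer = customers.findById(customerId);
        // ... business rules about invoicing the customer go here ...
    }
}

// Data semantic field: an adapter implements the port. The domain never sees the SQL.
class SqlCustomerRepository implements CustomerRepository {
    public Customer findById(String id) {
        // ... run the query and map rows to a Customer ...
        return new Customer(id);
    }
}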

DDD: Treating the UI layer as a domain

Having recognised that user interface is a separate semantic domain, should we apply some DDD thinking and treat it as a bounded context with its own domain? The domain of an MVC web UI includes controllers, actions, routes, etc. But it must reference business logic all the time when deciding what to display, whether to accept user input, and ultimately what to do with that input. To some, making the UI layer its own domain context, and giving it adapters to interface with the business domain, seems like over-engineering, whilst others advocate almost exactly that.
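
For a sense of what that de-coupling can look like in the small (the names here are hypothetical): the UI context gets its own model of a customer, and an adapter is the only code that knows both vocabularies.

// Business domain vocabulary.
class Customer {
    final String name;
    final boolean eligibleForRenewal;

    Customer(String name, boolean eligibleForRenewal) {
        this.name = name;
        this.eligibleForRenewal = eligibleForRenewal;
    }
}

// UI context vocabulary: what the page needs, in its own terms.
class CustomerViewModel {
    final String displayName;
    final boolean showRenewButton;

    CustomerViewModel(String displayName, boolean showRenewButton) {
        this.displayName = displayName;
        this.showRenewButton = showRenewButton;
    }
}

// The adapter between the two contexts: the only place that knows both vocabularies.
class CustomerViewModelAdapter {
    CustomerViewModel toViewModel(Customer customer) {
        return new CustomerViewModel(customer.name, customer.eligibleForRenewal);
    }
}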

I recommend that you at least be aware that if you do not do this de-coupling (and in MVC web apps I personally almost never have) then your UI layer will have two semantic domains inside it. It's a trade-off, but a sufficiently small one that I would usually let it be settled by whichever option gives fewer total lines of code.