Asier's thoughts

Blogging about software development



Standardisation kills innovation

I’ve recently re-watched the video from Spotify about their work culture. It’s a really cool video; if you haven’t watched it yet, you should. I’ll wait.

Part 1

And part 2

It’s been a year since I first watched it, and I now understand many of the things they talked about much better. I guess because they are things I have experienced over the last year.

One of the things that stuck in my head is that autonomy is more important than standardisation.

At Spotify, teams are quite autonomous. They make decisions based on their own rules instead of following general company rules. Rules are not good when you want innovation. Standardisation kills innovation. How can you try new things if every time you want to, you have to change some rule in the company? How can you learn? How can you grow? Isn’t it better to keep these rules down at the team level? Then you only have to convince your team instead of the whole company!

I’ve seen a pattern emerge in software companies that are changing from product teams to project teams: they adopt standards! In these companies, instead of each team working on one specific product, teams work on different products all the time, and each product will be changed by more than one team throughout its lifetime. In order to make everybody’s life easier, everybody has to be on the same page. Hence the need to standardise everything: which tools to use, frameworks, naming conventions, coding standards, methodologies, ways of testing, branching, deployments, code reviews, how to write a commit message... You need the same rules for every team so that changing from one product to another is not a pain in the ass.

Now, let’s say you or your team doesn’t like one of these “rules”, or just thinks it could be improved. Good luck changing any of that easily.
Standards make change hard: they give teams little autonomy and they make continuous improvement difficult. And autonomy is important not just for innovation, but also for job satisfaction, as mentioned in Drive: The Surprising Truth About What Motivates Us.



Command returns true/false for success/error

I’ve seen this kind of pattern often when working with web services.

Imagine the following piece of code:

public bool RunCommand()
{
    // .. Some logic ..

    return true; // success
}

Basically we have a method in a class that executes some logic: if it is successful it returns true, if not, false. At first you would say this makes sense, right?

It kind of reminds me of the days when I was learning C 🙂

But this just over-complicates things. Any client of this method will have to add a check:


if (RunCommand())
{
    // everything good

    // Show user a success message
}
else
{
    // error!

    // Show user that something went wrong
}

Why not just assume that the method is going to execute successfully? If not, just throw an exception! Then just handle that exception. Simple!
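Here’s a minimal sketch of what that could look like (CommandFailedException and the somethingWentWrong check are invented for the example):

using System;

// Hypothetical custom exception, named just for illustration
public class CommandFailedException : Exception
{
    public CommandFailedException(string message) : base(message) { }
}

public class CommandRunner
{
    // The command returns nothing; failure is communicated by throwing
    public void RunCommand()
    {
        // .. Some logic ..
        bool somethingWentWrong = false; // placeholder for a real failure condition

        if (somethingWentWrong)
            throw new CommandFailedException("Explain here exactly what failed and why");
    }
}

And the client only needs a try/catch where it actually cares about the failure:

try
{
    new CommandRunner().RunCommand();
    // Show user a success message
}
catch (CommandFailedException e)
{
    // Show user what went wrong; e.Message carries the details
}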

Adding a true/false return for success/error is basically re-implementing exceptions. Well, it’s even worse. Exceptions are explicit: they mean that something went wrong, and they even carry a custom error message. A boolean just means that... a boolean: true or false, open to interpretation. When we get a false it could mean anything; there is no error message, no custom exception.

Also, returning values from command methods is a bad practice. You should have two kinds of methods: queries, which return values but don’t change anything in your system, and commands, which change something but return nothing! This is known as CQS, or Command-Query Separation.
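A quick sketch of the distinction (OrderService and its types are made up for the example):

using System;
using System.Collections.Generic;

public class Order
{
    public int Id;
    public bool Cancelled;
}

public class OrderService
{
    private readonly Dictionary<int, Order> orders = new Dictionary<int, Order>();

    // Query: answers a question, changes nothing in the system
    public Order GetOrder(int orderId)
    {
        return orders[orderId];
    }

    // Command: changes state, returns nothing; failure surfaces as an exception
    public void CancelOrder(int orderId)
    {
        if (!orders.ContainsKey(orderId))
            throw new ArgumentException("Unknown order: " + orderId);

        orders[orderId].Cancelled = true;
    }
}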



Iterative waterfall

A new software project starts at your company. The budget has been approved and a (soft) deadline set.

Hey... You are lucky this time! One of the managers “knows” about agile; he has read a book about Scrum. So we are going to do Agile this time! 🙂

The project manager gets renamed to product owner and starts writing stories for the whole project and adds them to the backlog.

Yes. All of them! She already knows what we need for the whole project.

We will work in iterations. 2 weeks. Let’s call them sprints! 🙂

We will estimate the stories. We will check how many stories we are able to finish per sprint, and based on that and the estimates we will know how many we will be able to do in the next sprint.

Let’s call this thing velocity. The velocity tells us how many stories we are able to get done per sprint, based on the estimates.

Heyyy!! And look at that! If we have EVERYTHING we need in the backlog, and EVERYTHING estimated... we can find out when we will finish the project and be able to deploy our code to production!!
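(Quick maths: say the backlog adds up to 200 story points and the team’s velocity is 20 points per sprint. That’s 10 sprints, 20 weeks, and a release date you can circle on the calendar. What could possibly go wrong?)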

OMG! We are sooo Agile! 🙂


Does this sound familiar??

The story I just told you is real, and I have seen it many times. And no, sorry to disappoint you, but that’s NOT agile, it’s iterative waterfall. It’s the same as doing waterfall, just in iterations. All the requirements are set up front as user stories in the backlog and then the work is churned through in sprints. This plan is rigid and bound to fail. Isn’t the goal of Agile to embrace change?

I have met so many people who think they work in an agile environment, who think they are doing Scrum because they work in iterations and know all those buzzwords: sprint, stand-up, retrospective, velocity, etc. These people are just following the book but missing the whole picture, going through the practices because “that’s how it has to be done”.

Forget the practices and look deeper into what they mean. What are the values? Why do they exist? Why should I follow them? Should I follow them at all?

Is what we do really agile?



Lasagne Pattern

If you have a mess in your code where methods are called from one place to another in such a way that it is really difficult to follow the flow of the program execution, this is known as spaghetti code. Continuing with the Italian cuisine allegory, if you have too many layers of abstraction, this is known as the Lasagne Pattern.

I am just surprised by how many times I have seen this anti-pattern in place! It’s typical in client-server applications where programmers have learnt about the importance of an N-layer architecture and, with a bit of cargo cult programming, end up overusing it.

In software, an n-layer architecture is a way of separating the code of an application into different components or modules, each built around a different concern. The typical separation of components is presentation, business logic and data access. If we are talking about a client-server application, another component is added to deal with the communication between the two. If you draw all the components as layers, one above the other, with the database at the bottom and the UI on top, it looks like an n-layered or multi-tier architecture. We can draw arrows from the presentation layer down to the database to see the flow of the actions.

[Figure: three-tier web service architecture]

This is a good pattern and almost everybody follows it. The problem is when developers start adding layers just for the sake of adding layers! One more layer for the cache, another one for the application, etc. You end up with loads of layers, many of them doing nothing but delegating calls to the layer underneath. To make it even worse, different types of objects are passed between layers (Entities, DTOs, ViewModels, Ducks, Chickens...) with the extra work of having to map from one to another. If you add to this the need to unit test every single layer, you end up with a lot of work to do just to get a single record from the database!!
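To see it, here’s a made-up example of one of those do-nothing layers (all the names are hypothetical):

// Hypothetical entity, DTO and repository, just for the example
public class Customer { public int Id; public string Name; }
public class CustomerDto { public int Id; public string Name; }

public class CustomerRepository
{
    public Customer GetById(int id)
    {
        // The real data access would live here
        return new Customer { Id = id, Name = "Example" };
    }
}

// The "layer" in question: it adds no behaviour at all,
// it only delegates the call and copies fields across
public class CustomerService
{
    private readonly CustomerRepository repository = new CustomerRepository();

    public CustomerDto GetById(int id)
    {
        Customer entity = repository.GetById(id);
        return new CustomerDto { Id = entity.Id, Name = entity.Name };
    }
}

Every call goes through it, every field gets mapped, every class gets its own unit test, and it contributes exactly nothing.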

If you are not careful, a misused n-layered architecture can bring a lot of unnecessary complexity. My advice is to use the least amount of layers possible (and stored procedures count as a layer). Don’t put layers in just in case; software is cheap to change later on. Yes, I know, architecture is not that easy to change, but it can still be done. Try to defer that decision as much as possible (maybe you can do some prototyping?) and be wary of layers.

Reference: I don’t know who came up with the name, but I first heard about this anti-pattern on Coding Horror, though there it’s called Baklava Code.



BDD is not automating acceptance tests

Over the years I have met many people who erroneously think that BDD means writing acceptance tests and then automating them with a tool like Cucumber.

BDD, or Behaviour Driven Development, was introduced by Dan North as a layer on top of TDD to emphasise that we should focus on behaviour when writing our tests.

There are two types of frameworks for doing so: the ones based on the Gherkin syntax, like JBehave, Cucumber or SpecFlow, and the ones based on the context/specification syntax, like RSpec, NSpec or MSpec.

Here are two examples, one in each syntax:

Gherkin

Scenario 1: Account is in credit
Given the account is in credit
And the card is valid
And the dispenser contains cash
When the customer requests cash
Then ensure the account is debited
And ensure cash is dispensed
And ensure the card is returned

Context/Specification – RSpec

describe Bowling, "#score" do
  it "returns 0 for all gutter game" do
    bowling = Bowling.new
    20.times { bowling.hit(0) }
    bowling.score.should eq(0)
  end
end
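
With the Gherkin-based tools, each step of the scenario gets bound to code. As a rough sketch, a SpecFlow binding for a few of the steps above might look something like this (the Account class and the step bodies are invented for the example):

using TechTalk.SpecFlow;

// Hypothetical domain class, just for the example
public class Account
{
    public int Balance;
}

[Binding]
public class CashWithdrawalSteps
{
    private Account account;

    [Given(@"the account is in credit")]
    public void GivenTheAccountIsInCredit()
    {
        account = new Account { Balance = 100 };
    }

    [When(@"the customer requests cash")]
    public void WhenTheCustomerRequestsCash()
    {
        account.Balance -= 20; // stand-in for the real withdrawal logic
    }

    [Then(@"ensure the account is debited")]
    public void ThenEnsureTheAccountIsDebited()
    {
        if (account.Balance != 80)
            throw new System.Exception("Account was not debited");
    }
}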

The point of these frameworks is to make you think about behaviour when writing a test case, instead of just testing a method with certain inputs and expecting certain outputs.

Usually, the Gherkin-based ones are used to write acceptance tests and the context/specification ones are used at the unit test level. Somehow the former, with Cucumber, have become more popular, and when you talk to anybody about BDD they will think about Given-When-Then, Cucumber and acceptance testing. But this is wrong!

BDD is not just for acceptance testing; we already had ATDD for our acceptance tests. BDD is a layer on top of both TDD and ATDD, and its intent is to focus on BEHAVIOUR so we can do good TDD. Don’t forget the xSpec frameworks, they are BDD frameworks too! Don’t think BDD = ATDD or acceptance tests!

+ Info and sources:

There are a couple of good blog posts on Stack Overflow talking about this matter:

And I recently watched the talk “TDD, where did it all go wrong” by Ian Cooper, where he talks about the problems with TDD and argues that we should focus more on behaviour. I recommend watching it.