
Sunday, September 29, 2013

3 concrete steps that will make your code unit testable


First things first: the header seems to have done its job of catching your attention :). I am by no means a unit testing expert, but after having tried and failed a few times I have come up with these three simple concepts, which really helped me get from no tests to a few tests. I am sure there is a long way to go from here ....

This post is targeted at those of us who are already sold on the benefits of unit tests but just do not know where to begin. So I am not going to tell you here that unit tests are good for you and the like.

We all know they are good, but we just cannot dish out code which is unit testable!!

So without further ado, let me put down the 3 points and then try to explain each of them in detail.

1) Understand your aggregate roots
2) Separate out algorithmic code from co-ordination oriented code
3) Write logic in pure functions


1) Understand your aggregate roots -- 

This is probably the most difficult to gulp down and also to write about. So even before I try, I would really encourage you to read up on CQRS and the entire IDDD book.

Now comes my attempt

Every other book we read tells us that our domain model must be unit testable, and then we look at our "domain model" and wonder how the hell this could ever be possible. Over time I have realized that in many cases the domain model is not testable because it tends to be influenced by ORM tools like Hibernate :|.

Have a look at this Order class
public class Order {

    ....

    @ManyToOne
    private Product product;

    ....
}

I have written a lot of code which looked like the above, and each time this kind of code base has been impossible to unit test :).

This kind of code ends up as a highly coupled and endlessly inter-connected web of classes!!

But why did I even put the Product class within the Order class? Because Hibernate needs it that way, and it helps out when I need to fetch the Product details from the Order class ...

So in short, the lure of the ORM and the read concerns have screwed up my domain model. Now every unit test which needs to test Order must know how to create a Product, and this goes on forever.

An alternative ??

Maybe (I really mean maybe)


public class Order {

    ....

    @Embedded
    private ProductId productId;

    ....
}


Where ProductId is a Value Object holding the identifier of a Product. Does this look idiotic? Trust me, before the ORMs invaded us, most code was written like this, with classes referencing each other via identifiers rather than via the classes themselves.
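A minimal ProductId might look something like the sketch below (purely illustrative; the important part is that it is a small value type with value based equality):

@Embeddable
public class ProductId {

    private String id;

    protected ProductId() {
        // required by the persistence provider
    }

    public ProductId(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof ProductId && id.equals(((ProductId) other).id);
    }

    @Override
    public int hashCode() {
        return id.hashCode();
    }
}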

We might have a script which puts in the table-level foreign key constraint, but now to unit test Order, I only need to know how to create a ProductId, which is much simpler.
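A unit test for Order then needs nothing more than that identifier. A rough sketch (assuming, hypothetically, that Order gets a constructor taking a ProductId and a quantity):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderTest {

    @Test
    public void anOrderCanBeCreatedWithJustAProductId() {
        // no Product object graph to build up, only the identifier
        Order order = new Order(new ProductId("product-42"), 3);
        assertEquals(new ProductId("product-42"), order.getProductId());
    }
}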

Also, from a design perspective this is more decoupled: the second Order class has lighter dependencies than the first Order class. This is probably difficult to gulp down for those of us who are in love with our ORMs, but a read up on DDD and CQRS will really help in making better use of ORMs.


2) Separate out algorithmic code from co-ordination oriented code -- 

This is fairly easy to do and has great benefits. At a high level, split the code you write into 2 roles:

a) Co-ordination

b) Logic/algorithmic

Then concentrate on building a test suite for your algorithmic code first.

This blog post goes into detail on the above point

A simple gist with some pseudo code to get across my point
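Something along these lines (my own sketch; the class and method names are made up just for illustration):

import java.math.BigDecimal;
import java.util.List;

interface OrderRepository {
    List<BigDecimal> linePricesFor(String orderId);
}

/**
 * Co-ordination oriented code: fetches data, calls out to the logic, returns the result.
 * Testing this needs a stub or mock for the repository.
 */
class InvoiceService {

    private final OrderRepository orderRepository;

    InvoiceService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    BigDecimal invoiceTotal(String orderId) {
        List<BigDecimal> linePrices = orderRepository.linePricesFor(orderId);
        // delegate the actual calculation to the logic
        return InvoiceCalculator.total(linePrices);
    }
}

/**
 * Algorithmic code: depends only on its arguments and is trivially unit testable.
 */
class InvoiceCalculator {

    static BigDecimal total(List<BigDecimal> linePrices) {
        BigDecimal total = BigDecimal.ZERO;
        for (BigDecimal price : linePrices) {
            total = total.add(price);
        }
        return total;
    }
}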


3) Write logic in pure functions.

First things first, what is a pure function? See here and here.

Pure functions can always be written as static functions, because they operate only on the arguments passed in and do not have any side effects.


/**
 * Instance method which delegates the logic to a pure function.
 */
public void proratedDuration() {
    examDuration = proratedDuration(getTemplateOperationDuration(),
            getNumberOfProceduresInTemplateOperation(),
            getNumberOfProceduresInThisOperation());
}

/**
 * Pure function: depends only on its arguments and has no side effects.
 */
public static Duration proratedDuration(Duration templateOperationDuration,
        int numberOfProceduresInTemplateOperation,
        int numberOfProceduresInThisOperation) {
    long timePerProcedure = templateOperationDuration.getMillis() / numberOfProceduresInTemplateOperation;
    long timeForThisOperation = numberOfProceduresInThisOperation * timePerProcedure;
    return new Duration(timeForThisOperation);
}


As in the above, we might have instance methods delegating the logic bits to pure functions expressed as static functions.
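Testing the pure function then needs nothing beyond its arguments. A minimal JUnit 4 sketch (the enclosing class is assumed to be called Operation here, and Duration is Joda-Time's):

import static org.junit.Assert.assertEquals;

import org.joda.time.Duration;
import org.junit.Test;

public class ProratedDurationTest {

    @Test
    public void proratesTheTemplateDurationByTheNumberOfProcedures() {
        // 60 minutes for 4 procedures should prorate to 30 minutes for 2 procedures
        Duration templateDuration = Duration.standardMinutes(60);
        Duration prorated = Operation.proratedDuration(templateDuration, 4, 2);
        assertEquals(Duration.standardMinutes(30), prorated);
    }
}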

We must note that points two and three are interlinked and overlapping in many ways.

Separating the code base into co-ordination and logic will automatically lead to pure functions.

So I guess that does it. IMO these 3 little nuggets really improve code testability.

Let me know what you think :)


Sunday, September 22, 2013

Building a simple Design By Contract Library for Java


This is a follow up to the original post, where I was trying to build the case for developing a simple DbC library which leverages the Bean Validation API.

Just reiterating, the desirable attributes for the library would be:
  1. Leverage the Bean Validation API
  2. Simple to add in to brown field projects
  3. Simple API with limited automagic


Over the week I have built the library and hosted the code on GitHub.

The central class of the library is the Contract class.




The overloaded requires methods are meant to check the preconditions, the overloaded ensures methods are meant to check the postconditions, and the overloaded checkInvariants methods are meant to check that the class invariants are intact.
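To give a rough feel for it, usage might look something like the sketch below (the Account class is made up, and the exact overloads are best checked in the code; the calls simply mirror the ensures/checkInvariants snippet further down this post):

public class Account {

    private long balanceInCents;

    public void withdraw(long amountInCents) {
        // preconditions on the argument and the current state
        Contract.requires(amountInCents > 0);
        Contract.requires(amountInCents <= balanceInCents);

        long balanceBefore = balanceInCents;
        balanceInCents -= amountInCents;

        // postcondition: the state change is exactly what the method promises
        Contract.ensures(balanceInCents == balanceBefore - amountInCents);
        // class invariants must still hold after the operation
        Contract.checkInvariants(this);
    }
}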

If you want to explore how to use the API, have a look at the unit tests, which double up as usage examples. You can find the test cases here.

The example has been picked up from the one used to explain the workings of the Google Contracts for Java library.

The example has been explained here.

There are a few features that the existing libraries mentioned in my previous post have which this library does not; I am just listing them down here.

1) Capture the state of the object prior to the method invocation -- As these libraries use byte code instrumentation, they are capable of holding the "old" value of the objects, which is helpful in certain cases.

However, IMO its usefulness is over-emphasized; it is primarily helpful in checking postconditions, so we might have something like this.


@Requires({"book != null", "books.count(book) >= copies"})
@Ensures("books.count(book) == old(books.count(book)) - copies")
public void removeBooks(Book book, int copies) {
    books.remove(book, copies);
}

In the above example the "old" value is used to cross check whether the method has done its computation correctly. This is possible in very simple cases; however, for even a slightly more involved scenario it is not. Consider the example below.

@Ensures("result >= 0")
public int getTotal() {
    int total = 0;
    for (Book book : books) {
        total += book.getPrice() * books.count(book);
    }
    return total;
}

Here the postcondition does not verify the computation; it is just a generic validation put in. This is more realistic and does not need the "old" value.

Verifying computation or behavior is still the responsibility of unit tests, in my opinion.


2) Verifying that no state has been altered in case an exception has occurred -- Consider the example below.

@Requires("book != null")
@Ensures("books.count(book) == old(books.count(book)) - copies")
@ThrowEnsures("books.count(book) == old(books.count(book))")
public void removeBooksUnsafe(Book book, int copies) {
    if (books.count(book) >= copies) {
        books.remove(book, copies);
    } else {
        throw new IllegalStateException("Not enough books to remove");
    }
}


This is again a variation of postcondition checking; it is used to verify that the state has not been altered in case of an exception. In this case it checks that the count of books remains the same.

I can think of scenarios where I would want to use this. The only way I could think of to simulate such behavior is by adding extra code.
int initialCount = books.count(book);
/**
 * In cases where the post condition can only be verified using the past
 * state of the object, the past state needs to be captured explicitly.
 */
books.add(book, copies);
Contract.checkInvariants(this);
Contract.ensures(books.count(book) == initialCount + copies);

3) Limited support for inheritance -- The library has limited support for inheritance: we cannot specify contracts on interface methods and expect them to be honored by all the implementing methods. I think this is a fairly big drawback.

These are the limitations that I could think of in comparison to the existing DbC libraries in the Java space.

Feel free to leave your comments and fork the library. I will try to put this up on Maven; however, just adding it into your existing projects should not be much of a pain.

Just as a side note, I would like to add that Java is seriously getting its ass kicked in this space when compared to C#; C# actually has first class compiler support baked in :| via the Code Contracts library.

This video goes into the features it provides, some of which are really good.

Tuesday, September 10, 2013

Building the case for a simple Design By Contract Library for Java

Of late I have been interested in Design By Contract principles. I did quite a bit of searching to find libraries which implement these principles, so that I could start using them in my personal project.

But surprisingly, I have not been able to find a suitable library which implements the principles in a simple way. In this post I basically plan to rant about the existing ecosystem around DbC and, through the ranting, build a case for creating a small and simple DbC library.

However first let me quickly spell out what Design by Contract (DbC) actually is.

A short explanation of DbC, picked up from here

Design By Contract (DbC) is a software correctness methodology. It uses preconditions and postconditions to document (or programmatically assert) the change in state caused by a piece of a program.
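As a tiny illustration of the idea in plain Java (not tied to any particular library; the class is made up):

public class BankAccount {

    private long balance;

    public void deposit(long amount) {
        // precondition: callers must pass a positive amount
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }

        long balanceBefore = balance;
        balance += amount;

        // postcondition: the state change matches what the method promises
        assert balance == balanceBefore + amount;
    }
}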

The Wikipedia entry on Design by Contract also explains the same concept.



We have a few DbC libraries in the Java ecosystem already. Broadly, they work based on byte code instrumentation or AOP, but I could not find any which are compliant with the Bean Validation API.

I have made a small compilation of these libraries, which is available here.

All of them are a little tricky to get going with, especially when integrating with a webapp built with Maven. For the AOP based libraries we need to include the AspectJ jars, and that brings with it a fair amount of complexity.

Byte code instrumentation based libraries need to be hooked into the Maven compilation process, with the annotation pre-processors added in.

None of the libraries seem to leverage the Bean Validation API, which has a lot of overlap with the DbC paradigm.
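For instance, Bean Validation constraints already read very much like declarative preconditions and invariants (a tiny sketch; the class is made up):

import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;

public class ShippingRequest {

    @NotNull
    private String orderId;

    @Min(1)
    private int numberOfPackages;
}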

The shortcomings in these libraries have prompted me to think about rolling out a simple library which leverages the Bean Validation API and provides helper methods to implement the DbC paradigm.

Desirable features of the proposed library:


  1. Leverage the Bean Validation API
  2. Simple to add in to brown field projects
  3. Simple API with limited automagic


I have not yet fleshed out how to actually implement this library but after having done a fair bit of looking around I think the above features would be good to have.

I will probably come up with a follow up post soon which should have more details on how to actually implement the library.

In case you have any suggestions or comments on how to implement the same please let me know.

Monday, May 20, 2013

Context is king !!

I am writing this post to draw some attention towards the concept of Bounded Contexts. When I started off reading about DDD, I really got excited about Aggregates, Entities, Value Objects and the like; I was more interested in the technical aspects of DDD. However, over time I have come to realize that the highest business value is delivered when we acknowledge the presence of multiple contexts within an application.

In this post I will try to convince you that Context really is the king !! ;)

Let me start with the familiar customer example.

Consider an ERP application. It will probably have an Inventory Context, an e-Commerce Context and a Product Catalog Context.

Now let's consider the concept of a Customer in each of these Contexts. The term Customer is going to mean a different thing in each context: when browsing a Catalog, the Customer is being used in the context of previous purchases, loyalty, discounts, shipping options and the like.

In the Order context the Customer has a different and more limited meaning, which may include things like customer name, total amount due, ship-to address, bill-to address, payment terms and the like.

So the same Customer changes based on the Context in which it is used.
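A rough sketch of what this might look like in code (two separate classes in two separate contexts; the package and field names are purely illustrative):

// catalog/Customer.java -- the Customer as the Product Catalog context sees it
package catalog;

public class Customer {
    private String customerId;
    private int loyaltyPoints;
    private java.util.List<String> previousPurchases;
    private java.math.BigDecimal standardDiscount;
    // behavior relevant to browsing: applicable discounts, recommendations, ...
}

// ordering/Customer.java -- the Customer as the Order context sees it
package ordering;

public class Customer {
    private String customerId;
    private String name;
    private String shipToAddress;
    private String billToAddress;
    private String paymentTerms;
    private java.math.BigDecimal totalAmountDue;
    // behavior relevant to placing orders: amount due, payment terms, ...
}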

If we somehow tried to use a single unifying model to represent a Customer across all Bounded Contexts, we would easily end up with a big ball of mud :)

Disclaimer: The entire example is picked up from the IDDD book.

A very similar example highlighting the importance of defining bounded contexts can be found here

Sunday, May 19, 2013

In the end everything is a CRUD operation.... Need to rethink this one?

In this post, I am trying to compare and contrast two mindsets: one which believes that everything in the end boils down to a CRUD operation, and one which believes in modelling behavior.

Until not so long ago, I believed that everything boils down to a CRUD operation, and this mindset reflected in the solutions I modeled for a given requirement. I had kind of heard about behaviorally rich models; however, I just could not figure out how they were of any use, given that everything in the end is a CRUD operation :)

After having been introduced to DDD and reading a few books and posts on it, I seem to be getting the hang of it. In this post, I am going to take a simple requirement and model it first with an anemic/CRUD mindset and then with a behaviorally rich/DDD mindset.

The requirement is fairly simple: we are trying to save the details of a customer.

First to go, the CRUD mindset. You can find the gist here.
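The gist roughly has this shape (my own sketch, not the actual gist; the repository and field names are made up):

// CustomerApplicationService.java -- one generic, intention-free entry point
public class CustomerApplicationService {

    private final CustomerRepository customerRepository;

    public CustomerApplicationService(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public void saveCustomer(Customer customer) {
        // create? change of address? new email? the method name tells us nothing
        customerRepository.save(customer);
    }
}

// Customer.java -- little more than a DTO
public class Customer {

    private String customerId;
    private String name;
    private String emailAddress;
    private String postalAddress;

    // getters and setters for every field ...
}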



I am sure this looks familiar to most of us.

So what is the problem with this approach? At the very least, the four below:

1) There is little intention revealed by the saveCustomer method.
2) The Customer class is more like a DTO than a domain class.
3) This method can be used in more than a dozen business situations, so how does one unit test it?
4) This method will need to be changed in more than a dozen cases; it clearly violates SRP.


So now let's go for the behaviorally rich approach.

The Customer class may start looking like the below. Have a look at the gist here; it includes a snippet of the application service too.
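Again a rough sketch of the shape (not the actual gist; the method names are illustrative):

// Customer.java -- intention revealing methods instead of one generic save
public class Customer {

    private String customerId;
    private String name;
    private String emailAddress;
    private String postalAddress;

    public void changeEmailAddress(String newEmailAddress) {
        // the business rules for exactly this use case live here
        this.emailAddress = newEmailAddress;
    }

    public void relocateTo(String newPostalAddress) {
        this.postalAddress = newPostalAddress;
    }

    public void renameTo(String newName) {
        this.name = newName;
    }
}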



Now with this kind of modelling, each method has only 1 reason to change :)

We know the intent of each method.

This API also supports a task based UI.

So now the question that arises is: which approach should we take?

The answer is "it depends on the context" :) .... If the context we are working on is the core domain, we should probably tend towards a behaviorally rich model; else, in a lot of cases an anemic approach may turn out just fine.

The behaviorally rich model does involve more thought and work, but it ends up with a much cleaner and more maintainable code base.

The most important thing to take home is that we should debate and discuss whether everything should be blindly modeled in an anemic fashion. We do have a very real and appealing alternative, so let's consider it!!!

Disclaimer: A lot of this is directly picked up from the IDDD book ;)

Sunday, May 5, 2013

Server side pagination with mybatis

Of late I have added support for server side pagination to the DDD/CQRS bootstrap project; the project uses MyBatis for querying the read side.

For those who are unfamiliar with server side pagination, first have a look here.

So when implementing pagination, the kind of information we often need is:

The current page number

The total number of pages

The total number of elements
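A minimal wrapper carrying this information might look like the sketch below (class and field names are made up; the project's actual API is linked further down):

import java.util.List;

/**
 * Holds one page of results plus the summary information needed to render a pagination component.
 */
public class Page<T> {

    private final List<T> content;
    private final int pageNumber;      // current page, 1-based
    private final int pageSize;
    private final long totalElements;  // total rows across all pages

    public Page(List<T> content, int pageNumber, int pageSize, long totalElements) {
        this.content = content;
        this.pageNumber = pageNumber;
        this.pageSize = pageSize;
        this.totalElements = totalElements;
    }

    public int getTotalPages() {
        return (int) Math.ceil((double) totalElements / pageSize);
    }

    public List<T> getContent() {
        return content;
    }

    public int getPageNumber() {
        return pageNumber;
    }

    public long getTotalElements() {
        return totalElements;
    }
}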

A pagination component often looks like the one below.




You can find the entire API for displaying a pagination component here.

With pagination you often want to show summary information about the entire result set, but display results only one page at a time.

The client API for server side pagination is pretty simple; you can view it here.

You can find an example of the client API along with the implementation here

Sunday, April 21, 2013

Live Reload in action


I just came across Live Reload and thought it deserves a mention. Basically it allows you to see the changes made in the code instantly in the browser, without having to refresh.


Steps to get it working

1) Download Live Reload from here
2) Add a plugin to your favorite browser.

Chrome Plugin.

Once you have the tools set up, just follow the steps described in the video here.