In this blog post we are going to discuss ad hoc polymorphism and the Type Class Pattern in Scala in very simple terms. No knowledge of algebraic structures is required. Starting with a simple function for adding a pair of integers, we will progress by piecewise abstraction towards a polymorphic function for aggregating an arbitrary number of values.

## The problem with overloading

Let's start with a simple function for combining two integers by adding them:
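Such a function might be sketched as follows (the parameter names are our own):

```scala
// A first version, specialized to Int: combine two integers by adding them.
def add(x: Int, y: Int): Int = x + y

val sum = add(1, 2) // 3
```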

Now suppose we want to use this function not only for integers, but for strings, too. We want it to be polymorphic, i.e. applicable to different types, and implemented accordingly: addition for integers, concatenation for strings.

Scala provides multiple means to implement polymorphism. The simplest one is overloading:
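A sketch of the overloaded version (wrapped here in a hypothetical object of our own naming):

```scala
object Overloads {
  // Same name, different parameter types: addition for Int,
  // concatenation for String.
  def add(x: Int, y: Int): Int = x + y
  def add(x: String, y: String): String = x + y
}
```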

We can overload our function similarly for more types. However, looking at the implementations, we cannot overlook the fact that they all look the same: each basically just invokes the "+" operator on its parameters. Since duplication is bad, we ask ourselves: can we get rid of it?

## The limits of subtyping

What we really need is a generic function parameterized over different types. Hence we introduce a type parameter.

But how do we implement it? Since we want to combine two values of some type *A*, it had better be a type which supports this kind of combination. Therefore we define such a type, name it *Addable*, and restrict the type parameter *A* in our signature to accept only its subtypes, using an upper bound clause:
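A sketch of this idea; the method name *plus* and the toy *Meters* type are our own, added to show the upper bound in action:

```scala
// An F-bounded trait: an Addable[A] can combine itself with another A.
trait Addable[A] {
  def plus(other: A): A
}

// Upper bound: A must be a subtype of Addable[A].
def add[A <: Addable[A]](x: A, y: A): A = x.plus(y)

// This works for types we define ourselves...
case class Meters(value: Int) extends Addable[Meters] {
  def plus(other: Meters): Meters = Meters(value + other.value)
}
// ...but Integer and String cannot be retrofitted to extend Addable.
```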

Unfortunately, we do not have control over *Integer* and *String*, because they are defined in the standard library, so we cannot make them subtypes of *Addable*.

But not all is lost. In fact, we do not need *Integer* and *String* to be actual subtypes of *Addable*; it would be sufficient if they were merely *convertible* to it.

Using a *view bound*, we specify only that the values of type *A* must be convertible to an *Addable*.

After implementing the required implicit conversions, we are good to go:
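A sketch of this approach. Since the `<%` view-bound syntax is deprecated, the signature is written here in its desugared form, with an explicit conversion parameter; the conversion names are our own:

```scala
trait Addable[A] {
  def plus(other: A): A
}

// def add[A <% Addable[A]](x: A, y: A): A  -- the view bound desugars to:
def add[A](x: A, y: A)(implicit view: A => Addable[A]): A = view(x).plus(y)

// The required implicit conversions for Int and String:
implicit val intToAddable: Int => Addable[Int] =
  x => new Addable[Int] { def plus(other: Int): Int = x + other }

implicit val stringToAddable: String => Addable[String] =
  x => new Addable[String] { def plus(other: String): String = x + other }
```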

Although this approach works, it has some serious drawbacks. First, we *must* use an implicit conversion, even if we just want to perform a single operation. Second, it is not natural to define a binary operation on a single value; it would be much better to define it in a separate class. For these reasons, view bounds have been deprecated in Scala since version 2.11.

We will provide another solution to the problem next.

## Type classes to the rescue

Let's take a step back and consider our function signature once again: it should accept two values of some type *A* and return an *A*.

Let's further assume that we have an *Adder* available, which can add values of type *A*, and which we can use in our implementation:
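The *Adder* might be sketched as a trait with a single abstract method:

```scala
// An Adder[A] encapsulates the knowledge of how to add two values of type A.
trait Adder[A] {
  def add(x: A, y: A): A
}
```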

With this we can implement our function by requiring an implementation of the *Adder* through a second parameter list:
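A sketch, assuming the *Adder* trait from above:

```scala
trait Adder[A] {
  def add(x: A, y: A): A
}

// The Adder dependency is injected through a second (explicit) parameter list.
def add[A](x: A, y: A)(adder: Adder[A]): A = adder.add(x, y)
```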

This is just simple dependency injection (parameter injection, to be precise). The reason for using a second parameter list will become clear shortly.

We create *Adder* implementations for Integers and Strings, and now we can invoke our function, passing the right *Adder* to it:
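A sketch of the current state, with instance names of our own:

```scala
trait Adder[A] {
  def add(x: A, y: A): A
}

def add[A](x: A, y: A)(adder: Adder[A]): A = adder.add(x, y)

// Adder implementations for Int and String:
object IntAdder extends Adder[Int] {
  def add(x: Int, y: Int): Int = x + y
}

object StringAdder extends Adder[String] {
  def add(x: String, y: String): String = x + y
}

// The right Adder has to be passed explicitly:
val sum = add(1, 2)(IntAdder)                      // 3
val concatenated = add("foo", "bar")(StringAdder)  // "foobar"
```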

Next, we are going to make our *Adder* dependency implicit and define the *Adder* implementations as implicit objects, which we import into scope when we need them. This allows us to omit the second argument in the function invocation, since the compiler will pick up the right implementation and pass it through automatically.

Now we can invoke our function like this:
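A sketch of the current state with the implicit machinery in place (the implicit objects are defined in the same scope here for brevity, rather than imported):

```scala
trait Adder[A] {
  def add(x: A, y: A): A
}

// The Adder is now an implicit parameter...
def add[A](x: A, y: A)(implicit adder: Adder[A]): A = adder.add(x, y)

// ...and the instances are implicit objects, picked up by the compiler.
implicit object IntAdder extends Adder[Int] {
  def add(x: Int, y: Int): Int = x + y
}

implicit object StringAdder extends Adder[String] {
  def add(x: String, y: String): String = x + y
}

val sum = add(1, 2)                   // no second argument needed: 3
val concatenated = add("foo", "bar")  // "foobar"
```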

Much better! Looking at the implicit parameter, we see that its name is of no significance. What we actually need is just an instance of the type *Adder* to which we can delegate the task of performing the addition. Since it is marked implicit, we can just "summon" it from the context and use it. The standard library provides a function, *implicitly*, which does exactly that.

Using it we can rewrite our implementation to:
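A sketch (the parameter name *ev* is our own and, as noted, irrelevant):

```scala
trait Adder[A] {
  def add(x: A, y: A): A
}

implicit object IntAdder extends Adder[Int] {
  def add(x: Int, y: Int): Int = x + y
}

// The implicit parameter is never referenced by name;
// we summon the instance with implicitly instead.
def add[A](x: A, y: A)(implicit ev: Adder[A]): A =
  implicitly[Adder[A]].add(x, y)
```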

Since this is a commonly used idiom, Scala provides syntactic sugar for declaring implicit parameters like this, which is called a context bound:
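A sketch using a context bound:

```scala
trait Adder[A] {
  def add(x: A, y: A): A
}

implicit object IntAdder extends Adder[Int] {
  def add(x: Int, y: Int): Int = x + y
}

// "A: Adder" is sugar for an additional implicit parameter list
// containing a parameter of type Adder[A].
def add[A: Adder](x: A, y: A): A = implicitly[Adder[A]].add(x, y)
```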

The compiler translates the clause *A: Adder* into an additional implicit parameter list containing a parameter of type *Adder[A]*, exactly as we defined previously by hand. Take note of the improved readability and semantic richness of the code: the clause *A: Adder* conveys the idea that the type *A* must "belong" to the type class *Adder*, much like the requirement that the parameters *x* and *y* must belong to the type *A*. We are going to explore this idea further down.

We can refactor our code even more and move the invocation of *implicitly* into an *apply* method on a companion object of *Adder*. This allows us to write our function even more succinctly:
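A sketch of the resulting state:

```scala
trait Adder[A] {
  def add(x: A, y: A): A
}

object Adder {
  // Summons the implicit Adder instance for A.
  def apply[A](implicit adder: Adder[A]): Adder[A] = adder
}

implicit object IntAdder extends Adder[Int] {
  def add(x: Int, y: Int): Int = x + y
}

// Adder[A] now reads as "the Adder instance for A".
def add[A: Adder](x: A, y: A): A = Adder[A].add(x, y)
```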

## A little bit of theory

If our *Adder* implementation obeys the associativity law, i.e.

**add(x, add(y, z)) == add(add(x, y), z)** for all **x, y, z**

then it can be called a *Semigroup*.

A Semigroup is just a collection of objects - for example integers or strings - with a defined binary operation on them producing another object of the same type. For example, two integers can be added producing another integer, and two strings can be concatenated producing another string. If there are more than two objects to be combined using this operation, the order of the applications of individual operators must not matter. This property is called *associativity*.

**Side note**. What's up with the scary names?

If you have never heard of Semigroups before, or if hearing it brings back some unpleasant memories... relax. It's just another name for our *Adder*. Scary names like Semigroup, Monoid, Magma etc. were invented by lonesome mathematicians to describe simple things and intimidate other people, probably as an act of revenge for not being invited to parties.

For example, if we did not require our *Adder* to obey the associativity law, mathematicians would call it a *Magma*. Add associativity to it and you get a *Semigroup*. Further down in the text we are going to add a function that returns a *zero value* from our *Adder*, effectively extending it into something called a *Monoid*. There are many more strange names describing all kinds of algebraic structures with all sorts of different properties. But at the end of the day, you can think of the Semigroup as just another name for our *Adder*.

**End of side note**.

Now, since we want the addition to be associative, we can officially rename our *Adder* to *Semigroup*:
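The renamed version might be sketched as follows (shown here with an Int instance only):

```scala
trait Semigroup[A] {
  def add(x: A, y: A): A
}

object Semigroup {
  def apply[A](implicit sg: Semigroup[A]): Semigroup[A] = sg
}

implicit object IntSemigroup extends Semigroup[Int] {
  def add(x: Int, y: Int): Int = x + y
}

def add[A: Semigroup](x: A, y: A): A = Semigroup[A].add(x, y)
```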

**Side note**. Why is associativity important?

If the associativity law holds, the order of the application of the operators can be safely changed without changing the value of the expression. This means that any implementation is free to rearrange the expression tree, hence enabling *optimizations*.

For example, consider the following expression: **a + b + c + d**. By the way, the fact that we can omit parentheses in our notation is due to associativity of addition.

Now the question is: how do we evaluate this expression? Since addition is associative, we have multiple options, all of which are valid:
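For example, the expression could be parenthesized as:

1. **((a + b) + c) + d**
2. **(a + b) + (c + d)**
3. **a + (b + (c + d))**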

Evidently, the strategies (1) and (3) only allow for sequential evaluation. On the other hand, the strategy (2) allows for the evaluation of two subexpressions (a + b) and (c + d) to be done in parallel.

When computing the sum of four integers, parallelizing two additions does not seem to be a big gain. However, the general principle applies to any computation, as long as the associativity law holds. Take matrix multiplication or the join operator from relational algebra, for example. Since the join obeys the associativity law, many database engines employ optimization algorithms which rearrange the subexpressions in database queries in order to achieve faster or more resource-efficient evaluation. Without associativity, many such optimization techniques would not be possible.

**End of side note**.

## From two to many

One might think that we did not gain very much by employing the type class pattern in our implementation, and that we merely over-engineered our solution. However, this perception is only due to the simplicity of our function. Therefore we are going to extend it and redefine it to aggregate an arbitrary number of values.

First, we rename it accordingly and adjust its signature to accept a sequence of values. For the aggregation itself we can use a function from the standard library called *fold*.

But where do we get the initial value for the aggregation, called the *identity* or *zero* (0 for integers and "" for strings)? We need to extend our Semigroup and define the *zero value* for it. A Semigroup with a zero value is called a *Monoid*:
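A sketch, with an Int instance of our own to illustrate:

```scala
trait Semigroup[A] {
  def add(x: A, y: A): A
}

// A Monoid is a Semigroup with an identity ("zero") element.
trait Monoid[A] extends Semigroup[A] {
  def zero: A
}

implicit object IntMonoid extends Monoid[Int] {
  def add(x: Int, y: Int): Int = x + y
  def zero: Int = 0
}
```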

We use the same pattern as previously shown and define the *apply* method on the companion object in order to omit the call to *implicitly* in our implementation. Finally, we have everything in place: we implement our function and adjust the *Monoid* instances for integers and strings.

Now we can calculate aggregates of integers and strings:
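Putting the pieces together, the final state might be sketched as follows (instance names are our own):

```scala
trait Semigroup[A] {
  def add(x: A, y: A): A
}

trait Monoid[A] extends Semigroup[A] {
  def zero: A
}

object Monoid {
  def apply[A](implicit m: Monoid[A]): Monoid[A] = m
}

implicit object IntMonoid extends Monoid[Int] {
  def add(x: Int, y: Int): Int = x + y
  def zero: Int = 0
}

implicit object StringMonoid extends Monoid[String] {
  def add(x: String, y: String): String = x + y
  def zero: String = ""
}

// Fold over the values, starting from the monoid's zero element.
def aggregate[A: Monoid](values: Seq[A]): A = {
  val m = Monoid[A]
  values.fold(m.zero)(m.add)
}

val sum = aggregate(Seq(1, 2, 3))                    // 6
val concatenated = aggregate(Seq("foo", "bar"))      // "foobar"
```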

**Side note**. Why Type Class Pattern?

The concept of a *class* plays a central role in class based object oriented languages. In fact, it serves two purposes: providing a blueprint for constructing values (instances) of the class and defining a *data type*.

As a data type, a class describes a collection of properties an object must have in order to belong to that specific type. For example, if an object is known to be of type *String*, then it is known to support the *length* operation, returning its length. The type checker can use this information at compile time to find errors in the source code, a process called *static type checking*.

A *type class* lifts this same concept to a higher level, applying it to types. It describes a collection of properties a type must have in order to belong to this specific type class. For example, if it is known that a type belongs to a *Semigroup* type class, then it is known that instances of that type can be combined according to an associative binary operation, producing another instance of the same type (for example addition for integers).

But how can we specify that a given type belongs to a certain type class? Unlike Haskell, Scala does not provide native syntax for this. But whenever a language lacks natural means for describing some common kind of structure, *patterns* emerge. In our case, that gap is filled by the type class pattern: the information that a type belongs to a certain type class is expressed implicitly by providing an implementation of a trait defining the properties of that type class, sometimes called *evidence*. We provided two such implementations in our scenario: one for integers and one for strings.

**End of side note**.

## Summary

In this article we discussed the motivation for introducing ad hoc polymorphism and implementing it using the Type Class Pattern in Scala. It allows us to build abstractions which are completely decoupled from the underlying data types on which they operate. Hence we can implement polymorphic functions which operate on types we have no control over, such as types defined in the standard library or some other third party library, without sacrificing the static type safety.

## Links

Cats - a Scala library using the type class pattern extensively to provide many useful abstractions for functional programming