Generalized type constraints in Scala (without a PhD)


Not long ago I stumbled upon the signature of the flatten method on Option:

def flatten[B](implicit ev: A <:< Option[B]): Option[B]

I don’t know about you, but I knew about implicit parameter lists, implicit resolution, and even type bounds. But this funny <:< “sad-with-a-hat” [1] operator [2] was entirely new to me!

Smart people [3] wrote about it years ago, but it’s clear that we are talking about a feature which is not well-known and poorly documented, even though it has been available since Scala 2.8. So this article is about figuring out what it means and how it works.

The following deconstruction turns out to be fairly long, but even though <:< itself may not be useful to every Scala programmer, it touches a surprisingly large number of Scala features which most Scala programmers should know.

What it does and how it’s useful

If you search the Scala standard library, you find a few other occurrences of <:<, in particular:

  • on Option:

    def orNull[A1 >: A](implicit ev: Null <:< A1): A1
  • on Traversable (via traits like GenTraversableOnce):

    def toMap[K, V](implicit ev: A <:< (K, V)): Map[K, V]
  • on Either:

    def joinRight[A1 >: A, B1 >: B, C](implicit ev: B1 <:< Either[A1, C]): Either[A1, C]
  • on Try:

    def flatten[U](implicit ev: T <:< Try[U]): Try[U]

You notice that, in all these examples, <:< is used in the same way:

  • there is an implicit parameter list, with a single parameter called ev
  • the type of this parameter is of the form Type1 <:< Type2

The lowdown is that this pattern tells the compiler:

Make sure that Type1 is a subtype of Type2, or else report an error.

This is part of a feature called generalized type constraints. [4] There is another similar construct, =:=, which tells the compiler: [5]

Make sure that Type1 is exactly the same as Type2, or else report an error.

In what follows, I am focusing on <:< which turns out to be more useful, but just know that =:= is a thing and works in a very similar way.
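
As a quick illustration (a sketch using a throwaway sameType helper of my own, not a standard function):

def sameType[A, B](implicit ev: A =:= B) = ()

sameType[Int, Int]      // compiles: the types are identical
// sameType[Int, Long]  // does not compile: Cannot prove that Int =:= Long.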

The why and how of this feature is the subject of the rest of this article! So for now, let’s take this as a recipe, a trick if you will, while we look at how this can be useful in practice.

Let’s start with flatten on Option:

def flatten[B](implicit ev: A <:< Option[B]): Option[B]

What does flatten do, as per the documentation? It removes a level of nesting of options:

scala> val oo: Option[Option[Int]] = Some(Some(42))
oo: Option[Option[Int]] = Some(Some(42))

scala> oo.flatten
res1: Option[Int] = Some(42)

This doesn’t make much sense if the type parameter A of Option is not, itself, an Option-of-something. So what should happen if you call flatten on, say, an Option[String]? I see two possibilities:

  1. The flatten method returns None.
  2. The compiler reports an error.

The authors of the Scala standard library picked option 2, and I think that it’s a good choice, because most likely calling flatten in this case is not what the programmer intends. And lo and behold, the compiler doesn’t let this pass:

scala> val oi: Option[Int] = Some(42)
oi: Option[Int] = Some(42)

scala> oi.flatten
<console>:21: error: Cannot prove that Int <:< Option[B].

So we have a generic type, Option[+A], which has a method, flatten, which can only be used if the type parameter A is itself an Option. All the other methods (except orNull which is similar to flatten) can be called: map, get, etc. But flatten? Only if the type of the option is right!

One thing to realize is that we have something unusual here: a method which the compiler won’t let us call, not because we pass incorrect parameters to the method (in fact flatten doesn’t even take any explicit parameters), but based on the value of a type parameter of the underlying Option class. This is not something you see in Java, and you have probably rarely seen it in Scala.

Looking again at the signature of flatten, we can see how the recipe is applied: implicit ev: A <:< Option[B] reads “make sure that A is a subtype of Option[B]”, and, since A stands for the parameter type of Option, we have:

  • in the first case “make sure that Option[Int] is a subtype of Option[B]
  • in the second case “make sure that Int is a subtype of Option[B]

Obviously, Option[Int] can be a subtype of an Option[B], where B = Int (or B = AnyVal, or B = Any). On the flip side, there is just no way Int can be a subtype of Option[B], whatever B might be. So the recipe works, and therefore the constraint works. [6]

To get the hang of it, let’s look at another nice use case, toMap on Traversable:

def toMap[K, V](implicit ev: A <:< (K, V)): Map[K, V]

Translated into English: you can convert a Traversable to a Map with toMap, but only if the element type is a tuple (K, V). This makes sense, because a Map can be seen as a collection of key/value tuples. It wouldn’t make great sense to create a Map just out of a sequence of 5 Ints, for example.
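
For example, a quick check in the REPL (output approximate, printed types abbreviated):

scala> List("a" -> 1, "b" -> 2).toMap
res0: Map[String,Int] = Map(a -> 1, b -> 2)

scala> List(1, 2, 3, 4, 5).toMap
<console>:12: error: Cannot prove that Int <:< (K, V).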

Similar rationales apply to the few other uses of <:< in the standard library, which all come down to constraining methods to work with a specific contained type only.

In light of these examples, I find that applying this recipe is easy, even though the syntax is a bit funny. But I can think of a few questions:

  1. Can’t we just use type bounds, which I thought existed to enforce this kind of type constraint?
  2. If this is a pattern rather than a built-in feature, why does <:< look so much like an operator? Does the compiler have special support for it?
  3. How does this whole thing even work?
  4. Is there an easier way to achieve the same result?

Let’s look into each of these questions in order.

Question 1: Can’t we just use type bounds?

Lower and upper bounds

Type bounds cover lower bounds and upper bounds. [7] These are well explained in the book Programming in Scala, so I won’t cover the basics here, but I will present some perspective on how they work.

As a reminder, lower and upper bounds are expressed with builtin syntax: >: and <: (spec). You can read:

  • T >: U as “type T is a supertype of type U” or “type T has type U as lower bound”
  • T <: U as “type T is a subtype of type U” or “type T has type U as upper bound”
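
For example, here is a small function (a made-up illustration) using an upper bound on its type parameter:

def firstChar[S <: CharSequence](s: S): Char = s.charAt(0)

firstChar("hello")   // ok: String is a subtype of CharSequence
// firstChar(42)     // does not compile: Int is not a subtype of CharSequence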

A puzzler

Let’s consider the following:

scala> def tuple[T, U](t: T, u: U) = (t, u)
tuple: [T, U](t: T, u: U)(T, U)

My tuple function simply returns a tuple of the two values passed, whatever their types might be. Granted, it’s not very useful!

What are T and U? They are type variables: they stand for actual types that will be decided at each call site (each use of the function in the source code). Here both T and U are abstract: we don’t know what they will be when we write the function. For example we don’t say that U is a Banana, which would be a concrete type.

If we pass String and Int parameters, we get back a tuple (String, Int) in the Scala REPL:

scala> tuple("Lincoln", 42)
res1: (String, Int) = (Lincoln,42)

Now let’s consider the following modification, which is a naive attempt at enforcing type constraints with type bounds:

def tupleIfSubtype[T <: U, U](t: T, u: U) = (t, u)

I know, it’s starting to be like an alphabet (and symbol) soup! But let’s stay calm: the only change is that instead of specifying just T as type parameter, we specify T <: U, which means “T must be a subtype of U”.

The intent of tupleIfSubtype is to return a tuple of the two values passed, like tuple above, but fail at compile time if the first value is not a subtype of the second value.

So does the newly added constraint work? Do you think that the compiler will accept to compile this?

tupleIfSubtype("Lincoln", 42)

Before knowing better, I would have thought that the compiler:

  • would decide that T = String
  • would decide that U = Int
  • see the type constraint T <: U, which translates into String <: Int
  • fail compilation because obviously, String is not a subtype of Int

But it turns out that this actually compiles just fine!

How can this be? Is the constraint not considered? Is it a bug in the compiler? A weird edge case? Bad language design? Or maybe, with T <: U, the U is not the same as the second U in the type parameter section? This can quickly be proven false:

scala> def tupleIfSubtype[T <: V, U](t: T, u: U) = (t, u)
<console>:7: error: not found: type V
       def tupleIfSubtype[T <: V, U](t: T, u: U) = (t, u)

So the two Us are, as seemed to make sense intuitively, bound to each other (they refer to the same type).

The answer to this puzzler turns out to be relatively simple: it has to do with the way type inference works, namely that type inference solves a constraint system (spec).

What happens is that yes, I do pass String and Int, but it doesn’t follow that T = String and U = Int. Instead, the effective T and U are the result of the compiler working its type inference algorithm, given:

  • the types of the parameters we actually pass to the function,
  • the constraints expressed in the type parameter section,
  • and, in some cases, the expression’s return type.

If I write:

scala> def tuple[T, U](t: T, u: U) = (t, u)
tuple: [T, U](t: T, u: U)(T, U)

scala> tuple("Lincoln", 42)
res3: (String, Int) = (Lincoln,42)

then yes: T = String and U = Int because there are no other constraints. But when I introduce an upper bound, there is a constraint, and therefore a constraint system. The compiler resolves it and obtains T = String and U = Any:

scala> def tupleIfSubtype[T <: U, U](t: T, u: U) = (t, u)
tupleIfSubtype: [T <: U, U](t: T, u: U)(T, U)

scala> tupleIfSubtype("Lincoln", 42)
res4: (String, Any) = (Lincoln,42)

We can verify that the resulting types satisfy the constraints: [8]

  • String is a String of course
  • Int is a subtype of Any
  • String is also a subtype of Any

Phew! So this makes sense. It’s “just” a matter of understanding how type inference works when type bounds are present.

In the process we have learned that <: and >:, when used with abstract type parameters, do not necessarily produce results which are very useful, because the compiler can easily infer Any (or AnyVal or AnyRef) as solutions to the constraint system. [9]

Question 2: Why does <:< look so much like an operator?

Let’s dig a little deeper to understand how <:< works under the hood. Here is a simple type hierarchy used in the examples below:

trait Fruit
class Apple  extends Fruit
class Banana extends Fruit

Parameter lists and type inference

Let’s start with a couple more things you need to know in Scala:

  • Functions can have more than one parameter list.
  • Type inference operates parameter list per parameter list from left to right.

In particular, an implicit parameter list can use types inferred in previous parameter lists.
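
Here is a small made-up example of the second point: because T is already fixed by the first parameter list, the function literal in the second list needs no type annotation:

def pair[T](t: T)(transform: T ⇒ T) = (t, transform(t))

pair(41)(_ + 1)   // T = Int is inferred from the first list; result: (41,42)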

So let’s write the solution, without necessarily understanding it fully yet:

def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <:< U) = (t, u)

This function has two parameter lists:

  • (t: T, u: U)
  • (implicit ev: T <:< U)

Because type inference goes parameter list by parameter list, let’s start with the first one. You notice that there are no >: or <: type bounds! So:

  • T is whatever specific type t has (say T = Banana)
  • U is whatever specific type u has (say U = Fruit)

Infix types

Looking at the second parameter list, we have to clear a hurdle: what kind of syntax is T <:< U? This notation is called an infix type (spec). “Infix” just means that a type appears in the middle of two other types, the same way the + operator appears in the middle in 2 + 2. The type in the middle (the infix type proper) can be referred to as an infix operator. Instead of this operator being a method, as is generally the case in Scala, it is a type.

Let’s look at examples. You probably know types from the standard library such as:

Map[String, Fruit]
Either[String, Boolean]

These exact same types can be written:

String Map Fruit
String Either Boolean

The infix notation makes the parametrized type look like an operator, but an operator on types instead of values. Other than that, it’s just an alternate syntax, and really nothing to worry about!

So based on the above:

T <:< U

means the same as:

<:<[T, U]

A symbolic type name

Now, what is a <:<? It’s a type: the same kind of stuff as Map or Either, in other words, typically a class or a trait. It’s just that this is a symbolic name instead of an alphabetic identifier like Map. It could as well have been called SubtypeOf, and maybe it should have been!
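
Nothing stops us from giving it a friendlier alias ourselves (a sketch; SubtypeOf is not a standard name), which also shows that the infix notation works for any type with two parameters:

type SubtypeOf[A, B] = A <:< B

def check[A, B](implicit ev: A SubtypeOf B) = ()

check[String, AnyRef]      // compiles: String is a subtype of AnyRef
// check[AnyRef, String]   // does not compile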

The implicit parameter

So once we reach the second (and implicit) parameter list:

(implicit ev: T <:< U)

we see that there is a parameter of type <:<, which itself has two type parameters, T and U. These are the same T and U we have in the first parameter list. They are bound to those, and these are known because type inference already did its magic on the first parameter list. Concretely, T is now assigned the type Banana and U the type Fruit!

What the implicit parameter list says is this: “Find me, somewhere in the implicit search path, an implicit val, def, or class of type <:< which satisfies <:<[T, U]”. [10] And because T and U are now known, we need to find an implicit match for <:<[Banana, Fruit].

The trick is to manage to have an implicit definition in scope which matches only if there is a subtype relationship between T and U. For example:

T        U        Compiler happiness level
Banana   Fruit    happy
Apple    Fruit    happy
Int      AnyVal   happy
Apple    Banana   unhappy
String   Int      unhappy

If we manage to create such an implicit definition, the constraint mechanism works. And we already know that the clever engineers who devised this have found a way to create such an implicit definition!

By the way, we can play with this in the REPL using the standard implicitly function, which returns an implicit value for the given type parameter if one is found:

implicitly[Banana <:< Fruit]  // ok
implicitly[Apple  <:< Fruit]  // ok
implicitly[Int    <:< AnyVal] // ok
implicitly[Apple  <:< Banana] // not ok
implicitly[String <:< Int]    // not ok

To summarize, we now have a pretty good level of understanding and we know that:

  • we are talking about a library feature
  • which relies on an implicit parameter
  • with a funny symbolic type operator <:<.

And we also know that the magic that makes it all work lies in the search for a matching implicit definition: if it is found, the subtyping relationship holds, otherwise it doesn’t and the compiler reports an error.

Question 3: How does this whole thing even work?

We could stop here and be happy to use <:< like a recipe, as if it was a core language feature. But that wouldn’t be very satisfying, would it? After all, we still miss the deeper understanding of how that magic implicit is defined, and why an implicit search for it may or may not match it. So let’s keep going!

The implementation

Let’s look at the implementation of <:<, which we find in the Scala Predef object [11]:

@implicitNotFound(msg = "Cannot prove that ${From} <:< ${To}.")
sealed abstract class <:<[-From, +To] extends (From ⇒ To) with Serializable

private[this] final val singleton_<:< = new <:<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <:< A = singleton_<:<.asInstanceOf[A <:< A]

Wow! Can we figure it out? Let’s try.

Which implicit?

Let’s think about a simpler case of implicit search:

def makeMeASandwich(implicit logger: Logger) = ...

implicit def findMyLogger: Logger = ...

val mySandwich = makeMeASandwich

The compiler, when you write makeMeASandwich without an explicit parameter, looks for an implicit in scope of type Logger. Here, the obvious matching implicit is findMyLogger, because it returns a Logger. So the compiler selects the implicit and in effect rewrites your code as:

val mySandwich = makeMeASandwich(findMyLogger)

The same mechanism is at work with implicit ev: T <:< U: the compiler must find an implicit of type T <:< U (or <:<[T, U] which is exactly the same). And there is only one implicit definition with type <:<-of-something in the whole standard library, which is:

implicit def $conforms[A]: A <:< A

Now there is a bit of a twist, because the implicit is of type <:<[A, A], with a single type parameter A, which in addition is abstract. [12] Anyhow, this means that our function parameter ev of type <:<[T, U] must, somehow, “match” with <:<[A, A].

If we ask ourselves: what does it take for this implicit of type <:<[A, A] to be successfully selected by the compiler? The answer is that one should be able, for some type A to be determined, to pass a value of type <:<[A, A] to the parameter of type <:<[T, U]. Another way to say this is that <:<[A, A] must conform to <:<[T, U]. If we can’t do this, the implicit search will fail.
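
When the search succeeds, the compiler in effect passes the selected implicit value to the method, and we can mimic that by hand (a rough sketch of what happens, not actual compiler output):

val oo: Option[Option[Int]] = Some(Some(42))

oo.flatten                                           // the compiler fills in ev for us
oo.flatten(implicitly[Option[Int] <:< Option[Int]])  // roughly what the compiler passes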


How does this conformance work? This takes us to the notion of variance, which is always a fun thing. Consider a Scala immutable Seq. It is defined as trait Seq[+A]. The little + says that if I require a Seq[Fruit], I can pass a Seq[Banana] just fine:

def takeFruits(fruits: Seq[Fruit]) = ...
takeFruits(Seq(new Banana))

This is called covariance (the subtyping of the type argument goes in the same direction as the enclosing type). Without the notion of covariance and contravariance (where subtyping of the type argument goes in the opposite direction to the enclosing type), you:

  • either can never write code like this (everything is invariant)
  • or you have an unsound type system [13]

Besides collections, functions are another example where variance and contravariance matter. Say the following process function expects a single parameter, which is a function of one argument:

def process(f: Banana ⇒ Fruit)

I can of course pass to process a function with these exact same types:

def f1(f: Banana): Fruit = f

But Scala’s support for subtyping also applies to functions: a function can be a subtype of another function. So I can pass a function with types different from Banana and Fruit without breaking expectations as long as the function:

  • takes a parameter which is a supertype of Banana
  • returns a value of a subtype of Fruit

For example:

def f2(f: Fruit): Apple = new Apple

This is the magic of variance, and you can convince yourself that it makes sense from the point of view of the process function: expectations won’t be violated.
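
Putting this together in a small sketch (process is given a body here just for illustration):

def process(f: Banana ⇒ Fruit): Fruit = f(new Banana)

val exact:   Banana ⇒ Fruit = (b: Banana) ⇒ b           // the exact expected types
val widened: Fruit  ⇒ Apple = (_: Fruit)  ⇒ new Apple   // supertype parameter, subtype result

process(exact)     // ok
process(widened)   // also ok, thanks to how function types vary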

Functions are not a special case in Scala from this point of view: a function of one parameter is defined (in a simplified version) as a trait with the function parameter as contravariant and the result as covariant:

trait Function1[-From, +To] { def apply(from: From): To }

Putting everything together

After this detour in variance land, let’s get back to <:< and the implicit parameter. The implicit <:<[A, A] will conform to the parameter <:<[T, U] if it follows the variance rules. So what’s the variance on <:<[T, U] in Predef?

<:<[-From, +To]

This is the same as Function1[-From, +To] and in fact <:< extends Function1! So our problem comes down to the following question: if somebody requires a function:

T ⇒ U

what constraints must be satisfied so I can pass the following function:

A ⇒ A

With variance rules, we know it will work if:

  • A is supertype of T
  • and A is subtype of U

Written in terms of bounds:

T <: A <: U

Which means of course that T <: U: T must be a subtype of U!

To summarize the reasoning: the only eligible implicit definition in scope, $conforms, can be selected by the compiler and passed to our function if and only if T is a subtype of U! And that’s exactly what we were looking for! [14]

You can look at this from a slightly more general angle, which is that a function A ⇒ A can only be passed to a function T ⇒ U if T is a subtype of U. [15] You can in fact test the matching logic very simply with the built-in identity function:

val f: Banana ⇒ Fruit  = identity // ok
val f: Fruit  ⇒ Banana = identity // not ok

The same works with $conforms, which returns an <:<, which is also an identity function:

val f: Banana ⇒ Fruit  = $conforms // ok
val f: Fruit  ⇒ Banana = $conforms // not ok

So it is a neat trick that the library authors [16] pulled off here, combining implicits and conformance of function types to implement constraint checking.

The nitty-gritty

The rest of the related code in Predef is about defining the actual <:< type and creating a singleton instance returned by def $conforms[A], because in case the implicit search matches, it must return a real value after all.

You could write it minimally (using <::< in these attempts so as to not clash with the standard <:<):

sealed trait <::<[-From <: To, +To] extends (From ⇒ To) {
  def apply(x: From): To = x
}

implicit def $conforms[A]: A <::< A =
    new <::<[A, A] {}

But oops, the compiler complains:

scala> def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <::< U) = (t, u)
<console>:23: error: type arguments [T,U] do not conform to trait <::<'s type parameter bounds [-From <: To,+To]
       def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <::< U) = (t, u)

The good news is that the following version, using an intermediate class, works:

sealed trait <::<[-From, +To] extends (From ⇒ To)

final class $conformance[A] extends <::<[A, A] {
  def apply(x: A): A = x
}

implicit def $conforms[A]: A <::< A =
  new $conformance[A]

So this works great, with a caveat: every time you use my version of <::<, a new instance of the $conformance class is created. Since we just want an identity function, which works the same for all types and doesn’t hold state, it would be good to use a singleton so as to avoid unnecessary allocations. We could try using an object, since that’s how we do singletons in Scala, but that’s a dead-end because objects cannot take type parameters:

scala> implicit object Conforms[A] extends (A ⇒ A) { def apply(x: A): A = x }
<console>:1: error: ';' expected but '[' found.
implicit object Conforms[A] extends (A ⇒ A) { def apply(x: A): A = x }

So in the end the standard implementation cheats by creating an untyped singleton using Any, and casting to A <:< A in the implementation of $conforms. Here is my attempt, which works fine:

private[this] final object Singleton_<::< extends <::<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <::< A =
    Singleton_<::<.asInstanceOf[A <::< A]

The actual Scala implementation opts for using a val instead of an object (maybe to avoid the cost associated with an object’s lazy initialization):

private[this] final val singleton_<:< = new <:<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <:< A =
    singleton_<:<.asInstanceOf[A <:< A]

We are only missing one last bit:

@implicitNotFound(msg = "Cannot prove that ${From} <:< ${To}.")
sealed abstract class <:<[-From, +To] ...

This helps provide the user with a nice message when the implicit is not found. From a syntax perspective, it is a regular annotation, which applies to the abstract class <:<. The annotation is known by the compiler. [17]

So here we are: the implementation is explained! It’s a bit trickier than it should be in order to prevent extra allocations. I confess that I am a bit disappointed that there doesn’t seem to be a way to avoid an asInstanceOf: even though it’s local to the implementation and therefore the lack of safety remains under control, it would be better if it could be avoided.

An implicit conversion

One thing you might wonder is what to do with the ev parameter. After all, a value must be passed to the function when the implicit is found (when it’s not found, the compiler blows up so ev doesn’t need an actual value).

A first answer is that you don’t absolutely need to use it. It’s there first so the compiler can check the constraint. That’s why it’s commonly called ev, for “evidence”: its presence stands there as a proof that something (an implicit) exists.

Nonetheless, ev must have a value. What is it? It’s the result of the $conforms[A] function, whose type conforms to <:<[T, U]. And we have seen above that <:< extends T ⇒ U. So the result of $conforms[A] is a function which takes an A and returns an A, that is, an identity function. And it not only returns a value of the same type A, but it actually returns the same value which was passed (that’s the idea of an identity function).

And you see that in the implementation:

def apply(x: Any): Any = x

It follows that the value of ev is an identity function from T to U: it takes a value t of type T and returns that same value, but typed as U. This is possible, and makes sense, as we know that T is a subtype of U, otherwise the implicit wouldn’t have been found.

But there is more: ev is also an implicit conversion from T to U (from Banana to Fruit). How so? Because it has the keyword implicit in front of it, that’s why!

To contrast with regular type bounds, if you write:

def tupleIfSubtype[T <: U, U](t: T, u: U) = ...

the compiler knows that T is a subtype of U, thanks to the native semantics of <:. But with <:<, the compiler knows nothing of the sort based on the type parameter section.

However the presence of the implicit ev function makes it possible to use the value t of type T as a value of type U. The subtype relationship can be seen as an implicit conversion. This is much safer than using t.asInstanceOf[U]. So you can write:

def tToU[T, U](t: T, u: U)(implicit ev: T <:< U): U = t

You could also be extra-explicit and write:

def tToU[T, U](t: T, u: U)(implicit ev: T <:< U): U = ev(t)

Without the implicit conversion, the compiler complains:

scala> def tToU[T, U](t: T, u: U): U = t
<console>:10: error: type mismatch;
 found   : t.type (with underlying type T)
 required: U
       def tToU[T, U](t: T, u: U): U = t

You can see how Option.flatten makes use of the ev() function:

def flatten[B](implicit ev: A <:< Option[B]): Option[B] =
  if (isEmpty) None else ev(this.get)
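
To see the whole pattern outside the standard library, here is a minimal sketch with a hypothetical Box class and its own constrained flatten:

case class Box[+A](value: A) {
  // flatten is only callable when A is itself a Box[B]
  def flatten[B](implicit ev: A <:< Box[B]): Box[B] = ev(value)
}

Box(Box(42)).flatten   // Box(42)
// Box("hi").flatten   // does not compile: Cannot prove that String <:< Box[B].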

In summary, all these features fall together to produce something that makes a lot of sense and is useful.

Question 4: Is there an easier way to achieve the same result?

There is at least one other way to achieve something like what <:< does. The idea is that a method such as flatten does not need to be included on the base class or trait, in this case Option. Instead, via implicit conversions, Scala in effect supports extension methods (AKA the “enrich my library” pattern).

So say that such an extension method is only available on values of type Option[Option[T]]:

implicit class OptionOption[T](val oo: Option[Option[T]]) extends AnyVal {
  def flattenMe: Option[T] = oo flatMap identity
}

If we try to apply it to Some(Some(42)), the method is found and the flattening works:

scala> Some(Some(42)).flattenMe
res0: Option[Int] = Some(42)

If we try to apply it to Some(42), the method is not found and the compiler reports an error:

scala> Some(42).flattenMe
<console>:13: error: value flattenMe is not a member of Some[Int]

But I see a few differences with the <:< operator:

  • You need to create one implicit class for each type supporting a conversion. In the case of Option, for example, you need one implicit class taking an Option[Option[T]] to support flatten, and another implicit class to support orNull (see the sketch after this list). So this requires a bit more boilerplate per method than <:<.
  • I am not sure whether there is something similar to @implicitNotFound to report a better error in case of a problem.
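
For instance, supporting an orNull-style method this way requires its own implicit class (a hypothetical sketch):

implicit class NullableOption[T >: Null](val o: Option[T]) extends AnyVal {
  def orNullMe: T = o getOrElse null
}

Some("hello").orNullMe            // "hello"
(None: Option[String]).orNullMe   // null
// Some(42).orNullMe              // does not compile: Int is not a supertype of Null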

So why not do it this way? I think that a good case can be made that it is easier to understand in the case of the relatively simple examples we have seen so far.

On the other hand, <:< is a more flexible library feature which you can reuse easily and even combine with other implicits, like in this example using Shapeless: [18]

def makeJava[F, A, L, S, R](f: F)(implicit
  ftp: FnToProduct.Aux[F, L => S],
  ev: S <:< Seq[R],
  ffp: FnFromProduct[L => JList[R]]
) = ???

Finally, when using type bounds, the constraints expressed with <: and >: can only apply to the method type parameters (or class type parameters when they are used on a class). This is very useful, as we have seen. But when using <:<, you can constrain any two types in scope, and even impose multiple such constraints. Your imagination is the limit:

trait T[A, B] {

  type C
  type D

  def constrainTwoTraitParams         (implicit ev: A <:< B) = ()
  def constrainTraitParamAndTypeMember(implicit ev: A <:< C) = ()
  def constrainTwoTypeMembers         (implicit ev: C <:< D) = ()
  def constrainMore[Y](c: Y)          (implicit ev1: A <:< B, ev2: Y <:< C) = ()
}

class Impl extends T[Banana, Fruit] {
  type C = Fruit
  type D = String
}

You can even go further and constrain not only these types directly, but higher-order types, as in this (math- and symbol-heavy) example from Miles Sabin:

def size[T](t: T)(implicit ev: (¬¬[T] <:< (Int ∨ String))) = ???

In this case, the constraint is not directly on the T type parameter, but on ¬¬[T].

This might be, after all, how the term “generalized type constraint” gets its name.


We have seen how regular type bounds:

  • behave when using abstract type parameters
  • but don’t work to actually enforce certain useful constraints.

We have also seen how we can use instead a generalized type constraint expressed with <:<:

  • to implement methods which can only be used when types are aligned in a certain way
  • and how <:< is not a built-in feature of the compiler, but instead a library feature implemented via a smart trick involving implicit search and type conformance.

Finally, we have considered:

  • how the simple use cases in the standard library could be implemented differently
  • but also how <:< is a more general tool.

So is <:< worth it? Should it be part of the standard library, and should Scala developers learn it?

I think that the feature suffers from the fact that it is not properly documented, explained, and put in perspective. It also suffers from being a symbolic name with no agreed-upon way to pronounce it!

The standard library uses of <:< could be replaced with “extension methods”, which would achieve the same result via Scala features which are easier to understand and familiar to most Scala programmers. I think that this argues against the presence of <:< in the standard library, especially at the level of Predef, and if this was introduced today, my inclination would be to recommend leaving it to third-party libraries such as Shapeless which actually benefit the most from this kind of advanced features.

On the plus side, when used as a recipe, <:< is easy to understand and useful, and I can’t help being impressed that generalized type constraints are implemented at the library level, and that they can emerge from powerful underlying language features such as type inference and implicits.

This is typical of Scala, and in line with the principle of Martin Odersky that it is better to keep the core language small when possible. So even though the explanation of how <:< works might seem a bit tricky, you can take comfort in thinking that in other languages this might be compiler code, not library code. But I also understand how some programmers [19] might be bothered by all the machinery behind features like this.

As for me, I am keeping generalized type constraints in my toolbox, but I like seeing the feature as a gateway to a more in-depth understanding of Scala. I hope this post will help others along this path as well!

Did I get anything wrong? Please let me know!

  1. Other suggestions include “Madonna wearing a button-down shirt” and “Angry Donkey”!  ↩

  2. It is valid to call this an operator, even though it is not built into the compiler, and is not an operator on values like +: it is instead an operator on types. In fact the Scala spec calls this an infix operator.  ↩

  3. Using generalized type constraints - How to remove code with Scala 2.8.  ↩

  4. I haven’t found a good explanation for the adjective generalized. This makes you think that there are more specific type constraints. But which are those then?  ↩

  5. It seems that there was another <%< operator as well, but it’s nowhere to be found in Scala 2.11. I suspect that, since it was related to the concept of view bounds, which are being deprecated, and probably had no use in the Scala standard library, it was removed at some point.  ↩

  6. The authors of the standard library could have used =:= to say that the type has to be exactly an Option[B], but using the subtyping relationship allows the result of the expression to be a supertype. Assuming Banana <: Fruit:  ↩

    scala> Some(Some(new Banana)).flatten: Option[Fruit]
    res2: Option[Fruit] = Some(Banana())
  7. “Lower bound” and “upper bound” refer to the type hierarchy: if you draw a type hierarchy with the supertypes at the top and subtypes at the bottom, “lower” means being closer to the bottom, and “upper” means closer to the top. So a “lower bound” for a type means the type cannot be under that. Similarly, an “upper bound” means the type cannot be above that.  ↩

  8. The compiler could have chosen the solution Any / Any, or AnyRef / Any. But these would be less useful and the compiler tries to be more specific when it can.  ↩

  9. The Typelevel team in particular wants to address that kind of not-very-useful type inference.  ↩

  10. That’s how all implicit searches work, see Where does Scala look for implicits?.  ↩

  11. In a 2014 commit, the implementation switched to $conforms instead of conforms to avoid accidental shadowing.  ↩

  12. It is a bit unusual to see an implicit definition which is parametrized with an abstract type parameter. Martin Odersky commented on this in a blog post: “The new thing in 2.8 is that implicit resolution as a whole has been made more flexible, in that type parameters may now be instantiated by an implicits search. And that improvement made these classes useful.”  ↩

  13. Mutable Scala collections and arrays, in particular, are invariant, so you cannot assign a mutable.ArrayBuffer[Banana] to a mutable.ArrayBuffer[Fruit], or an Array[Banana] to an Array[Fruit]. Immutable Scala collections are covariant, because it is convenient and safe for them to be. Java arrays are covariant and therefore unsafe!  ↩

  14. The compiler needs to be able to figure out conformance of types outside of implicit search, including every time you pass a parameter to a function. So it’s relatively easy to imagine how the compiler goes through the implicit search path, checking each available implicit, and pondering: “Does this particular implicit have a type which conforms to the required implicit parameter type? If so, I’ll use it, otherwise I’ll continue my search (and fail if the search ends without a match).”.  ↩

  15. In versions of Scala prior to 2.8, the predefined identity function was defined as implicit, and you could use it to implement generalized type constraints. However this early implementation had issues related to implicit search, therefore a new solution was implemented in 2.8 and <:< was introduced. But in fact <:< acts exactly like an implicit identity function under another name! James Iry commented on this topic:  ↩

    BTW, prior to 2.8 the idea could more or less be expressed with

    def accruedInterest(convention: String)(implicit ev: I ⇒ CouponBond): Int = ...

    I say more or less because ev could be supplied by any implicit function that converts I to CouponBond. Normally you expect ev be the identity function, but of course somebody could have written an implicit conversion from say DiscountBond to CouponBond which would screw things up royally.

  16. Jason Zaugg appears to be the mastermind behind it.  ↩

  17. Here is a short blog post on this annotation.  ↩

  18. For more uses of <:<, see Unboxed union types in Scala via the Curry-Howard isomorphism by Miles Sabin.  ↩

  19. See Yang Zhang’s post, which made some noise a while back.  ↩

iPhone 6: Pay less with a little-known T-Mobile plan

T-Mobile SIM Kit

TL;DR: If you have the cash to buy an unlocked iPhone 6 upfront, don’t mind running on the T-Mobile network, and mostly care about data as opposed to voice, you can save well over $500 over a period of two years compared to mainstream plans by AT&T, Verizon or even T-Mobile’s flagship plans.

NOTE: The following post is specific to the US smartphone market.

Over the last 2 years I have been on an AT&T business plan [1] which was not a bad deal by US standards:

  • Upfront cost for the iPhone 5: $363.74 [2]
  • Monthly cost: $74 ($69.99 plus taxes, fees and phone subsidy)
  • Monthly data: 3 GB
  • Contract duration: 2 years

I usually stayed under the included 3 GB, but occasionally went over and had to pay an extra $10 for an additional 1 GB. I made very limited use of voice and text.

As I wanted to get a new iPhone 6 Plus, I considered my options. With that same AT&T business plan, here is what the cost would have been for the next 2 years:

  • Upfront cost for the iPhone 6 Plus 64 GB: $435.91 ($399 + tax) [2]
  • Monthly cost: $74
  • Monthly data: 3 GB
  • Contract duration: 2 years
  • Total cost of ownership: $2,211.91 ($92 / month)

The price of an unlocked iPhone 6 Plus 64 GB, bought directly from Apple, is $927.53 ($849.00 without sales tax). If we spread the total cost over 2 years, we get the following breakdown:

  • Monthly device payment: $927.53 / 24 = $38.65
  • Monthly service cost: $92 - $38.65 = $53.35

Looking at the monthly cost over the same period of time is useful as it allows us to do meaningful comparisons.

Now let’s look at the T-Mobile plans advertised for the iPhone 6. They give you quite a bit (unlimited talk, text and data with data throttling), but they are not cheap: they range from $50 to $80 per month, “plus taxes, fees and monthly device payment”, that is without phone subsidy. They mainly differ by the amount of 4G LTE data you get (from 1 GB to unlimited, and then “your data speed will automatically convert to up to 2G web speeds for the remainder of your billing cycle”). [3]

For my data usage I would probably need the $60 plan (which, remember, doesn’t include taxes and fees, so is probably at least $65 in practice) to have something equivalent to my AT&T plan. This is about $12 more per month ($288 more over 24 months) than my previous AT&T service.

In short, T-Mobile is not a particularly good deal if you care mostly about 4G data. [4] And, by the way, AT&T now has comparable prices as well.

But luckily there is more: T-Mobile also offers prepaid plans. And although the flagship prepaid plans that T-Mobile advertises are the same as their regular plans, you will find, hidden in plain sight, the following:

$30 per month - Unlimited web and text with 100 minutes talk

100 minutes talk | Unlimited text | First 5 GB at up to 4G speeds

Now get unlimited international texting from the U.S. to virtually anywhere included in your plan—at no extra charge.

This plan is only available for devices purchased from Wal-Mart or devices activated on T-Mobile.com.

I had heard of this plan from friends who have been using it for quite a while with Android phones. T-Mobile clearly doesn’t want you to know too much about this: it is a little bit buried, and details of the plan are lacking. But it’s there! [5]

The question now is: does this work with the iPhone 6 Plus? The answer is yes, it does work! Here is what you have to do:

  1. Buy your unlocked (“Contract-free for use on T-Mobile”) iPhone 6 (or 6 Plus) from Apple. [6]
  2. Order the T-Mobile SIM Starter Kit with nano SIM. The kit is $10 but T-Mobile sometimes has promotions (I bought the kit for one cent).
  3. Don’t activate the T-Mobile SIM which comes with your iPhone. [7]
  4. Once you receive the SIM, place it in your iPhone.
  5. Proceed with activation online [8] and choose the $30 plan. [9]
  6. Profit!!!

So now let’s look at the total cost of ownership of this solution over two years:

  • iPhone 6 Plus 64 GB, unlocked, with sales tax: $927.53
  • Monthly cost of plan: $30 [10]
  • Monthly data: 5 GB
  • Total provider cost over 2 years: $30 × 24 = $720
  • Total cost per month including the iPhone: $68.65
  • Total cost of ownership: $1,647.53
  • Savings over my earlier AT&T plan over 2 years: $564.38

Of course, this is still not cheap overall, but it’s a bit better, and in addition I get:

  • 2 GB more 4G data per month than with the AT&T business plan
  • tethering [11]
  • an unlocked phone which I can use on many networks around the world
  • no contract commitment whatsoever
  • the ability to upgrade the phone at any time (just sell it and buy a new one!)
  • the pleasure of giving money to a company a little bit less evil than AT&T and Verizon

There are drawbacks to this solution, in particular:

  • It is unclear whether I could have ported my phone number and still qualify for a “new activation”. I did not try it because I use Google Voice to forward my calls anyway.
  • You are on the T-Mobile network, and this means that you won’t have as much coverage as with AT&T or Verizon.
  • This can be seen as a benefit or a drawback: you have to pay upfront for the phone, and T-Mobile won’t help you pay for it when you get prepaid plans.
  • There are way fewer voice minutes (an option for calling regular phones is to use Skype, Google Hangouts, or another VoIP solution).
  • It is unclear whether fancy features such as Wi-Fi calling [12] or VoLTE are or will be enabled. But since these are voice features and this solution is for people who care more about data than voice, it doesn’t matter much to me.

No matter what, I will see how this fares over the next few months, and in the meanwhile I hope this post will be useful to others!

Disclaimer: This has been working for friends with Android phones and appears to be working for me so far with the iPhone 6 Plus, but I cannot be held responsible if you go this route and have issues of any kind.

For another article comparing plans by major providers, see iPhone 6 Plans Compared: AT&T, Verizon, Sprint, and T-Mobile. Keep in mind that this looks at an iPhone 6, not a 6 Plus (so about $100 of difference), and only 2 GB / month plans.

  1. If you have a company, I recommend you ask AT&T about these plans. You get great customer support and, unlike with business cable, you get more for your money than with consumer plans.  ↩

  2. This is of course not the full price of the phone. It is a downpayment you make on it, and you pay for your phone as part of your monthly plan, in ways which until recently were usually not detailed by providers.  ↩

  3. T-Mobile also has a “Simple Starter 2GB Plan” for $45/month, which includes 2 GB of 4G LTE data, but then cuts off your data. This is not really an option for me.  ↩

  4. T-Mobile also has business plans, but for one line the prices are the same.  ↩

  5. In fact, it is surprising that they even have this plan at all on their site. It makes sense at Wal-Mart, but online? Could it be that they legally have to list it on their site if they provide it at Wal-Mart? I would be curious to know.  ↩

  6. That’s what I did. It might be the same if you get it from T-Mobile, but I haven’t tried.  ↩

  7. I didn’t try to activate it, but I suspect that the activation instructions would lead you to the regular T-Mobile plans without including the $30 prepaid plan. Since the SIM kit was $0.01, I figured I would go the safer route. But even for $10 the price remains reasonable.  ↩

  8. Ignore the voice-based activation which starts when you turn on the phone. Also, I had some trouble with Chrome and then switched to Firefox.  ↩

  9. The plan is marked “for new activations only”, and I am not sure what it means, although by any definition of “new activation” I can think of, mine was a “new activation”.  ↩

  10. And by the way the plan is a round $30 per month: there are no additional taxes or fees.  ↩

  11. With some devices, such as the Nexus 5, tethering is disabled by T-Mobile, while it works fine with the Nexus 4. It is entirely possible that T-Mobile will disable tethering on the iPhone 6 when they get to it. But for now it works.  ↩

  12. Although the T-Mobile site says “WiFi Calling for all T-Mobile customers with a capable device”.  ↩

Reading plan: October checkin

My goal for the month of September was making progress reading Intuition Pumps and Other Tools for Thinking. I am now at page 60, so I consider that this was a success. It does help to have small, achievable goals!

Reading plan: September checkin

My goal for the month of July was:

  • continue on the synthesis and try to have a closure on it, spending about 2 more hours

I didn’t have any goals for August.

I only managed to spend another hour in July, and I don’t have a closure. So I have decided to park this work for a while.

The good news is that I have just started reading Daniel Dennett’s Intuition Pumps and Other Tools for Thinking this month, one of the books on my reading plan for 2014.

I have decided to read this book not in digital format, but on good old paper. I have also decided to allow myself to trash this book with highlights and annotations as I see fit.

My goal for the rest of the month is simply to make progress reading this book.

Rationalizing the iPhone 6 Plus

iPhone 6 and iPhone 6 Plus

Daniel Miessler asked himself which iPhone 6 to get and I did the same. Here are my thoughts.

First, whether considering the 6 or 6 Plus, there is definitely a decrease in pocketability. But I see the move to larger screens as necessary [1] as we use the devices we call phones more and more as computers-which-you-carry-in-your-pocket. [2]

After Apple’s keynote, I hesitated a little bit between the iPhone 6 and the 6 Plus. Initially I was pretty sure that I wanted the larger size. Then I printed the templates and realized that the Plus was larger than I had expected. I started having doubts about whether I would like the larger device, in particular:

  • Will it fit in a pocket relatively comfortably, or will it be a constant annoyance? [3]
  • Will I be able to use it with a single hand at least part of the time? [4]

In the end I decided to get the 6 Plus and to consider it an experiment: a device so different from the ones I have had so far (iPhone 3G, [5] iPhone 4, iPhone 5) might change my habits in some interesting ways.

I am also experimenting in another way: I have had AT&T contracts since the iPhone 3G in 2008. This time around I ordered an unlocked iPhone 6 Plus for use on the T-Mobile network. [6] I like the idea of having an unlocked device, as well as having more options for plans. I will probably try to get one of the T-Mobile prepaid plans (which they don’t advertise much). [7]

Here are the features specific to the 6 Plus I am looking forward to:

  • Improved camera with optical stabilization. I have kids and I consider the camera which I carry with me at all times important. [8]

  • Bigger screen. Many activities should be more comfortable with a larger screen. Will I use my phone for reading more? Will I still be interested in getting the next Kindle?

  • Improved battery life. Depending on which feature you are looking at, the battery life is supposed to be better across the board. For example, Wi-Fi browsing is 10 % longer and standby 60 % longer.

I am also looking forward to the following features shared by the 6 and 6 Plus:

  • Improved speed. The CPU improvement announced is “only” 25% over the iPhone 5S, but the 5S was about twice as fast as the 5, so that will be a nice improvement.

  • The new hardware design. I like the rounded body, which looks more like the original iPhone and should make the device pleasant to hold. The iPhone 3G, while plasticky, was also great to hold due to the curve of its back, and from this perspective the iPhone 4 to iPhone 5S design was a step back.

  • Apple Pay. I don’t need to pay in stores all day long, and this won’t revolutionize payments, [9] but I am intrigued by this combination of Touch ID and NFC. Will it work reliably and fast? Will it work in stores I am likely to visit? The US is finally implementing “Chip and PIN” cards to help prevent fraud. This means that it might become a little slower to pay with cards than it has been so far, as you will have to enter your PIN. [10] Could Apple Pay be slightly more interesting due to this move?

I am looking forward to retire my beat up iPhone 5!

  1. We can say it now that Apple is finally in the race!  ↩

  2. Not in all pockets in the case of the iPhone 6 Plus, Galaxy Note, or the 6" Nokia Lumia 1520. There is word that Google might come out with a large Nexus phone soon as well.  ↩

  3. If not, I can always consider 5.11 TacLite Pro Pant. I don’t think I can consider a European carryall.  ↩

  4. The iPhone 6 Plus has a feature called Reachability to help with this: a double-tap of the home button brings down the content to make it reachable by the thumb.  ↩

  5. Steve Jobs referred to the original iPhone’s screen as “giant”. Times have changed.  ↩

  6. I already knew the price of the device, but it was still a bit of a shock to see the final price in the shopping cart (almost $1,000 with sales tax!). It is a neat trick that the big US carriers have pulled to subsidize the price of devices over 18–24 months. I would bet that a large majority of smartphone users do not know the actual price of the device.  ↩

  7. I don’t have an absolute guarantee that this will work out. But the phone will be unlocked and T-Mobile has a “bring your own device” option so I am hoping things will be smooth.  ↩

  8. The camera of my iPhone 5 has gathered dust inside, and I find myself reaching for my wife’s iPhone 5S regularly. I also take my SLR on specific occasions.  ↩

  9. For two reasons: because the major credit card companies are still involved, and because the system is limited to the Apple ecosystem.  ↩

  10. This American Express FAQ says: “If you have a Chip and PIN enabled Card, you must use your PIN (Personal Identification Number) when prompted, to pay for goods and services”.  ↩

Reading plan: July checkin

My goals for the month of June were:

  • spend 3 quality hours
  • complete synthesis of the organ book

I spent 5 hours in June, which is good as that means I did more than planned! However that was not enough to complete that synthesis.

It’s already mid-July and I have spent another hour on the synthesis. For the rest of the month, I will continue on the synthesis and try to have a closure on it, spending about 2 more hours.

For next month, I want to separate the tasks:

  • planned reading
  • writing about what I read

I do not plan to write about all the books I read, certainly not in depth. Still, if I want to read at least one other book this year, I better start doing the reading part regularly, and if writing is needed consider that a separately planned task.

Thoughts on the Swift language

What it is

I am not a language designer but I love programming languages, so I can’t resist putting down a few rough thoughts on Swift, the new programming language announced on Monday by Apple. It is designed to make Objective-C, the main language used to build apps on iOS and OS X, a thing of the past. I think it’s fair to say that this was, for developers, the highlight of Monday’s WWDC keynote.

Objective-C is a dinosaur language, invented in the early 1980s. If you know any relatively more modern higher-level language (pick one, including C#, Scala, even Hack), it is clear that it has too much historical baggage and not enough of the features programmers expect.

John Siracusa captured the general idea in his 2005 Avoiding Copland 2010 article and its revision, Copland 2010 revisited: Apple’s language and API future, and has kept building a really good case since, in various podcasts, that Apple had to get their act together. Something, anything, had to be done. [1]

There was a possibility that Apple would keep patching Objective-C, moving toward a superset of a safe subset of it. But I don’t think that anybody not working at Apple saw Swift coming that, well, swiftly. [2]

Why this is good for programmers

Reactions to Swift so far seem mostly positive. (I don’t tend to take the negative reactions I have seen seriously, as they are not backed by actual arguments.) As Jeff Atwood tweeted: “TIL nobody actually liked Objective-C.” I share the positive feeling for three reasons:

First, I believe that programming languages matter:

  • they can make developers more or less productive,
  • they can encourage or instead discourage entire classes of errors,
  • they can help or hinder reuse of code,
  • they can make developers more or less happy.

With brute force and billions of dollars, you can overcome many programming language deficiencies. But it remains a waste of valuable resources to write code in an inferior language. Apple has now shown that it understands that and has acted on it, and they should be commended for it.

Second, concepts which many Objective-C developers might not have been familiar with, like closures, immutable variables, functional programming, generics, pattern matching, and probably more, will now be absorbed and understood. This will lead to better, more maintainable programs. This will also make these developers interested in other languages, like Scala, which push some of these concepts further. The bar will be generally raised.

Finally, arguments over the heavy, ugly syntax of Objective-C, and its lack of modern features can be put to rest: Apple has decided the future path for iOS and OS X developers. That ship has sailed.

Where it fits

What kind of language is Swift? I noticed on Twitter that many had a bit of trouble positioning the language. Did Apple reinvent JavaScript? Or Go? Is Swift functional first? Is it even like Scala? What about C#? Or Clojure or XQuery?

I haven’t seen anything in Swift that is not in other programming languages. In fact, Swift features can be found in dozens of other languages (in Lattner’s own words, “drawing ideas from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list”), and that’s why many have found similarities with their language of choice. So Swift is not “innovative”. Instead it is a reasonable mix and match of features which make sense for the Apple ecosystem.

Here are a few essential aspects of Swift which are not language features but which put it in context. These all appear to be essential to Apple:

  1. Owned by Apple: Swift is fully owned by Apple. It does not depend on Oracle (Java/JVM), Microsoft (.NET), or Google.

  2. Objective-C integration: Swift is designed to integrate really well with Objective-C. In fact, this is likely the second most important reason Apple felt they had to create their own language (in addition to ownership). There are precedents: Groovy, Scala, Clojure, Kotlin, Ceylon and others are designed to interoperate well with Java; CoffeeScript with JavaScript; Hack with PHP; Microsoft’s CLR was designed from the get go as a multi-language VM. This is important for initial adoption so that existing code can be reused and the new language progressively introduced. It would have been possible, but much harder, for Apple to pick an existing language.

  3. Static typing: Swift is a statically-typed language. There is type inference, which means you don’t have to actually write down the types everywhere, in particular within functions. But types are there nonetheless. So it looks more like a dynamic language, but it is not one. [3]

  4. A dynamic feel: This is part of the “modern” aspect of Swift: a move toward concision which appeals to programmers used to dynamic languages, but with the presence of static typing under the hood. This combination of terseness and static typing is something Swift shares with Scala.

    Swift has a REPL and Playgrounds (the interactive demo by Chris Lattner looked impressive), which includes what some other environments call “worksheets” and a bit more. Clearly that’s the direction development tools are taking. All of this is becoming mainstream, which again raises the bar.

  5. Native compilation: Swift is compiled down to native code, like C, C++, Objective-C, Go, and Rust. There is no interpreter or VM, as in Java, JavaScript, C#, Ruby, PHP, or all dynamic languages, besides the small Objective-C runtime. Also, it doesn’t have a real garbage collector: it uses automatic reference counting (ARC).

    Swift is a bit odd in that native compilation and lack of full garbage collection make it closer to a systems language, yet it is clearly designed to build applications. I wish the balance had moved more toward the higher level rather than the lower level, but it’s an interesting middle ground.

What’s disappointing

Here are a few aspects of Swift which, at first glance, disappoint me a bit. Keeping in mind that this is a first version of Swift which has room to grow:

  1. Openness: So far Apple has not announced that the Swift compiler would be open source, like the Objective-C compiler. This is a big question mark. It would be the right thing for them to do to open the compiler, and I am hopeful that they will.

  2. Garbage collection: It’s likely that Apple considered that ARC was good enough in most situations, and it makes interoperability with Objective-C (compatibility in terms of memory management) much easier to handle. Still, this would give me trouble. Lack of proper garbage collection means more memory bugs to hunt down.

  3. Concurrency support: Swift doesn’t have async/await (as C#, Scala, and soon JavaScript do), nor futures and promises. Async support is important in client apps as much as in server apps.

  4. Type system: The type system appears very simple. This might be seen as good or bad. The reference book doesn’t even mention the word “variance”. (I suppose Swift picks a default, but doesn’t allow programmers to control that.)

  5. Persistent data structures: There doesn’t seem to be persistent data structures (which are truly immutable yet can be updated efficiently thanks to structural sharing), as in Clojure and Scala. These are incredible tools which many programmers have now found to be essential. Immutability, in general, gives you much increased confidence about the correctness of your code. I would miss them in Swift.

  6. Well, innovation: Dart, Go, Hack, and Swift show that it is very hard for big companies to come up with something really unique in their programming languages. Academia remains the place where new ideas are born and grow. Still, it would have been nice if there was one or two new things in Swift that would make it special, like for example Scala’s implicits which have turned out to have far-reaching consequences (several of which I really like).

Browser and server

I am curious to see if Swift will see adoption on the server, for services. It might make sense for Apple to use Swift internally for their services, although having a language is not enough: you need proper infrastructure for concurrent and distributed computing. Swift is not there yet. But it could be in the future. This is a bit less important to Apple than the client at this time.

What about the browser? Could one conceivably create a Swift-to-JavaScript compiler? I don’t see why not. JVM languages, from Java to Clojure to Scala, now compile to JavaScript. Swift currently uses ARC, but in a browser environment it could probably work with the JavaScript VM’s garbage collector.

So there might be room, in years to come, for Swift to conquer more environments.


Where does Google stand with regards to this? It’s curious, but I think now that it’s Google which might have a programming language problem! Android uses Java but, as famous programming languages guy Erik Meijer tweeted, “Swift makes Java look tired.” (To be fair, most languages make Java look tired.)

Google also has Dart, which so far hasn’t been positioned as a language to develop Android or server apps. But maybe that will come. Go is liked by some for certain types of server applications, but is even more of a “systems language” than Swift, and again Google hasn’t committed to bringing it as a language to write Android apps.

Will Google come up with yet another programming language, targeted at Android? The future will tell. If it was me, which of course it isn’t, Scala or a successor would be my choice as a great, forward-looking language for Android. And Google could point their Android developers to Scala and say “Look, it looks very much like Swift which you already know!” ;)

Did I miss anything? Let me know in the comments or on Twitter.

  1. Back in 2009 I even tweeted:  ↩

    MS has Anders Hejlsberg (C#). The JVM world has Martin Odersky (Scala). Apple should work with Odersky on the next language for OS X.

    Obviously it wasn’t Odersky, but Chris Lattner, who got to be the mastermind of Swift.

  2. Good job by Apple, by the way, to have managed to keep it under covers so well since July 2010!  ↩

  3. There is a difference with languages that have optional types, like Dart and Hack. Dynamic, optionally typed, and statically typed languages can, from a syntax perspective, look very similar. But under the hood some pretty different things take place.  ↩