Scala on Tessel 2

Tessel 2 and its relay module

What is Tessel 2?

Tessel 2 is a Wi-Fi-enabled development board programmable in JavaScript with Node.js. The first units shipped this month. There is a lot that I like about Tessel 2:

  • It is high-level. JavaScript and Node.js greatly lower the barrier of entry since there are so many developers familiar with these technologies.[1]
  • It works out of the box. Take the device, plug it in, push some JavaScript, and it does its magic! There is no need to install your own Linux distribution or additional software.
  • It is autonomous. Thanks to its powerful hardware[2], built-in Wi-Fi and Node.js, it runs independently from other computers or even cables (except for power).
  • It is open. The Tessel 2 software and hardware are open source. In fact, Tessel is not even a company but “just a collection of people who find it worthwhile to spend [their] time building towards the Tessel Project mission.”[3]

In short, Tessel 2 seems perfect for playing with IoT!

From JavaScript to Scala

As soon as I got my Tessel 2, I followed the tutorial to get a basic feel for it, and that went quite smoothly.

But my plan all along had been to use Scala on Tessel 2. You might know Scala primarily as a server-side language running on the Java VM. But Scala also compiles to JavaScript thanks to Scala.js, and it does it spectacularly well.

So I set to do something simple like toggling relays, but in Scala instead of JavaScript. Here are the rough steps:

  • set up a Scala.js sbt project
  • write an app calling the Tessel LED and relay module APIs
  • run sbt fullOptJS to compile the Scala code to optimized JavaScript
  • run t2 run target/scala-2.11/tessel-scala-opt.js to deploy the resulting JavaScript to Tessel
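
On the sbt side, a minimal build for these steps might look like this (a sketch for the Scala.js 0.6.x era; the name and versions are illustrative, not the project’s actual build):

```scala
// build.sbt — illustrative sketch, not the actual project build
enablePlugins(ScalaJSPlugin)

name         := "tessel-scala"
scalaVersion := "2.11.8"
```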

After I figured out a couple of tweaks (scalaJSOutputWrapper and .tesselinclude), it just worked! Here is the code:

import scala.concurrent.duration._
import scala.scalajs.js
import scala.scalajs.js.Dynamic.{global ⇒ g}

object Demo extends js.JSApp {

  def main(): Unit = {

    println(s"starting with node version ${g.process.version}")

    val tessel    = g.require("tessel")
    val relayMono = g.require("relay-mono")

    val relay = relayMono.use(tessel.port.A)

    relay.on("ready", () ⇒ {
      println("Relay ready!")

      // toggle each relay channel periodically
      js.timers.setInterval(2.seconds) {
        relay.toggle(1)
      }

      js.timers.setInterval(1.seconds) {
        relay.toggle(2)
      }
    })

    relay.on("latch", (channel: Int, value: Boolean) ⇒ {
      println(s"Latch on relay channel $channel switched to $value")

      if (value)
        tessel.led.selectDynamic((channel + 1).toString).on()
      else
        tessel.led.selectDynamic((channel + 1).toString).off()
    })
  }
}

Notice how I can call Tessel APIs from Scala without much ado.[4] When used this way, Scala.js works like JavaScript: it’s all dynamic.[5]

Types and facades

But a major reason to use Scala instead of JavaScript is to get help from types. So after that initial attempt I wrote some minimal facades[6] for the Tessel and Node APIs I needed. Facades expose typed APIs to Scala, which allows the compiler to check that you are calling the APIs properly, and also gives your text editor a chance to provide autocompletion and suggestions. You can see this in action in IntelliJ:

Code completion in IntelliJ

Along the way I realized that working on facades is also a great way to learn APIs in depth! Here is the resulting code, using the minimal facades I have so far (you can find everything on GitHub):

import scala.concurrent.duration._
import scala.scalajs.js
import scala.scalajs.js.Dynamic.{global ⇒ g}

object Demo extends js.JSApp {

  def main(): Unit = {

    println(s"starting with node version ${g.process.version}")

    val tessel    = Tessel()
    val relayMono = RelayMono()

    val relay = relayMono.use(tessel.port.A)

    relay.onReady {
      println("Relay ready!")

      // toggle each relay channel periodically
      js.timers.setInterval(2.seconds) {
        relay.toggle(1)
      }

      js.timers.setInterval(1.seconds) {
        relay.toggle(2)
      }
    }

    relay.onLatch { (channel, value) ⇒
      println(s"Latch on relay channel $channel switched to $value")

      if (value)
        tessel.led(channel + 1).on()
      else
        tessel.led(channel + 1).off()
    }
  }
}

As you can see, it’s not very different from the dynamic example, except that I now get help from the editor and compiler.

Why do this, again?

Now you might argue that in both cases the code looks more or less like JavaScript, so why go through the trouble?

It’s true that, superficially, JavaScript and Scala look very similar in these examples. But underneath there is Scala’s type system at work, and this is for me the main reason to want to use that language.

This said, there is more, such as:

  • Immutability by default. I like this because it helps reduce errors and works great with functional programming idioms.
  • Collections. Scala has a very complete collection library, including immutable collections (but you can also use mutable collections).
  • Functional programming. Scala was designed for functional programming from the get-go, and it has some pretty neat third-party functional programming libraries too.

And I could go on with features like case classes, pattern matching and destructuring, for-comprehensions, and more. But I should also mention a few drawbacks of using Scala instead of JavaScript:

  • Harder language. Scala is a super interesting language, but no matter how you look at it, it is a bigger beast than JavaScript.
  • Executable size. Scala.js has an amazing optimizer which strips pretty much any unused bit of code from the resulting JavaScript[7]. Still, the resulting files will likely be larger than what you would get by writing JavaScript by hand. So expect your app to yield uncompressed JavaScript files on the order of a few hundred KB (much smaller when compressed). Tessel doesn’t seem to have any issues with that so far, so it might not be a problem at all, but it’s worth keeping an eye on this as Tessel doesn’t have gigabytes of RAM.
  • Compilation step. There is a compilation and optimization step in addition to publishing the software to Tessel. For my very simple demo, this takes a couple of seconds only. For larger projects, the time will increase. Now this is very manageable thanks to sbt’s incremental compilation, and if you consider that pushing a project to Tessel can take several seconds anyway, I would say that right now it’s not an issue.

So who would want to program Tessel in Scala? Probably not everybody, but it’s a great option to have if you already know the language or are interested in learning it, especially if you are going to write large amounts of code.

What’s next?

I plan to continue playing with Tessel 2 and Scala. The next step is to try to do something fun (and maybe even useful) beyond blinking LEDs and relays!

  1. This trend is definitely in the air. Read for example Why JavaScript is Good for Embedded Systems.  ↩

  2. Tessel 2 is fairly beefy compared to an Arduino board, for example: it features a 580 MHz CPU, built-in 802.11 b/g/n Wi-Fi, and 64 MB of RAM and 32 MB of Flash. You can add more storage via USB.  ↩

  3. From Code of Conduct/About the Tessel Project/How to Get Your Issue Fixed. It is all the more impressive that they managed to make and ship such cool hardware and software.  ↩

  4. There is one exception, which I had missed in an earlier version of this post, which is access to JavaScript arrays. If you only rely on dynamic calls, you have to cast to js.Array[_], or use the selectDynamic() method. Here I chose the latter way. Things look nicer when you use facades.  ↩

  5. Under the hood, this is thanks to Scala’s Dynamic support.  ↩

  6. Scala.js facades are a lot like TypeScript declaration files.  ↩

  7. Also known as DCE for Dead Code Elimination.  ↩

Generalized type constraints in Scala (without a PhD)


Not long ago I stumbled upon the signature of the flatten method on Option:

def flatten[B](implicit ev: A <:< Option[B]): Option[B]

I don’t know about you, but I knew about implicit parameter lists, implicit resolution, and even type bounds. But this funny <:< “sad-with-a-hat” [1] operator [2] was entirely new to me!

Smart people [3] have written about it years ago, but it’s clear that we are talking about a feature which is not well-known and poorly documented, even though it has been available since Scala 2.8. So this article is about figuring out what it means and how it works.

The following deconstruction turns out to be fairly long, but even though <:< itself may not be useful to every Scala programmer, it touches a surprisingly large number of Scala features which most Scala programmers should know.

What it does and how it’s useful

If you search the Scala standard library, you find a few other occurrences of <:<, in particular:

  • on Option:

    def orNull[A1 >: A](implicit ev: Null <:< A1): A1
  • on Traversable (via traits like GenTraversableOnce):

    def toMap[K, V](implicit ev: A <:< (K, V)): Map[K, V]
  • on Either:

    def joinRight[A1 >: A, B1 >: B, C](implicit ev: B1 <:< Either[A1, C]): Either[A1, C]
  • on Try:

    def flatten[U](implicit ev: T <:< Try[U]): Try[U]

You notice that, in all these examples, <:< is used in the same way:

  • there is an implicit parameter list, with a single parameter called ev
  • the type of this parameter is of the form Type1 <:< Type2

The lowdown is that this pattern tells the compiler:

Make sure that Type1 is a subtype of Type2, or else report an error.

This is part of a feature called generalized type constraints. [4] There is another similar construct, =:=, which tells the compiler: [5]

Make sure that Type1 is exactly the same as Type2, or else report an error.

In what follows, I am focusing on <:< which turns out to be more useful, but just know that =:= is a thing and works in a very similar way.
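
As a quick illustration of the difference, implicitly can be used to check both kinds of constraints (a sketch; the commented line fails to compile):

```scala
object ConstraintDemo {
  def main(args: Array[String]): Unit = {
    // <:< requires a subtype relationship
    implicitly[Int <:< AnyVal]    // compiles: Int is a subtype of AnyVal
    // =:= requires type equality
    implicitly[Int =:= Int]       // compiles: Int is exactly Int
    // implicitly[Int =:= AnyVal] // does not compile: subtyping is not enough
    println("constraints hold")
  }
}
```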

The why and how of this feature is the subject of the rest of this article! So for now, let’s take this as a recipe, a trick if you will, while we look at how this can be useful in practice.

Let’s start with flatten on Option:

def flatten[B](implicit ev: A <:< Option[B]): Option[B]

What does flatten do, as per the documentation? It removes a level of nesting of options:

scala> val oo: Option[Option[Int]] = Some(Some(42))
oo: Option[Option[Int]] = Some(Some(42))

scala> oo.flatten
res1: Option[Int] = Some(42)

This doesn’t make much sense if the type parameter A of Option is not, itself, an Option-of-something. So what should happen if you call flatten on, say, an Option[String]? I see two possibilities:

  1. The flatten method returns None.
  2. The compiler reports an error.

The authors of the Scala standard library picked option 2, and I think that it’s a good choice, because most likely calling flatten in this case is not what the programmer intends. And lo and behold, the compiler doesn’t let this pass:

scala> val oi: Option[Int] = Some(42)
oi: Option[Int] = Some(42)

scala> oi.flatten
<console>:21: error: Cannot prove that Int <:< Option[B].

So we have a generic type, Option[+A], which has a method, flatten, which can only be used if the type parameter A is itself an Option. All the other methods (except orNull which is similar to flatten) can be called: map, get, etc. But flatten? Only if the type of the option is right!

One thing to realize is that we have something unusual here: a method which the compiler won’t let us call, not because we pass incorrect parameters to the method (in fact flatten doesn’t even take any explicit parameters), but based on the value of a type parameter of the underlying Option class. This is not something you see in Java, and you have probably rarely seen it in Scala.

Looking again at the signature of flatten, we can see how the recipe is applied: implicit ev: A <:< Option[B] reads “make sure that A is a subtype of Option[B]”, and, since A stands for the parameter type of Option, we have:

  • in the first case “make sure that Option[Int] is a subtype of Option[B]
  • in the second case “make sure that Int is a subtype of Option[B]

Obviously, Option[Int] can be a subtype of an Option[B], where B = Int (or B = AnyVal, or B = Any). On the flip side, there is just no way Int can be a subtype of Option[B], whatever B might be. So the recipe works, and therefore the constraint works. [6]

To get the hang of it, let’s look at another nice use case, toMap on Traversable:

def toMap[K, V](implicit ev: A <:< (K, V)): Map[K, V]

Translated into English: you can convert a Traversable to a Map with toMap, but only if the element type is a tuple (K, V). This makes sense, because a Map can be seen as a collection of key/value tuples. It wouldn’t make much sense to create a Map out of just a sequence of 5 Ints, for example.
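
For example (a sketch; the commented line fails to compile):

```scala
object ToMapDemo {
  def main(args: Array[String]): Unit = {
    val pairs = List("a" -> 1, "b" -> 2) // element type is (String, Int)
    println(pairs.toMap)                 // Map(a -> 1, b -> 2)
    // List(1, 2, 3).toMap               // does not compile:
    //   Cannot prove that Int <:< (K, V).
  }
}
```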

Similar rationales apply to the few other uses of <:< in the standard library, which all come down to constraining methods to work with a specific contained type only.

In light of these examples, I find that applying this recipe is easy, even though the syntax is a bit funny. But I can think of a few questions:

  1. Can’t we just use type bounds which I thought existed to enforce this kind of type constraints?
  2. If this is a pattern rather than a built-in feature, why does <:< look so much like an operator? Does the compiler have special support for it?
  3. How does this whole thing even work?
  4. Is there an easier way to achieve the same result?

Let’s look into each of these questions in order.

Question 1: Can’t we just use type bounds?

Lower and upper bounds

Type bounds cover lower bounds and upper bounds. [7] These are well explained in the book Programming in Scala, so I won’t cover the basics here, but I will present some perspective on how they work.

As a reminder, lower and upper bounds are expressed with builtin syntax: >: and <: (spec). You can read:

  • T >: U as “type T is a supertype of type U” or “type T has type U as lower bound”
  • T <: U as “type T is a subtype of type U” or “type T has type U as upper bound”
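
As a tiny refresher (a sketch; Fruit and Banana are illustrative types, and the commented line fails to compile):

```scala
object UpperBoundDemo {
  trait Fruit
  class Banana extends Fruit

  // T has Fruit as upper bound: T <: Fruit
  def wrap[T <: Fruit](t: T): List[T] = List(t)

  def main(args: Array[String]): Unit = {
    println(wrap(new Banana).size) // 1
    // wrap("not a fruit")         // does not compile: String is not a Fruit
  }
}
```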

A puzzler

Let’s consider the following:

scala> def tuple[T, U](t: T, u: U) = (t, u)
tuple: [T, U](t: T, u: U)(T, U)

My tuple function simply returns a tuple of the two values passed, whatever their types might be. Granted, it’s not very useful!

What are T and U? They are type variables: they stand for actual types that will be decided at each call site (each use of the function in the source code). Here both T and U are abstract: we don’t know what they will be when we write the function. For example we don’t say that U is a Banana, which would be a concrete type.

If we pass String and Int parameters, we get back a tuple (String, Int) in the Scala REPL:

scala> tuple("Lincoln", 42)
res1: (String, Int) = (Lincoln,42)

Now let’s consider the following modification, which is a naive attempt at enforcing type constraints with type bounds:

def tupleIfSubtype[T <: U, U](t: T, u: U) = (t, u)

I know, it’s starting to be like an alphabet (and symbol) soup! But let’s stay calm: the only change is that instead of specifying just T as type parameter, we specify T <: U, which means “T must be a subtype of U”.

The intent of tupleIfSubtype is to return a tuple of the two values passed, like tuple above, but fail at compile time if the first value is not a subtype of the second value.

So does the newly added constraint work? Do you think that the compiler will accept to compile this?

tupleIfSubtype("Lincoln", 42)

Before knowing better, I would have thought that the compiler:

  • would decide that T = String
  • would decide that U = Int
  • see the type constraint T <: U, which translates into String <: Int
  • fail compilation because obviously, String is not a subtype of Int

But it turns out that this actually compiles just fine!

How can this be? Is the constraint not considered? Is it a bug in the compiler? A weird edge case? Bad language design? Or maybe, with T <: U, the U is not the same as the second U in the type parameter section? This can quickly be proven false:

scala> def tupleIfSubtype[T <: V, U](t: T, u: U) = (t, u)
<console>:7: error: not found: type V
       def tupleIfSubtype[T <: V, U](t: T, u: U) = (t, u)

So the two Us are, as seemed to make sense intuitively, bound to each other (they refer to the same type).

The answer to this puzzler turns out to be relatively simple: it has to do with the way type inference works, namely that type inference solves a constraint system (spec).

What happens is that yes, I do pass String and Int, but it doesn’t follow that T = String and U = Int. Instead, the effective T and U are the result of the compiler working its type inference algorithm, given:

  • the types of the parameters we actually pass to the function,
  • the constraints expressed in the type parameter section,
  • and, in some cases, the expression’s return type.

If I write:

scala> def tuple[T, U](t: T, u: U) = (t, u)
tuple: [T, U](t: T, u: U)(T, U)

scala> tuple("Lincoln", 42)
res3: (String, Int) = (Lincoln,42)

then yes: T = String and U = Int because there are no other constraints. But when I introduce an upper bound, there is a constraint, and therefore a constraint system. The compiler resolves it and obtains T = String and U = Any:

scala> def tupleIfSubtype[T <: U, U](t: T, u: U) = (t, u)
tupleIfSubtype: [T <: U, U](t: T, u: U)(T, U)

scala> tupleIfSubtype("Lincoln", 42)
res4: (String, Any) = (Lincoln,42)

We can verify that the resulting types satisfy the constraints: [8]

  • String is a String of course
  • Int is a subtype of Any
  • String is also a subtype of Any

Phew! So this makes sense. It’s “just” a matter of understanding how type inference works when type bounds are present.

In the process we have learned that <: and >:, when used with abstract type parameters, do not necessarily produce results which are very useful, because the compiler can easily infer Any (or AnyVal or AnyRef) as solutions to the constraint system. [9]
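
One way to see the bound actually bite is to pin the type arguments explicitly, leaving inference no room to widen U (a sketch; the commented line fails to compile):

```scala
object BoundsDemo {
  def tupleIfSubtype[T <: U, U](t: T, u: U): (T, U) = (t, u)

  def main(args: Array[String]): Unit = {
    // Inference widens U to Any, so this compiles:
    val p: (String, Any) = tupleIfSubtype("Lincoln", 42)
    println(p)
    // With the type arguments pinned, there is nothing to widen:
    // tupleIfSubtype[String, Int]("Lincoln", 42) // does not compile
  }
}
```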

Question 2: Why does <:< look so much like an operator?

Let’s dig a little deeper to understand how <:< works under the hood. Here is a simple type hierarchy used in the examples below:

trait Fruit
class Apple  extends Fruit
class Banana extends Fruit

Parameter lists and type inference

Let’s start with a couple more things you need to know in Scala:

  • Functions can have more than one parameter list.
  • Type inference operates parameter list per parameter list from left to right.

In particular, an implicit parameter list can use types inferred in previous parameter lists.

So let’s write the solution, without necessarily understanding it fully yet:

def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <:< U) = (t, u)

This function has two parameter lists:

  • (t: T, u: U)
  • (implicit ev: T <:< U)

Because type inference goes parameter list by parameter list, let’s start with the first one. You notice that there are no >: or <: type bounds! So:

  • T is whatever specific type t has (say T = Banana)
  • U is whatever specific type u has (say U = Fruit)

Infix types

Looking at the second parameter list, we have to clear a hurdle: what kind of syntax is T <:< U? This notation is called an infix type (spec). “Infix” just means that a type appears in the middle of two other types, the same way the + operator appears in the middle in 2 + 2. The type in the middle (the infix type proper) can be referred to as an infix operator. Instead of this operator being a method, as is the case in general in Scala, it is a type.

Let’s look at examples. You probably know types from the standard library such as:

Map[String, Fruit]
Either[String, Boolean]

These exact same types can be written:

String Map Fruit
String Either Boolean

The infix notation makes the parametrized type look like an operator, but an operator on types instead of values. Other than that, it’s just an alternate syntax, and really nothing to worry about!
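
A minimal sketch showing that the two notations denote the same type:

```scala
object InfixDemo {
  def main(args: Array[String]): Unit = {
    val m1: Map[String, Int] = Map("a" -> 1)
    val m2: String Map Int   = Map("a" -> 1) // same type, infix notation
    println(m1 == m2) // true
  }
}
```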

So based on the above:

T <:< U

means the same as:

<:<[T, U]

A symbolic type name

Now, what is a <:<? It’s a type: the same kind of stuff as Map or Either, in other words, typically a class or a trait. It’s just that this is a symbolic name instead of an alphabetic identifier like Map. It could as well have been called SubtypeOf, and maybe it should have been!

The implicit parameter

So once we reach the second (and implicit) parameter list:

(implicit ev: T <:< U)

we see that there is a parameter of type <:<, which itself has two type parameters, T and U. These are the same T and U we have in the first parameter list. They are bound to those, and these are known because type inference already did its magic on the first parameter list. Concretely, T is now assigned the type Banana and U the type Fruit!

What the implicit parameter list says is this: “Find me, somewhere in the implicit search path, an implicit val, def, or class of type <:< which satisfies <:<[T, U]”. [10] And because T and U are now known, we need to find an implicit match for <:<[Banana, Fruit].

The trick is to manage to have an implicit definition in scope which matches only if there is a subtype relationship between T and U. For example:

T        U        Compiler Happiness Level
Banana   Fruit    happy
Apple    Fruit    happy
Int      AnyVal   happy
Apple    Banana   unhappy
String   Int      unhappy

If we manage to create such an implicit definition, the constraint mechanism works. And we already know that the clever engineers who devised this have found a way to create such an implicit definition!

By the way, we can play with this in the REPL using the standard implicitly function, which returns an implicit value for the given type parameter if one is found:

implicitly[Banana <:< Fruit]  // ok
implicitly[Apple  <:< Fruit]  // ok
implicitly[Int    <:< AnyVal] // ok
implicitly[Apple  <:< Banana] // not ok
implicitly[String <:< Int]    // not ok

To summarize, we now have a pretty good level of understanding and we know that:

  • we are talking about a library feature
  • which relies on an implicit parameter
  • with a funny symbolic type operator <:<.

And we also know that the magic that makes it all work lies in the search for a matching implicit definition: if it is found, the subtyping relationship holds, otherwise it doesn’t and the compiler reports an error.

Question 3: How does this whole thing even work?

We could stop here and be happy to use <:< like a recipe, as if it was a core language feature. But that wouldn’t be very satisfying, would it? After all, we still miss the deeper understanding of how that magic implicit is defined, and why an implicit search for it may or may not match it. So let’s keep going!

The implementation

Let’s look at the implementation of <:<, which we find in the Scala Predef object [11]:

@implicitNotFound(msg = "Cannot prove that ${From} <:< ${To}.")
sealed abstract class <:<[-From, +To] extends (From ⇒ To) with Serializable

private[this] final val singleton_<:< = new <:<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <:< A = singleton_<:<.asInstanceOf[A <:< A]

Wow! Can we figure it out? Let’s try.

Which implicit?

Let’s think about a simpler case of implicit search:

def makeMeASandwich(implicit logger: Logger) = ...

implicit def findMyLogger: Logger = ...

val mySandwich = makeMeASandwich

The compiler, when you write makeMeASandwich without an explicit parameter, looks for an implicit in scope of type Logger. Here, the obvious matching implicit is findMyLogger, because it returns a Logger. So the compiler selects the implicit and in effect rewrites your code as:

val mySandwich = makeMeASandwich(findMyLogger)
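
Filled in as a runnable sketch (Logger, findMyLogger and the method bodies are placeholders, not real library code):

```scala
object ImplicitDemo {
  case class Logger(name: String)

  implicit def findMyLogger: Logger = Logger("default")

  def makeMeASandwich(implicit logger: Logger): String =
    s"sandwich logged by ${logger.name}"

  def main(args: Array[String]): Unit = {
    println(makeMeASandwich)               // compiler inserts findMyLogger
    println(makeMeASandwich(findMyLogger)) // equivalent explicit call
  }
}
```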

The same mechanism is at work with implicit ev: T <:< U: the compiler must find an implicit of type T <:< U (or <:<[T, U] which is exactly the same). And there is only one implicit definition with type <:<-of-something in the whole standard library, which is:

implicit def $conforms[A]: A <:< A

Now there is a bit of a twist, because the implicit is of type <:<[A, A], with a single type parameter A, which in addition is abstract. [12] Anyhow, this means that our function parameter ev of type <:<[T, U] must, somehow, “match” with <:<[A, A].

So what does it take for this implicit of type <:<[A, A] to be successfully selected by the compiler? The answer is that one should be able, for some type A to be determined, to pass a value of type <:<[A, A] to the parameter of type <:<[T, U]. Another way to say this is that <:<[A, A] must conform to <:<[T, U]. If we can’t do this, the implicit search will fail.


How does this conformance work? This takes us to the notion of variance, which is always a fun thing. Consider a Scala immutable Seq. It is defined as trait Seq[+A]. The little + says that if I require a Seq[Fruit], I can pass a Seq[Banana] just fine:

def takeFruits(fruits: Seq[Fruit]) = ...
takeFruits(Seq(new Banana))

This is called covariance (the subtyping of the type argument goes in the same direction as the enclosing type). Without the notion of covariance and contravariance (where subtyping of the type argument goes in the opposite direction as the enclosing type), you:

  • either can never write code like this (everything is invariant)
  • or you have an unsound type system [13]

Besides collections, functions are another example where covariance and contravariance matter. Say the following process function expects a single parameter, which is a function of one argument:

def process(f: Banana ⇒ Fruit)

I can of course pass to process a function with these exact same types:

def f1(f: Banana): Fruit = f

But Scala’s support for subtyping also applies to functions: a function can be a subtype of another function. So I can pass a function with types different from Banana and Fruit without breaking expectations as long as the function:

  • takes a parameter which is a supertype of Banana
  • returns a value of a subtype of Fruit

For example:

def f2(f: Fruit): Apple = new Apple

This is the magic of variance, and you can convince yourself that it makes sense from the point of view of the process function: expectations won’t be violated.

Functions are not a special case in Scala from this point of view: a function of one parameter is defined (in a simplified version) as a trait with the function parameter as contravariant and the result as covariant:

trait Function1[-From, +To] { def apply(from: From): To }
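
The above can be condensed into a runnable sketch (reusing the Fruit hierarchy from earlier; f2 has a supertype parameter and a subtype result):

```scala
object VarianceDemo {
  trait Fruit
  class Apple  extends Fruit
  class Banana extends Fruit

  // process expects a Banana => Fruit
  def process(f: Banana => Fruit): Fruit = f(new Banana)

  def main(args: Array[String]): Unit = {
    // Fruit => Apple conforms to Banana => Fruit:
    // contravariant parameter, covariant result
    val f2: Fruit => Apple = _ => new Apple
    println(process(f2).getClass.getSimpleName) // Apple
  }
}
```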

Putting everything together

After this detour in variance land, let’s get back to <:< and the implicit parameter. The implicit <:<[A, A] will conform to the parameter <:<[T, U] if it follows the variance rules. So what’s the variance on <:<[T, U] in Predef?

<:<[-From, +To]

This is the same as Function1[-From, +To] and in fact <:< extends Function1! So our problem comes down to the following question: if somebody requires a function:

T ⇒ U

what constraints must be satisfied so I can pass the following function:

A ⇒ A

With variance rules, we know it will work if:

  • A is supertype of T
  • and A is subtype of U

Written in terms of bounds:

T <: A <: U

Which means of course that T <: U: T must be a subtype of U!

To summarize the reasoning: the only eligible implicit definition in scope which can possibly be selected by the compiler to pass to our function is selected if and only if T is a subtype of U! And that’s exactly what we were looking for! [14]

You can look at this from a slightly more general angle, which is that a function A ⇒ A can only be passed to a function T ⇒ U if T is a subtype of U. [15] You can in fact test the matching logic very simply with the built-in identity function:

val f: Banana ⇒ Fruit  = identity // ok
val f: Fruit  ⇒ Banana = identity // not ok

The same works with $conforms, which returns an <:<, which is also an identity function:

val f: Banana ⇒ Fruit  = $conforms // ok
val f: Fruit  ⇒ Banana = $conforms // not ok

So it is a neat trick that the library authors [16] pulled off here, combining implicits and conformance of function types to implement constraint checking.

The nitty-gritty

The rest of the related code in Predef is about defining the actual <:< type and creating a singleton instance returned by def $conforms[A], because in case the implicit search matches, it must return a real value after all.

You could write it minimally (using <::< in these attempts so as to not clash with the standard <:<):

sealed trait <::<[-From <: To, +To] extends (From ⇒ To) {
  def apply(x: From): To = x
}

implicit def $conforms[A]: A <::< A =
  new <::<[A, A] {}

But oops, the compiler complains:

scala> def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <::< U) = (t, u)
<console>:23: error: type arguments [T,U] do not conform to trait <::<'s type parameter bounds [-From <: To,+To]
       def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <::< U) = (t, u)

The good news is that the following version, using an intermediate class, works:

sealed trait <::<[-From, +To] extends (From ⇒ To)

final class $conformance[A] extends <::<[A, A] {
  def apply(x: A): A = x
}

implicit def $conforms[A]: A <::< A =
  new $conformance[A]

So this works great, with a caveat: every time you use my version of <::<, a new $conformance instance is created. Since we just want an identity function, which works the same for all types and doesn’t hold state, it would be good to use a singleton so as to avoid unnecessary allocations. We could try using an object, since that’s how we do singletons in Scala, but that’s a dead-end because objects cannot take type parameters:

scala> implicit object Conforms[A] extends (A ⇒ A) { def apply(x: A): A = x }
<console>:1: error: ';' expected but '[' found.
implicit object Conforms[A] extends (A ⇒ A) { def apply(x: A): A = x }

So in the end the standard implementation cheats by creating an untyped singleton using Any, and casting to A <:< A in the implementation of $conforms. Here is my attempt, which works fine:

private[this] final object Singleton_<::< extends <::<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <::< A =
  Singleton_<::<.asInstanceOf[A <::< A]

The actual Scala implementation opts for using a val instead of an object (maybe to avoid the cost associated with an object’s lazy initialization):

private[this] final val singleton_<:< = new <:<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <:< A =
  singleton_<:<.asInstanceOf[A <:< A]

We are only missing one last bit:

@implicitNotFound(msg = "Cannot prove that ${From} <:< ${To}.")
sealed abstract class <:<[-From, +To] ...

This helps provide the user with a nice message when the implicit is not found. From a syntax perspective, it is a regular annotation, which applies to the abstract class <:<. The annotation is known by the compiler. [17]

So here we are: the implementation is explained! It’s a bit trickier than it should be in order to prevent extra allocations. I confess that I am a bit disappointed that there doesn’t seem to be a way to avoid an asInstanceOf: even though it’s local to the implementation and therefore the lack of safety remains under control, it would be better if it could be avoided.

An implicit conversion

One thing you might wonder is what to do with the ev parameter. After all, a value must be passed to the function when the implicit is found (when it’s not found, the compiler blows up so ev doesn’t need an actual value).

A first answer is that you don’t absolutely need to use it. It’s there first so the compiler can check the constraint. That’s why it’s commonly called ev, for “evidence”: its presence stands there as a proof that something (an implicit) exists.

Nonetheless, ev must have a value. What is it? It’s the result of the $conforms[A] function, which is of course of type <:<[T, U]. And we have seen above that <:< extends T ⇒ U. So the result of $conforms[A] is a function, which takes an A and returns an A, that is, an identity function. And it not only returns a value of the same type A, but it actually returns the same value which was passed (that’s the idea of an identity function).

And you see that in the implementation:

def apply(x: Any): Any = x

It follows that ev has for value an identity function from T to U: it takes a value t of type T and returns that same value but with type U. This is possible, and makes sense, as we know that T is a subtype of U, otherwise the implicit wouldn’t have been found.

But there is more: ev is also an implicit conversion from T to U (from Banana to Fruit). How so? Because it has the keyword implicit in front of it, that’s why!

To contrast with regular type bounds, if you write:

def tupleIfSubtype[T <: U, U](t: T, u: U) = ...

the compiler knows that T is a subtype of U, thanks to the native semantics of <:. But with <:<, the compiler knows nothing of the sort based on the type parameter section.

However the presence of the implicit ev function makes it possible to use the value t of type T as a value of type U. The subtype relationship can be seen as an implicit conversion. This is much safer than using t.asInstanceOf[U]. So you can write:

def tToU[T, U](t: T, u: U)(implicit ev: T <:< U): U = t

You could also be extra-explicit and write:

def tToU[T, U](t: T, u: U)(implicit ev: T <:< U): U = ev(t)

Without the implicit conversion, the compiler complains:

scala> def tToU[T, U](t: T, u: U): U = t
<console>:10: error: type mismatch;
 found   : t.type (with underlying type T)
 required: U
       def tToU[T, U](t: T, u: U): U = t

You can see how Option.flatten makes use of the ev() function:

def flatten[B](implicit ev: A <:< Option[B]): Option[B] =
  if (isEmpty) None else ev(this.get)
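As a quick check, flatten compiles when the evidence is found, and fails to compile otherwise (a minimal sketch you can paste into the REPL):

```scala
// Compiles: the implicit search finds evidence that Some[Int] <:< Option[Int].
val flat: Option[Int] = Some(Some(42)).flatten   // Some(42)

// Does not compile: there is no evidence that Int <:< Option[B] for any B.
// Some(42).flatten
```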

In summary, all these features fall together to produce something that makes a lot of sense and is useful.

Question 4: Is there an easier way to achieve the same result?

There is at least one other way to achieve something like what <:< does. The idea is that a method such as flatten does not need to be included on the base class or trait, in this case Option. Instead, Scala can, via implicit conversions, provide what are in effect extension methods (also known as the “enrich my library” pattern).

So say that such an extension method is only available on values of type Option[Option[T]]:

implicit class OptionOption[T](val oo: Option[Option[T]]) extends AnyVal {
  def flattenMe: Option[T] = oo flatMap identity
}

If we try to apply it to Some(Some(42)), the method is found and the flattening works:

scala> Some(Some(42)).flattenMe
res0: Option[Int] = Some(42)

If we try to apply it to Some(42), the method is not found and the compiler reports an error:

scala> Some(42).flattenMe
<console>:13: error: value flattenMe is not a member of Some[Int]

But I see a few differences with the <:< operator:

  • You need to create one implicit class for each type supporting a conversion. In the case of Option, for example, you need one implicit class taking an Option[Option[T]] to support flatten, and another implicit class to support orNull. So this requires a bit more boilerplate per method than <:<.
  • I am not sure whether there is something similar to @implicitNotFound to report a better error in case of problems.
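For instance, an extension-method counterpart to Option.orNull (which the standard library implements with the evidence Null <:< A1) would need its own implicit class. Here is a hypothetical sketch, where orNullMe is a made-up name:

```scala
// Hypothetical second extension method: only available when T can hold null,
// expressed with a lower type bound instead of <:< evidence.
implicit class OptionOrNull[T >: Null](val o: Option[T]) extends AnyVal {
  def orNullMe: T = o.getOrElse(null)
}
```

Calling Some("x").orNullMe returns "x", and (None: Option[String]).orNullMe returns null, while Some(42).orNullMe does not compile, since Int is not a supertype of Null.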

So why not do it this way? I think that a good case can be made that it is easier to understand in the case of the relatively simple examples we have seen so far.

UPDATE 2015–12–10: Somebody kindly pointed out that at the time generalized type constraints were implemented, Scala didn’t yet have value classes or implicit classes. Missing value classes meant boxing overhead when running extension methods, while missing implicit classes just meant more boilerplate. So using an implicit value class as I did above was not a great option at the time.

On the other hand, <:< is a more flexible library feature which you can reuse easily and even combine with other implicits, like in this example using Shapeless: [18]

def makeJava[F, A, L, S, R](f: F)(implicit
  ftp: FnToProduct.Aux[F, L => S],
  ev: S <:< Seq[R],
  ffp: FnFromProduct[L => JList[R]]
) = ...
Finally, when using type bounds, the constraints expressed with <: and >: can only apply to the method type parameters (or class type parameters when they are used on a class). This is very useful, as we have seen. But when using <:<, you can constrain any two types in scope, and even impose multiple such constraints. Your imagination is the limit:

trait T[A, B] {

  type U
  type V

  def constrainTwoTraitParams         (implicit ev: A <:< B) = ()
  def constrainTraitParamAndTypeMember(implicit ev: A <:< U) = ()
  def constrainTwoTypeMembers         (implicit ev: U <:< V) = ()
  def constrainMore[Y](c: Y)          (implicit ev1: A <:< B, ev2: Y <:< U) = ()
}

class C extends T[Banana, Fruit] {
  type U = Fruit
  type V = String
}

You can even go further and constrain not only these types directly, but higher-order types, as in this (math- and symbol-heavy) example from Miles Sabin:

def size[T](t: T)(implicit ev: (¬¬[T] <:< (Int ∨ String))) = ???

In this case, the constraint is not directly on the T type parameter, but on ¬¬[T].

This might be, after all, how the term “generalized type constraint” gets its name.


We have seen how regular type bounds:

  • behave when using abstract type parameters
  • but don’t work to actually enforce certain useful constraints.

We have also seen how we can use instead a generalized type constraint expressed with <:<:

  • to implement methods which can only be used when types are aligned in a certain way
  • and how <:< is not a built-in feature of the compiler, but instead a library feature implemented via a smart trick involving implicit search and type conformance.

Finally, we have considered:

  • how the simple use cases in the standard library could be implemented differently
  • but also how <:< is a more general tool.

So is <:< worth it? Should it be part of the standard library, and should Scala developers learn it?

I think that the feature suffers from the fact that it is not properly documented, explained, and put in perspective. It also suffers from being a symbolic name with no agreed upon way to pronounce it!

The standard library uses of <:< could be replaced with “extension methods”, which would achieve the same result via Scala features which are easier to understand and familiar to most Scala programmers. I think that this argues against the presence of <:< in the standard library, especially at the level of Predef, and if this was introduced today, my inclination would be to recommend leaving it to third-party libraries such as Shapeless which actually benefit the most from this kind of advanced features.

On the plus side, when used as a recipe, <:< is easy to understand and useful, and I can’t help being impressed that generalized type constraints are implemented at the library level, and that they can emerge from powerful underlying language features such as type inference and implicits.

This is typical of Scala, and in line with the principle of Martin Odersky that it is better to keep the core language small when possible. So even though the explanation of how <:< works might seem a bit tricky, you can take comfort in thinking that in other languages this might be compiler code, not library code. But I also understand how some programmers [19] might be bothered by all the machinery behind features like this.

As for me, I am keeping generalized type constraints in my toolbox, but I like seeing the feature as a gateway to a more in-depth understanding of Scala. I hope this post will help others along this path as well!

Did I get anything wrong? Please let me know!

  1. Other suggestions include “Madonna wearing a button-down shirt” and “Angry Donkey”!  ↩

  2. It is valid to call this an operator, even though it is not built into the compiler, and is not an operator on values like +: it is instead an operator on types. In fact the Scala spec calls this an infix operator.  ↩

  3. Using generalized type constraints - How to remove code with Scala 2.8.  ↩

  4. I haven’t found a good explanation for the adjective generalized. This makes you think that there are more specific type constraints. But which are those then?  ↩

  5. It seems that there was another <%< operator as well, but it’s nowhere to be found in Scala 2.11. I suspect that, since it was related to the concept of view bounds, which are being deprecated, and probably had no use in the Scala standard library, it was removed at some point.  ↩

  6. The authors of the standard library could have used =:= to say that the type has to be exactly an Option[B], but using the subtyping relationship allows the result of the expression to be a supertype. Assuming Banana <: Fruit:  ↩

    scala> Some(Some(new Banana)).flatten: Option[Fruit]
    res2: Option[Fruit] = Some(Banana())
  7. “Lower bound” and “upper bound” refer to the type hierarchy: if you draw a type hierarchy with the supertypes at the top and subtypes at the bottom, “lower” means being closer to the bottom, and “upper” means closer to the top. So a “lower bound” for a type means the type cannot be under that. Similarly, an “upper bound” means the type cannot be above that.  ↩

  8. The compiler could have chosen the solution Any / Any, or AnyRef / Any. But these would be less useful and the compiler tries to be more specific when it can.  ↩

  9. The Typelevel team in particular wants to address that kind of not-very-useful type inference.  ↩

  10. That’s how all implicit searches work, see Where does Scala look for implicits?.  ↩

  11. In a 2014 commit, the implementation switched to $conforms instead of conforms to avoid accidental shadowing.  ↩

  12. It is a bit unusual to see an implicit definition which is parametrized with an abstract type parameter. Martin Odersky commented on this in a blog post: “The new thing in 2.8 is that implicit resolution as a whole has been made more flexible, in that type parameters may now be instantiated by an implicits search. And that improvement made these classes useful.”  ↩

  13. Mutable Scala collections and arrays in Scala, in particular, are invariant so you cannot assign a mutable.ArrayBuffer[Banana] to a mutable.ArrayBuffer[Fruit], or an Array[Banana] to an Array[Fruit]. Immutable Scala collections are covariant, because it is convenient and safe for them to be. Java arrays are covariant and therefore unsafe!  ↩

  14. The compiler needs to be able to figure out conformance of types outside of implicit search, including every time you pass a parameter to a function. So it’s relatively easy to imagine how the compiler goes through the implicit search path, checking each available implicit, and pondering: “Does this particular implicit have a type which conforms to the required implicit parameter type? If so, I’ll use it, otherwise I’ll continue my search (and fail if the search ends without a match).”.  ↩

  15. In versions of Scala prior to 2.8, the predefined identity function was defined as implicit, and you could use it to implement generalized type constraints. However this early implementation had issues related to implicit search, therefore a new solution was implemented in 2.8 and <:< was introduced. But in fact <:< acts exactly like an implicit identity function under another name! James Iry commented on this topic:  ↩

    BTW, prior to 2.8 the idea could more or less be expressed with

    def accruedInterest(convention: String)(implicit ev: I ⇒ CouponBond): Int = ...

    I say more or less because ev could be supplied by any implicit function that converts I to CouponBond. Normally you expect ev be the identity function, but of course somebody could have written an implicit conversion from say DiscountBond to CouponBond which would screw things up royally.

  16. Jason Zaugg appears to be the mastermind behind it.  ↩

  17. Here is a short blog post on this annotation.  ↩

  18. For more uses of <:<, see Unboxed union types in Scala via the Curry-Howard isomorphism by Miles Sabin.  ↩

  19. See Yang Zhang’s post, which made some noise a while back.  ↩

iPhone 6: Pay less with a little-known T-Mobile plan

T-Mobile SIM Kit
T-Mobile SIM Kit

TL;DR: If you have the cash to buy an unlocked iPhone 6 upfront, don’t mind running on the T-Mobile network, and mostly care about data as opposed to voice, you can save well over $500 over a period of two years compared to mainstream plans by AT&T, Verizon or even T-Mobile’s flagship plans.

NOTE: The following post is specific to the US smartphone market.

Over the last 2 years I have been on an AT&T business plan [1] which was not a bad deal by US standards:

  • Upfront cost for the iPhone 5: $363.74 [2]
  • Monthly cost: $74 ($69.99 plus taxes, fees and phone subsidy)
  • Monthly data: 3 GB
  • Contract duration: 2 years

I usually stayed under the included 3 GB, but occasionally went over and had to pay an extra $10 for an additional 1 GB. I made very limited use of voice and text.

As I wanted to get a new iPhone 6 Plus, I considered my options. With that same AT&T business plan, here is what the cost would have been for the next 2 years:

  • Upfront cost for the iPhone 6 Plus 64 GB: $435.91 ($399 + tax) [2]
  • Monthly cost: $74
  • Monthly data: 3 GB
  • Contract duration: 2 years
  • Total cost of ownership: $2,211.90 ($92 / month)

The price of an unlocked iPhone 6 Plus 64 GB, bought directly from Apple, is $927.53 ($849.00 without sales tax). If we spread the total cost over 2 years, we get the following breakdown:

  • Monthly device payment: $927.53 / 24 = $38.65
  • Monthly service cost: $92 - $38.65 = $53.35

Looking at the monthly cost over the same period of time is useful as it allows us to do meaningful comparisons.

Now let’s look at the T-Mobile plans advertised for the iPhone 6. They give you quite a bit (unlimited talk, text and data with data throttling), but they are not cheap: they range from $50 to $80 per month, “plus taxes, fees and monthly device payment”, that is without phone subsidy. They mainly differ by the amount of 4G LTE data you get (from 1 GB to unlimited, and then “your data speed will automatically convert to up to 2G web speeds for the remainder of your billing cycle”). [3]

For my data usage I would probably need the $60 plan (which, remember, doesn’t include taxes and fees, so is probably at least $65 in practice) to have something equivalent to my AT&T plan. This is about $12 more per month ($288 more over 24 months) than my previous AT&T service.

In short, T-Mobile is not a particularly good deal if you care mostly about 4G data. [4] And, by the way, AT&T now has comparable prices as well.

But luckily there is more: T-Mobile also offers prepaid plans. And although the flagship prepaid plans that T-Mobile advertises are the same as their regular plans, you will find, hidden in plain sight, the following:

$30 per month - Unlimited web and text with 100 minutes talk

100 minutes talk | Unlimited text | First 5 GB at up to 4G speeds

Now get unlimited international texting from the U.S. to virtually anywhere included in your plan—at no extra charge.

This plan is only available for devices purchased from Wal-Mart or devices activated on

I had heard of this plan from friends who have been using it for quite a while with Android phones. T-Mobile clearly doesn’t want you to know too much about this: it is a little bit buried, and details of the plan are lacking. But it’s there! [5]

The question now is: does this work with the iPhone 6 Plus? The answer is yes, it does work! Here is what you have to do:

  1. Buy your unlocked (“Contract-free for use on T-Mobile”) iPhone 6 (or 6 Plus) from Apple. [6]
  2. Order the T-Mobile SIM Starter Kit with nano SIM. The kit is $10 but T-Mobile sometimes has promotions (I bought the kit for one cent).
  3. Don’t activate the T-Mobile SIM which comes with your iPhone. [7]
  4. Once you receive the SIM, place it in your iPhone.
  5. Proceed with activation online [8] and choose the $30 plan. [9]
  6. Profit!!!

So now let’s look at the total cost of ownership of this solution over two years:

  • iPhone 6 Plus 64 GB, unlocked, with sales tax: $927.53
  • Monthly cost of plan: $30 [10]
  • Monthly data: 5 GB
  • Total provider cost over 2 years: $30 × 24 = $720
  • Total cost per month including the iPhone: $68.65
  • Total cost of ownership: $1,647.53
  • Savings over my earlier AT&T plan over 2 years: $564.38
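The comparison above boils down to some simple arithmetic, sketched here with the figures from this post:

```scala
// Two-year total cost of ownership, using the figures from this post.
val months = 24

val attTotal     = 435.91 + 74.0 * months   // AT&T: downpayment + monthly plan
val tMobileTotal = 927.53 + 30.0 * months   // T-Mobile: unlocked phone + $30 plan

val savings  = attTotal - tMobileTotal      // about $564 over two years
val perMonth = tMobileTotal / months        // about $68.65 / month, phone included
```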

Of course, this is still not cheap overall, but it’s a bit better, and in addition I get:

  • 2 GB more 4G data per month than with the AT&T business plan
  • tethering [11]
  • an unlocked phone which I can use on many networks around the world
  • no contract commitment whatsoever
  • the ability to upgrade the phone at any time (just sell it and buy a new one!)
  • the pleasure of giving money to a company a little bit less evil than AT&T and Verizon

There are drawbacks to this solution, in particular:

  • It is unclear whether I could have ported my phone number and still qualify for a “new activation”. I did not try it because I use Google Voice to forward my calls anyway.
  • You are on the T-Mobile network, and this means that you won’t have as much coverage as with AT&T or Verizon.
  • This can be seen as a benefit or a drawback: you have to pay upfront for the phone, and T-Mobile won’t help you pay for it when you get prepaid plans.
  • There are far fewer voice minutes (one option for calling regular phones is to use Skype, Google Hangouts, or another VoIP solution).
  • It is unclear whether fancy features such as Wi-Fi calling [12] or VoLTE are or will be enabled. But since these are voice features and this solution is for people who care more about data than voice, it doesn’t matter much to me.

No matter what, I will see how this fares over the next few months, and in the meanwhile I hope this post will be useful to others!

Disclaimer: This has been working for friends with Android phones and appears to be working for me so far with the iPhone 6 Plus, but I cannot be held responsible if you go this route and have issues of any kind.

For another article comparing plans by major providers, see iPhone 6 Plans Compared: AT&T, Verizon, Sprint, and T-Mobile. Keep in mind that this looks at an iPhone 6, not a 6 Plus (so about $100 of difference) and only 2 GB / month plans.

  1. If you have a company, I recommend you ask AT&T about these plans. You get great customer support, and unlike business cable, you get more for your money compared with consumer plans.  ↩

  2. This is of course not the full price of the phone. It is a downpayment you make on it, and you pay for your phone as part of your monthly plan, in ways which until recently were usually not detailed by providers.  ↩

  3. T-Mobile also has a “Simple Starter 2GB Plan” for $45/month, which includes 2 GB of 4G LTE data, but then cuts out your data. This is not really an option for me.  ↩

  4. T-Mobile also has business plans, but for one line the prices are the same.  ↩

  5. In fact, it is surprising that they even have this plan at all on their site. It makes sense at Wal-Mart, but online? Could it be that they legally have to list it on their site if they provide it at Wal-Mart? I would be curious to know.  ↩

  6. That’s what I did. It might be the same if you get it from T-Mobile, but I haven’t tried.  ↩

  7. I didn’t try to activate it, but I suspect that the activation instructions would lead you to the regular T-Mobile plans without including the $30 prepaid plan. Since the SIM kit was $ 0.01, I figured I would go the safer route. But even for $10 the price remains reasonable.  ↩

  8. Ignore the voice-based activation which starts when you turn on the phone. Also, I had some trouble with Chrome and then switched to Firefox.  ↩

  9. The plan is marked “for new activations only”, and I am not sure what it means, although by any definition of “new activation” I can think of, mine was a “new activation”.  ↩

  10. And by the way the plan is a round $30 per month: there are no additional taxes or fees.  ↩

  11. With some devices, such as the Nexus 5, tethering is disabled by T-Mobile, while it works fine with the Nexus 4. It is entirely possible that T-Mobile will disable tethering on the iPhone 6 when they get to it. But for now it works.  ↩

  12. Although the T-Mobile site says “WiFi Calling for all T-Mobile customers with a capable device”.  ↩

Reading plan: October checkin

My goal for the month of September was making progress reading Intuition Pumps and Other Tools for Thinking. I am now at page 60, so I consider that this was a success. It does help to have small, achievable goals!

Reading plan: September checkin

My goal for the month of July was:

  • continue on the synthesis and try to have a closure on it, spending about 2 more hours

I didn’t have any goals for August.

I only managed to spend another hour in July, and I don’t have a closure. So I have decided to park this work for a while.

The good news is that I have just started reading Daniel Dennett’s Intuition Pumps and Other Tools for Thinking this month, one of the books on my reading plan for 2014.

I have decided to read this book not in digital format, but on good old paper. I have also decided to allow myself to trash this book with highlights and annotations as I see fit.

My goal for the rest of the month is simply to make progress reading this book.

Rationalizing the iPhone 6 Plus

iPhone 6 and iPhone 6 Plus
iPhone 6 and iPhone 6 Plus

Daniel Miessler asked himself which iPhone 6 to get and I did the same. Here are my thoughts.

First, whether considering the 6 or 6 Plus, there is definitely a decrease in pocketability. But I see the move to larger screens as necessary [1] as we use the devices we call phones more and more as computers-which-you-carry-in-your-pocket. [2]

After Apple’s keynote, I hesitated a little bit between the iPhone 6 and the 6 Plus. Initially I was pretty sure that I wanted the larger size. Then I printed the templates and realized that the Plus was larger than I had expected. I started having doubts about whether I would like the larger device, in particular:

  • Will it fit in a pocket relatively comfortably, or will it be a constant annoyance? [3]
  • Will I be able to use it with a single hand at least part of the time? [4]

In the end I decided to get the 6 Plus and to consider it an experiment: a device so different from the ones I have had so far (iPhone 3G, [5] iPhone 4, iPhone 5) might change my habits in some interesting ways.

I am also experimenting in another way: I have had AT&T contracts since the iPhone 3G in 2008. This time around I ordered an unlocked iPhone 6 Plus for use on the T-Mobile network. [6] I like the idea of having an unlocked device, as well as having more options for plans. I will probably try to get one of the T-Mobile prepaid plans (which they don’t advertise much). [7]

Here are the features specific to the 6 Plus I am looking forward to:

  • Improved camera with optical stabilization. I have kids and I consider the camera which I carry with me at all times important. [8]

  • Bigger screen. Many activities should be more comfortable with a larger screen. Will I use my phone for reading more? Will I still be interested in getting the next Kindle?

  • Improved battery life. Depending on which feature you are looking at, the battery life is supposed to be better across the board. For example, Wi-Fi browsing is 10 % longer and standby 60 % longer.

I am also looking forward to the following features shared by the 6 and 6 Plus:

  • Improved speed. The CPU improvement announced is “only” 25% over the iPhone 5S, but the 5S was about twice as fast as the 5, so that will be a nice improvement.

  • The new hardware design. I like the rounded body, which looks more like the original iPhone and should make the device pleasant to hold. The iPhone 3G, while plasticky, was also great to hold due to the curve of its back, and from this perspective the iPhone 4 to iPhone 5S design was a step back.

  • Apple Pay. I don’t need to pay in stores all day long, and this won’t revolutionize payments, [9] but I am intrigued by this combination of Touch ID and NFC. Will it work as reliably and fast? Will it work in stores I am likely to visit? The US is finally implementing “Chip and PIN” cards to help prevent fraud. This means that it might become a little slower to pay with cards than it has been so far, as you will have to enter your PIN. [10] Could Apple Pay be slightly more interesting due to this move?

I am looking forward to retiring my beat-up iPhone 5!

  1. We can say it now that Apple is finally in the race!  ↩

  2. Not in all pockets in the case of the iPhone 6 Plus, Galaxy Note, or the 6" Nokia Lumia 1520. There is word that Google might come out with a large Nexus phone soon as well.  ↩

  3. If not, I can always consider 5.11 TacLite Pro Pant. I don’t think I can consider a European carryall.  ↩

  4. The iPhone 6 Plus has a feature called Reachability to help with this: a double-tap of the home button brings down the content to make it reachable by the thumb.  ↩

  5. Steve Jobs referred to the original iPhone’s screen as “giant”. Times have changed.  ↩

  6. I already knew the price of the device, but it was still a bit of a shock to see the final price in the shopping cart (almost $1,000 with sales tax!). It is a neat trick that the big US carriers have pulled to subsidize the price of devices over 18–24 months. I would bet that a large majority of smartphone users do not know the actual price of the device.  ↩

  7. I don’t have an absolute guarantee that this will work out. But the phone will be unlocked and T-Mobile has a “bring your own device” option so I am hoping things will be smooth.  ↩

  8. The camera of my iPhone 5 has gathered dust inside, and I find myself reaching for my wife’s iPhone 5S regularly. I also take my SLR on specific occasions.  ↩

  9. For two reasons: because the major credit card companies are still involved, and because the system is limited to the Apple ecosystem.  ↩

  10. This American Express FAQ says: “If you have a Chip and PIN enabled Card, you must use your PIN (Personal Identification Number) when prompted, to pay for goods and services”.  ↩

Reading plan: July checkin

My goals for the month of June were:

  • spend 3 quality hours
  • complete synthesis of the organ book

I spent 5 hours in June, which is good as that means I did more than planned! However that was not enough to complete that synthesis.

It’s already mid-July and I have spent another hour on the synthesis. For the rest of the month, I will continue on the synthesis and try to have a closure on it, spending about 2 more hours.

For next month, I want to separate the tasks:

  • planned reading
  • writing about what I read

I do not plan to write about all the books I read, certainly not in depth. Still, if I want to read at least one other book this year, I better start doing the reading part regularly, and if writing is needed consider that a separately planned task.