Remington 12 (1924)

One Underwood and two Royals: [1] in April 2017, I was clearly missing another big typewriter brand! So when I saw a Craigslist ad for a Remington 12 that looked to be in beautiful condition, I jumped on it. This became machine number 5 [2] in the collection.

Remington 12 (1924)

The Remington 12, made from 1922 to 1943, is a successor to the Remington 10, the first front-strike machine Remington produced, starting in 1908. [3]

The 12 shows its ancient heritage from the Remington family:

  • It has vertical ribbon spools, a design coming directly from the upstrike machines introduced more than 40 years earlier.
  • The same goes for the right-hand carriage return lever, which made more sense for the upstrikes with their left-hand handle to lift the carriage.

On the other hand, the 12 takes standard width ribbons and, like the late model 10s (as opposed to the early model 10s), it has a modern segment. But unlike the 10 which had an open frame, the body of the 12 is fully enclosed by metal panels. The two side panels have cute little doors to access the ribbon spools.

The right-hand side door

I didn’t have much to do on this machine. I did not attempt to remove the carriage, but I removed all the panels, cleaned the inside (including a round of PB Blaster and mineral spirits), cleaned the types, and used a black Sharpie to touch up some areas of the body. It could use new feet and a new platen: the current rubber is in good shape but very hard.

I particularly like the 5 red tab keys on this machine. The key tops are in great condition. The tab system, which the Remington 10 had already, is nice because you can directly jump to a tab position by pressing the corresponding key, instead of having to press multiple times. This comes at the cost of more mechanical complexity, with a clever system at the back of the carriage to grab the right tab stop depending on the specific tab key pressed. I am missing, unfortunately, 3 of the 5 tab stops in the back.

The 5 red tab keys

The Remington 12 is not a very rare machine, but I just love it and it is now one of my prized machines, proudly displayed in my house.

Pretty view


  1. The Underwood 6, the Royal KHM, and the Royal 5.  ↩

  2. In the meanwhile, I also obtained a fairly cheap Underwood 5, which is machine number 4. But that machine was really not in good condition and I quickly decided to set it aside.  ↩

  3. All the Remington machines before the 10 were upstrike machines, from the Sholes and Glidden to the Remington 6/7/8/9.  ↩

Royal 5 (1913)

After two typewriters in good condition obtained in April 2017, [1] I was ready for something else for machine number 3. I found on eBay a 1913 Royal 5 [2] and I decided to get it. Shipping was about the same as the price of the machine itself, but I just liked the look of the machine and the idea of a challenge!

Indeed, this machine was not great when I got it: the carriage was very rusty, the internals were extremely filthy, a platen knob was broken, and the letter “E”’s glass was shattered. It was missing the draw band and a rubber foot (the other 3 feet were in bad shape). It didn’t work at all. When I got it and took the first pictures, I thought that there was some beauty in its degradation, but I just couldn’t let it stay this way!

Rusty and dusty Royal 5 (1913)

Soon after receiving the machine, I started cleaning it. I removed the carriage (which was fairly easy after my experience with the KHM), dunked it into Evapo-Rust, removed all the panels, used PB Blaster to clean the inside, etc. I also made my first mistake: while cleaning the exterior, I made some of the decals worse. Live and learn! But to be fair, they were covered in grime and shellac and whatnot, which appears to be typical for Royal 5s.

In October 2017, I cleaned the keys, which look surprisingly good now, and sanded the space bar, whose paint was flaking. I plan to keep the wood look, as I think it looks pretty good. I might coat it if I can figure out how.

At the same time, I realized that the carriage was missing its feed roller entirely! That’s something I hadn’t noticed at all when acquiring the machine. But, checking pictures, there was no doubt: the machine had been missing that part from the beginning. No matter how well I cleaned it, it wouldn’t work without that feed roller. So I put the project on hold.

In May 2018, thanks to the magic of a Facebook group and a kind donor, I obtained a set of spare parts for the Royal 5 which happened to include said feed roller, platen knobs (one of the original ones was broken), and even a screw I was missing! So I had all the parts needed to fix the machine!

And so, on a nice day in August 2018, the machine typed again! I even installed some new rubber feet and made a new typebar rest with felt. Now, it doesn’t type perfectly yet. The carriage is a bit wobbly. With shift lock, the carriage gets stuck about halfway through the line. I am not sure if increasing the spring tension will help. But even with these issues, it is possible to type a page pretty nicely. I have already used the machine to write several daily journal entries.

The 1913 Royal 5 typing again!

I for one find this machine quite fascinating. Its flat look is striking, of course, but the way the keys slide through the keyboard panels and pull the typebars is also unique. It is mechanically relatively simple but rock-solid: consider that a cleaning and the replacement of a couple of broken parts were enough to make it type again - in two colors, and even with a tabulator!

The Royal 5 keyboard

This said, this Royal 5 is not what I would call “restored” yet. There are a few things I would like to do:

  1. Restore the “E” key. I made an acrylic replacement for the broken glass and printed a neat “E” letter, so I am almost there.
  2. Repaint the two panels that cover the basket. Those had some rust originally, and the paint started flaking while cleaning. I tried some replacement panels that I got with the lot of spare parts, but those don’t quite fit for some reason! It might have been possible to salvage the original ones, but I decided instead to repaint them. I have already stripped the panels of their paint (which means that they have started to rust again). After painting, ideally, I should reproduce the pinstripes.
  3. Clean the paper table. I only dusted it so far. I need to be careful here, as the decal is in reasonable condition.
  4. Replace the bail rollers. The rubber on this machine is bad in general, but these rollers are badly cracked and need replacing. Maybe I should bite the bullet and replace all the rubber.
  5. Repaint the panels under the keys. Here too some paint has flaked and rust is taking over. I’d like to repaint these panels, which means removing all the keys! Some people have done it, but it’s not exactly quick.
  6. Restore some of the decals and pinstripes. Those on the right side and front of the machine are in bad shape.

I hope to have updates on the progress of this restoration soon!


  1. The Underwood 6 and the Royal KHM.  ↩

  2. Royal came up with their Royal Standard (later rebaptized No. 1) in 1906, and improved on it with the Royal 5 in 1910. These machines are sometimes called “flatbeds” because of their distinctively low profile. It appears that people (or businesses) didn’t exactly love the look and, starting with the beautiful Royal 10 in 1913, the company started making machines that looked more like other successful machines on the market like the Underwood or the Remington.  ↩

Royal KHM (1938)

After the Underwood 6 from 1936, there was a good chance I would look into getting another typewriter. Then this one came up on eBay: I noticed that it didn’t sell and that the seller, while not exactly local, was not very far away. So I contacted him and asked if he would be willing to keep the machine for me. That’s how I got machine number 2 in my collection in April 2017: a 1938 Royal KHM, fairly priced. We took a small family road trip to get it.

Royal KHM (1938)

The KHM probably has the best action of all the machines I have so far. Royal got this part right really early on. Machines from other big brands like Underwood and Remington don’t come close during this period (the 1930s). The KHM is a workhorse and a real pleasure to type with. Also, it’s a basket shift, which means that the shift key is easier to use.

Based on my smooth experience removing the carriage of the Underwood 6 for cleaning, I tried doing this with the Royal. But I learned that not all typewriters have a carriage that comes out easily. I persisted and managed to get it out and back in, with difficulty. It did make cleaning much easier, but I wouldn’t recommend going through this unless really necessary. Adjusting the carriage after putting it back is not exactly easy either. I have since learned that the common wisdom about carriages is “Don’t remove the carriage unless absolutely necessary.” In some cases though, like with early Underwoods, it is so easy that this wisdom shouldn’t apply. [1]

The Royal KHM is a successor to the Royal 10, another classic typewriter known for its side windows - yes, glass windows on the sides! The KHM doesn’t have windows but has two panels which can be removed easily. The panels protect the interior of the machine from dust and give it a fairly modern look. In general I like the look of the KHM, including the smooth black paint and the distinctive ribbon covers on top.

Some keys (“6”, “E”, “R”, “T”, “I”, “O”, “L”) clearly got replaced on this machine during its lifetime: they are white instead of yellow, and convex instead of concave.

The keyboard with replaced keys

The left ribbon cover is missing a part so it doesn’t lock properly. I do need to fix an issue with the right margin release. I am also considering putting on new decals - which I have already bought. It seems typical that these machines lose their decals after a while. I think that with new decals well applied, the machine will look amazing.


  1. The carriage is held in a very similar way on the Royal 5 and 10. You also have two ball bearings, which you don’t have on the Underwood. But if I remember correctly, it was extra difficult on the KHM for some reason.  ↩

Underwood 6 (1936)

In April 2017 I bought my first typewriter: a classic Underwood 6 from 1936. It was actually advertised as an Underwood 5, which would have been even more classic, but I liked the machine and went for it.

Since then, I have accumulated a small collection of typewriters, ranging from the 1890s to the 1980s. I thought I would do a little write-up on some of those machines. [1]

So I am starting with this same Underwood 6 from 1936, machine number 1 in my collection. I spent $100 on it via Craigslist (I would push for less today) but I don’t regret spending this much as the machine is really in great condition and it types really well.

In general, it is good to buy large “desktop” typewriters (standards, as they are called) locally, since shipping them is risky as sellers often don’t know how to pack them well. Many typewriters get damaged in transit this way.

Underwood 6 (1936)

This is a direct descendant of the mythical Underwood 5, the first truly modern typewriter, which set the standard for what typewriters should look like for decades. Like the 5, it features an open frame: you can see the inside of the machine and there is only a panel in front. This is nice if you have an engineer’s mindset, but of course it means that the machine gathers dust more easily. It looks a little more modern than the 5, without the fancy decals of the early 5s, for example. But the decal on the paper table of the Underwood 6 is really beautiful, as are the green and black keys.

The machine is a carriage shift, which means that shifting to uppercase is more tiring than with newer machines: you have to lift the entire carriage up with your little fingers using the shift keys. It’s fine with a bit of practice. Other than that, typing on this machine feels pretty good to me, and I typed many pages on this machine.

Underwoods in general have a great design and are very robust, so it’s no wonder they sold millions of them in the end. It is easy to remove the carriage and to access the inside, [2] so they are good beginner machines if you want to clean in depth or tinker a bit. I love that the bell is visible on the side, and the 6’s tab system is quite nice.

I didn’t have to do much on this machine: a good cleaning, new rubber feet bought online, and a new ribbon (I was so excited to receive the ribbon I had ordered for the machine online!).

As for issues, the color selector is hard to adjust perfectly so that the red doesn’t bleed (this seems typical on Underwoods). The workaround is to just use a black ribbon or the black part of a two-color ribbon on this machine. Also, the left margin is a little lazy, but it’s not a big problem. Finally, I would still like to bend the seal on the left back into shape.

Overall this is one of my favorite machines in the collection and I am really happy to have it.


  1. Only a subset of those are truly working and in display condition. The others need work.  ↩

  2. It is not easy to remove the carriage of many typewriters, including Royals and Remingtons.  ↩

Scala on Tessel 2

Tessel 2 and its relay module

What is Tessel 2?

Tessel 2 is a Wi-Fi-enabled development board programmable in JavaScript with Node.js. The first units shipped this month. There is a lot that I like about Tessel 2:

  • It is high-level. JavaScript and Node.js greatly lower the barrier of entry since there are so many developers familiar with these technologies.[1]
  • It works out of the box. Take the device, plug it, push some JavaScript, and it does its magic! There is no need to install your own Linux distribution or additional software.
  • It is autonomous. Thanks to its powerful hardware[2], built-in Wi-Fi and Node.js, it runs independently from other computers or even cables (except for power).
  • It is open. The Tessel 2 software and hardware are open source. In fact, Tessel is not even a company but “just a collection of people who find it worthwhile to spend [their] time building towards the Tessel Project mission.”[3]

In short, Tessel 2 seems perfect for playing with IoT!

From JavaScript to Scala

As soon as I got my Tessel 2, I followed the tutorial to get a basic feel for it, and that went quite smoothly.

But my plan all along had been to use Scala on Tessel 2. You might know Scala primarily as a server-side language running on the Java VM. But Scala also compiles to JavaScript thanks to Scala.js, and it does it spectacularly well.

So I set out to do something simple, like toggling relays, but in Scala instead of JavaScript. Here are the rough steps:

  • set up a Scala.js sbt project
  • write an app calling the Tessel LED and relay module APIs
  • run sbt fullOptJS to compile the Scala code to optimized JavaScript
  • run t2 run target/scala-2.11/tessel-scala-opt.js to deploy the resulting JavaScript to Tessel
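For reference, the sbt setup for such a project might look roughly like this; the plugin version and project name here are assumptions (the name matches the tessel-scala-opt.js output path above), not necessarily the exact build used:

```scala
// project/plugins.sbt: bring in the Scala.js sbt plugin (version is an assumption)
addSbtPlugin("org.scala-js" % "sbt-scalajs" % "0.6.13")

// build.sbt
enablePlugins(ScalaJSPlugin)

name         := "tessel-scala"  // fullOptJS then emits target/scala-2.11/tessel-scala-opt.js
scalaVersion := "2.11.8"
```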

After I figured out a couple of tweaks (scalaJSOutputWrapper and .tesselinclude), it just worked! Here is the code:

import scala.concurrent.duration._
import scala.scalajs.js
import scala.scalajs.js.Dynamic.{global ⇒ g}

object Demo extends js.JSApp {

  def main(): Unit = {

    println(s"starting with node version ${g.process.version}")

    val tessel    = g.require("tessel")
    val relayMono = g.require("relay-mono")

    val relay = relayMono.use(tessel.port.A)

    relay.on("ready", () ⇒ {
      println("Relay ready!")

      // toggle the two relay channels at different rates
      js.timers.setInterval(2.seconds) {
        relay.toggle(1)
      }

      js.timers.setInterval(1.seconds) {
        relay.toggle(2)
      }
    })

    relay.on("latch", (channel: Int, value: Boolean) ⇒ {
      println(s"Latch on relay channel $channel switched to $value")

      // mirror the relay state on the matching LED
      if (value)
        tessel.led.selectDynamic((channel + 1).toString).on()
      else
        tessel.led.selectDynamic((channel + 1).toString).off()
    })
  }
}

Notice how I can call Tessel APIs from Scala without much ado.[4] When used this way, Scala.js works like JavaScript: it’s all dynamic.[5]
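This dynamic mechanism can be illustrated in plain Scala with a hypothetical Recorder class (not part of any Tessel API): the compiler rewrites method calls on a Dynamic receiver into applyDynamic calls.

```scala
import scala.language.dynamics

// Hypothetical class illustrating scala.Dynamic:
// r.toggle(1) is rewritten by the compiler to r.applyDynamic("toggle")(1).
class Recorder extends Dynamic {
  def applyDynamic(name: String)(args: Any*): String =
    s"$name(${args.mkString(", ")})"
}

val r = new Recorder
assert(r.toggle(1) == "toggle(1)")
assert(r.on("ready") == "on(ready)")
```

Scala.js uses the same language feature to turn calls on js.Dynamic values into plain JavaScript member accesses.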

Types and facades

But a major reason to use Scala instead of JavaScript is to get help from types. So after that initial attempt I wrote some minimal facades[6] for the Tessel and Node APIs I needed. Facades expose typed APIs to Scala, which allows the compiler to check that you are calling the APIs properly, and also gives your text editor a chance to provide autocompletion and suggestions. You can see this in action in IntelliJ:

Code completion in IntelliJ

Here are the minimal facades I have so far:

Along the way I realized that working on facades is also a great way to learn APIs in depth! This is the resulting code (which you can find on GitHub):

import scala.concurrent.duration._
import scala.scalajs.js
import scala.scalajs.js.Dynamic.{global ⇒ g}

object Demo extends js.JSApp {

  def main(): Unit = {

    println(s"starting with node version ${g.process.version}")

    val tessel    = Tessel()
    val relayMono = RelayMono()

    val relay = relayMono.use(tessel.port.A)

    relay.onReady {
      println("Relay ready!")

      // toggle the two relay channels at different rates
      js.timers.setInterval(2.seconds) {
        relay.toggle(1)
      }

      js.timers.setInterval(1.seconds) {
        relay.toggle(2)
      }
    }

    relay.onLatch { (channel, value) ⇒
      println(s"Latch on relay channel $channel switched to $value")

      // mirror the relay state on the matching LED
      if (value)
        tessel.led(channel + 1).on()
      else
        tessel.led(channel + 1).off()
    }
  }
}

As you can see, it’s not very different from the dynamic example, except that I now get help from the editor and compiler.

Why do this, again?

Now you might argue that in both cases the code looks more or less like JavaScript, so why go through the trouble?

It’s true that, superficially, JavaScript and Scala look very similar in these examples. But underneath there is Scala’s type system at work, and this is for me the main reason to want to use that language.

This said, there is more, such as:

  • Immutability by default. I like this because it helps reduce errors and works great with functional programming idioms.
  • Collections. Scala has a very complete collection library, including immutable collections (but you can also use mutable collections).
  • Functional programming. Scala was designed for functional programming from the get-go, and there are some pretty neat third-party functional libraries too.

And I could go on with features like case classes, pattern matching and destructuring, for-comprehensions, and more. But I should also mention a few drawbacks of using Scala instead of JavaScript:

  • Harder language. Scala is a super interesting language, but no matter how you look at it, it is a bigger beast than JavaScript.
  • Executable size. Scala.js has an amazing optimizer which strips pretty much any unused code from the resulting JavaScript[7]. Still, you will likely end up with files which are larger than what you would get by writing JavaScript by hand. So expect your app to yield uncompressed JavaScript files on the order of a few hundred KB (much smaller when compressed). Tessel doesn’t seem to have any issues with that so far, so it might not be a problem at all, but it’s worth keeping an eye on this as Tessel doesn’t have gigabytes of RAM.
  • Compilation step. There is a compilation and optimization step in addition to publishing the software to Tessel. For my very simple demo, this takes a couple of seconds only. For larger projects, the time will increase. Now this is very manageable thanks to sbt’s incremental compilation, and if you consider that pushing a project to Tessel can take several seconds anyway, I would say that right now it’s not an issue.

So who would want to program Tessel in Scala? Probably not everybody, but it’s a great option to have if you already know the language or are interested in learning it, especially if you are going to write large amounts of code.

What’s next?

I plan to continue playing with Tessel 2 and Scala. The next step is to try to do something fun (and maybe even useful) beyond blinking LEDs and relays!

  1. This trend is definitely in the air. Read for example Why JavaScript is Good for Embedded Systems.  ↩

  2. Tessel 2 is fairly beefy compared to an Arduino board, for example: it features a 580 MHz CPU, built-in 802.11 b/g/n Wi-Fi, and 64 MB of RAM and 32 MB of Flash. You can add more storage via USB.  ↩

  3. From Code of Conduct/About the Tessel Project/How to Get Your Issue Fixed. It is all the more impressive that they managed to make and ship such cool hardware and software.  ↩

  4. There is one exception, which I had missed in an earlier version of this post, which is access to JavaScript arrays. If you only rely on dynamic calls, you have to cast to js.Array[_], or use the selectDynamic() method. Here I chose the latter way. Things look nicer when you use facades.  ↩

  5. Under the hood, this is thanks to Scala’s Dynamic support.  ↩

  6. Scala.js facades are a lot like TypeScript declaration files.  ↩

  7. Also known as DCE for Dead Code Elimination.  ↩

Generalized type constraints in Scala (without a PhD)


Not long ago I stumbled upon the signature of the flatten method on Option:

def flatten[B](implicit ev: A <:< Option[B]): Option[B]

I don’t know about you, but I knew about implicit parameter lists, implicit resolution, and even type bounds. But this funny <:< “sad-with-a-hat” [1] operator [2] was entirely new to me!

Smart people [3] wrote about it years ago, but it’s clear that we are talking about a feature which is not well-known and poorly documented, even though it has been available since Scala 2.8. So this article is about figuring out what it means and how it works.

The following deconstruction turns out to be fairly long, but even though <:< itself may not be useful to every Scala programmer, it touches a surprisingly large number of Scala features which most Scala programmers should know.

What it does and how it’s useful

If you search the Scala standard library, you find a few other occurrences of <:<, in particular:

  • on Option:

    def orNull[A1 >: A](implicit ev: Null <:< A1): A1
  • on Traversable (via traits like GenTraversableOnce):

    def toMap[K, V](implicit ev: A <:< (K, V)): Map[K, V]
  • on Either:

    def joinRight[A1 >: A, B1 >: B, C](implicit ev: B1 <:< Either[A1, C]): Either[A1, C]
  • on Try:

    def flatten[U](implicit ev: T <:< Try[U]): Try[U]

You notice that, in all these examples, <:< is used in the same way:

  • there is an implicit parameter list, with a single parameter called ev
  • the type of this parameter is of the form Type1 <:< Type2

The lowdown is that this pattern tells the compiler:

Make sure that Type1 is a subtype of Type2, or else report an error.

This is part of a feature called generalized type constraints. [4] There is another similar construct, =:=, which tells the compiler: [5]

Make sure that Type1 is exactly the same as Type2, or else report an error.

In what follows, I am focusing on <:< which turns out to be more useful, but just know that =:= is a thing and works in a very similar way.
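As a quick sanity check of that description, here is a throwaway helper (mustBeSame is hypothetical, not from the standard library) that only compiles when its two type parameters are provably the same:

```scala
// Compiles only when A and B are provably the same type.
def mustBeSame[A, B](implicit ev: A =:= B): Boolean = true

assert(mustBeSame[Int, Int])
// mustBeSame[Int, Long]  // error: Cannot prove that Int =:= Long.
```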

The why and how of this feature is the subject of the rest of this article! So for now, let’s take this as a recipe, a trick if you will, while we look at how this can be useful in practice.

Let’s start with flatten on Option:

def flatten[B](implicit ev: A <:< Option[B]): Option[B]

What does flatten do, as per the documentation? It removes a level of nesting of options:

scala> val oo: Option[Option[Int]] = Some(Some(42))
oo: Option[Option[Int]] = Some(Some(42))

scala> oo.flatten
res1: Option[Int] = Some(42)

This doesn’t make much sense if the type parameter A of Option is not, itself, an Option-of-something. So what should happen if you call flatten on, say, an Option[String]? I see two possibilities:

  1. The flatten method returns None.
  2. The compiler reports an error.

The authors of the Scala standard library picked option 2, and I think that it’s a good choice, because most likely calling flatten in this case is not what the programmer intends. And lo and behold, the compiler doesn’t let this pass:

scala> val oi: Option[Int] = Some(42)
oi: Option[Int] = Some(42)

scala> oi.flatten
<console>:21: error: Cannot prove that Int <:< Option[B].

So we have a generic type, Option[+A], which has a method, flatten, which can only be used if the type parameter A is itself an Option. All the other methods (except orNull which is similar to flatten) can be called: map, get, etc. But flatten? Only if the type of the option is right!

One thing to realize is that we have something unusual here: a method which the compiler won’t let us call, not because we pass incorrect parameters to the method (in fact flatten doesn’t even take any explicit parameters), but based on the value of a type parameter of the underlying Option class. This is not something you see in Java, and you have probably rarely seen it in Scala.

Looking again at the signature of flatten, we can see how the recipe is applied: implicit ev: A <:< Option[B] reads “make sure that A is a subtype of Option[B]”, and, since A stands for the parameter type of Option, we have:

  • in the first case, “make sure that Option[Int] is a subtype of Option[B]”
  • in the second case, “make sure that Int is a subtype of Option[B]”

Obviously, Option[Int] can be a subtype of an Option[B], where B = Int (or B = AnyVal, or B = Any). On the flip side, there is just no way Int can be a subtype of Option[B], whatever B might be. So the recipe works, and therefore the constraint works. [6]

To get the hang of it, let’s look at another nice use case, toMap on Traversable:

def toMap[K, V](implicit ev: A <:< (K, V)): Map[K, V]

Translated into English: you can convert a Traversable to a Map with toMap, but only if the element type is a tuple (K, V). This makes sense, because a Map can be seen as a collection of key/value tuples. It wouldn’t make much sense to create a Map out of a sequence of 5 Ints, for example.

Similar rationales apply to the few other uses of <:< in the standard library, which all come down to constraining methods to work with a specific contained type only.
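The same recipe works in user code too. Here is a sketch with a hypothetical Box class: sum is only callable when the element type is Int, and the evidence value ev, being an A <:< Int, even doubles as an A => Int function:

```scala
class Box[A](val items: List[A]) {
  // sum compiles only if A is (a subtype of) Int;
  // ev is usable as an A => Int function in map().
  def sum(implicit ev: A <:< Int): Int = items.map(ev).sum
}

assert(new Box(List(1, 2, 3)).sum == 6)
// new Box(List("a", "b")).sum  // error: Cannot prove that String <:< Int.
```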

In light of these examples, I find that applying this recipe is easy, even though the syntax is a bit funny. But I can think of a few questions:

  1. Can’t we just use type bounds, which I thought existed to enforce this kind of type constraint?
  2. If this is a pattern rather than a built-in feature, why does <:< look so much like an operator? Does the compiler have special support for it?
  3. How does this whole thing even work?
  4. Is there an easier way to achieve the same result?

Let’s look into each of these questions in order.

Question 1: Can’t we just use type bounds?

Lower and upper bounds

Type bounds cover lower bounds and upper bounds. [7] These are well explained in the book Programming in Scala, so I won’t cover the basics here, but I will present some perspective on how they work.

As a reminder, lower and upper bounds are expressed with builtin syntax: >: and <: (spec). You can read:

  • T >: U as “type T is a supertype of type U” or “type T has type U as lower bound”
  • T <: U as “type T is a subtype of type U” or “type T has type U as upper bound”

A puzzler

Let’s consider the following:

scala> def tuple[T, U](t: T, u: U) = (t, u)
tuple: [T, U](t: T, u: U)(T, U)

My tuple function simply returns a tuple of the two values passed, whatever their types might be. Granted, it’s not very useful!

What are T and U? They are type variables: they stand for actual types that will be decided at each call site (each use of the function in the source code). Here both T and U are abstract: we don’t know what they will be when we write the function. For example we don’t say that U is a Banana, which would be a concrete type.

If we pass String and Int parameters, we get back a tuple (String, Int) in the Scala REPL:

scala> tuple("Lincoln", 42)
res1: (String, Int) = (Lincoln,42)

Now let’s consider the following modification, which is a naive attempt at enforcing type constraints with type bounds:

def tupleIfSubtype[T <: U, U](t: T, u: U) = (t, u)

I know, it’s starting to be like an alphabet (and symbol) soup! But let’s stay calm: the only change is that instead of specifying just T as type parameter, we specify T <: U, which means “T must be a subtype of U”.

The intent of tupleIfSubtype is to return a tuple of the two values passed, like tuple above, but fail at compile time if the first value is not a subtype of the second value.

So does the newly added constraint work? Do you think that the compiler will accept to compile this?

tupleIfSubtype("Lincoln", 42)

Before knowing better, I would have thought that the compiler:

  • would decide that T = String
  • would decide that U = Int
  • would see the type constraint T <: U, which translates into String <: Int
  • would fail compilation because, obviously, String is not a subtype of Int

But it turns out that this actually compiles just fine!

How can this be? Is the constraint not considered? Is it a bug in the compiler? A weird edge case? Bad language design? Or maybe, with T <: U, the U is not the same as the second U in the type parameter section? This can quickly be proven false:

scala> def tupleIfSubtype[T <: V, U](t: T, u: U) = (t, u)
<console>:7: error: not found: type V
       def tupleIfSubtype[T <: V, U](t: T, u: U) = (t, u)

So the two Us are, as seemed to make sense intuitively, bound to each other (they refer to the same type).

The answer to this puzzler turns out to be relatively simple: it has to do with the way type inference works, namely that type inference solves a constraint system (spec).

What happens is that yes, I do pass String and Int, but it doesn’t follow that T = String and U = Int. Instead, the effective T and U are the result of the compiler working its type inference algorithm, given:

  • the types of the parameters we actually pass to the function,
  • the constraints expressed in the type parameter section,
  • and, in some cases, the expression’s return type.

If I write:

scala> def tuple[T, U](t: T, u: U) = (t, u)
tuple: [T, U](t: T, u: U)(T, U)

scala> tuple("Lincoln", 42)
res3: (String, Int) = (Lincoln,42)

then yes: T = String and U = Int because there are no other constraints. But when I introduce an upper bound, there is a constraint, and therefore a constraint system. The compiler resolves it and obtains T = String and U = Any:

scala> def tupleIfSubtype[T <: U, U](t: T, u: U) = (t, u)
tupleIfSubtype: [T <: U, U](t: T, u: U)(T, U)

scala> tupleIfSubtype("Lincoln", 42)
res4: (String, Any) = (Lincoln,42)

We can verify that the resulting types satisfy the constraints: [8]

  • String is a String of course
  • Int is a subtype of Any
  • String is also a subtype of Any

Phew! So this makes sense. It’s “just” a matter of understanding how type inference works when type bounds are present.

In the process we have learned that <: and >:, when used with abstract type parameters, do not necessarily produce results which are very useful, because the compiler can easily infer Any (or AnyVal or AnyRef) as solutions to the constraint system. [9]

Question 2: Why does <:< look so much like an operator?

Let’s dig a little deeper to understand how <:< works under the hood. Here is a simple type hierarchy used in the examples below:

trait Fruit
class Apple  extends Fruit
class Banana extends Fruit

Parameter lists and type inference

Let’s start with a couple more things you need to know in Scala:

  • Functions can have more than one parameter list.
  • Type inference operates parameter list per parameter list from left to right.

In particular, an implicit parameter list can use types inferred in previous parameter lists.
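A small illustration (with a throwaway Show typeclass, hypothetical and unrelated to <:<): the implicit lookup in the second parameter list uses the T that inference already fixed from the first list.

```scala
trait Show[T] { def show(t: T): String }

// T is inferred from the first parameter list; the implicit search in
// the second list then looks for a Show of that specific type.
def display[T](t: T)(implicit s: Show[T]): String = s.show(t)

implicit val showInt: Show[Int] = new Show[Int] {
  def show(i: Int): String = s"Int($i)"
}

assert(display(42) == "Int(42)")
```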

So let’s write the solution, without necessarily understanding it fully yet:

def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <:< U) = (t, u)

This function has two parameter lists:

  • (t: T, u: U)
  • (implicit ev: T <:< U)

Because type inference goes parameter list by parameter list, let’s start with the first one. You notice that there are no >: or <: type bounds! So:

  • T is whatever specific type t has (say T = Banana)
  • U is whatever specific type u has (say U = Fruit)
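Before going further, we can check that this definition behaves as desired (the small Fruit hierarchy is redefined here so the snippet is self-contained):

```scala
trait Fruit
class Banana extends Fruit

def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <:< U) = (t, u)

// Compiles: T = Banana, U = Fruit, and Banana is a subtype of Fruit.
val ok = tupleIfSubtype(new Banana, new Banana: Fruit)

// Does not compile: "Cannot prove that Banana <:< String."
// tupleIfSubtype(new Banana, "nope")
```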

Infix types

Looking at the second parameter list, we have to clear a hurdle: what kind of syntax is T <:< U? This notation is called an infix type (spec). “Infix” just means that a type appears in the middle of two other types, the same way the + operator appears in the middle in 2 + 2. The type in the middle (the infix type proper) can be referred to as an infix operator. Instead of this operator being a method, as is generally the case in Scala, it is a type.

Let’s look at examples. You probably know types from the standard library such as:

Map[String, Fruit]
Either[String, Boolean]

These exact same types can be written:

String Map Fruit
String Either Boolean

The infix notation makes the parametrized type look like an operator, but an operator on types instead of values. Other than that, it’s just an alternate syntax, and really nothing to worry about!
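The notation works with any type that takes two parameters, including one of your own; the Pair alias below is made up just for illustration:

```scala
// A hypothetical two-parameter type alias...
type Pair[A, B] = (A, B)

// ...which can be written in prefix or infix form interchangeably:
val p1: Pair[String, Int] = ("one", 1)
val p2: String Pair Int   = ("two", 2)
```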

So based on the above:

T <:< U

means the same as:

<:<[T, U]

A symbolic type name

Now, what is a <:<? It’s a type: the same kind of stuff as Map or Either, in other words, typically a class or a trait. It’s just that this is a symbolic name instead of an alphabetic identifier like Map. It could as well have been called SubtypeOf, and maybe it should have been!
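We can even give it such an alphabetic name ourselves; the SubtypeOf alias below is hypothetical, just to show that only the symbolic name is exotic:

```scala
// A made-up alphabetic alias for the standard <:< type.
type SubtypeOf[A, B] = A <:< B

// The evidence parameter now reads almost like English.
def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T SubtypeOf U) = (t, u)

tupleIfSubtype("Lincoln", "President": CharSequence)  // ok: String <: CharSequence
```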

The implicit parameter

So once we reach the second (and implicit) parameter list:

(implicit ev: T <:< U)

we see that there is a parameter of type <:<, which itself has two type parameters, T and U. These are the same T and U as in the first parameter list, and they are already known because type inference has done its magic there. Concretely, T is now assigned the type Banana and U the type Fruit!

What the implicit parameter list says is this: “Find me, somewhere in the implicit search path, an implicit val, def, or class of type <:< which satisfies <:<[T, U]”. [10] And because T and U are now known, we need to find an implicit match for <:<[Banana, Fruit].

The trick is to manage to have an implicit definition in scope which matches only if there is a subtype relationship between T and U. For example:

  T        U        Compiler happiness level
  ------   ------   ------------------------
  Banana   Fruit    happy
  Apple    Fruit    happy
  Int      AnyVal   happy
  Apple    Banana   unhappy
  String   Int      unhappy

If we manage to create such an implicit definition, the constraint mechanism works. And we already know that the clever engineers who devised this have found a way to create such an implicit definition!

By the way, we can play with this in the REPL using the standard implicitly function, which returns an implicit value for the given type parameter if one is found:

implicitly[Banana <:< Fruit]  // ok
implicitly[Apple  <:< Fruit]  // ok
implicitly[Int    <:< AnyVal] // ok
implicitly[Apple  <:< Banana] // not ok
implicitly[String <:< Int]    // not ok

To summarize, we now have a pretty good level of understanding and we know that:

  • we are talking about a library feature
  • which relies on an implicit parameter
  • with a funny symbolic type operator <:<.

And we also know that the magic that makes it all work lies in the search for a matching implicit definition: if it is found, the subtyping relationship holds, otherwise it doesn’t and the compiler reports an error.

Question 3: How does this whole thing even work?

We could stop here and be happy to use <:< like a recipe, as if it was a core language feature. But that wouldn’t be very satisfying, would it? After all, we still miss the deeper understanding of how that magic implicit is defined, and why an implicit search for it may or may not match it. So let’s keep going!

The implementation

Let’s look at the implementation of <:<, which we find in the Scala Predef object [11]:

@implicitNotFound(msg = "Cannot prove that ${From} <:< ${To}.")
sealed abstract class <:<[-From, +To] extends (From ⇒ To) with Serializable

private[this] final val singleton_<:< = new <:<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <:< A = singleton_<:<.asInstanceOf[A <:< A]

Wow! Can we figure it out? Let’s try.

Which implicit?

Let’s think about a simpler case of implicit search:

def makeMeASandwich(implicit logger: Logger) = ...

implicit def findMyLogger: Logger = ...

val mySandwich = makeMeASandwich

The compiler, when you write makeMeASandwich without an explicit parameter, looks for an implicit in scope of type Logger. Here, the obvious matching implicit is findMyLogger, because it returns a Logger. So the compiler selects the implicit and in effect rewrites your code as:

val mySandwich = makeMeASandwich(findMyLogger)

The same mechanism is at work with implicit ev: T <:< U: the compiler must find an implicit of type T <:< U (or <:<[T, U] which is exactly the same). And there is only one implicit definition with type <:<-of-something in the whole standard library, which is:

implicit def $conforms[A]: A <:< A

Now there is a bit of a twist, because the implicit is of type <:<[A, A], with a single type parameter A, which in addition is abstract. [12] Anyhow, this means that our function parameter ev of type <:<[T, U] must, somehow, “match” with <:<[A, A].

Let’s ask ourselves: what does it take for this implicit of type <:<[A, A] to be successfully selected by the compiler? The answer is that one should be able, for some type A to be determined, to pass a value of type <:<[A, A] to the parameter of type <:<[T, U]. Another way to say this is that <:<[A, A] must conform to <:<[T, U]. If we can’t do this, the implicit search will fail.


Variance

How does this conformance work? This takes us to the notion of variance, which is always a fun thing. Consider a Scala immutable Seq. It is defined as trait Seq[+A]. The little + says that if I require a Seq[Fruit], I can pass a Seq[Banana] just fine:

def takeFruits(fruits: Seq[Fruit]) = ...
takeFruits(Seq(new Banana))

This is called covariance (the subtyping of the type argument goes in the same direction as that of the enclosing type). Without the notions of covariance and contravariance (where subtyping of the type argument goes in the opposite direction), you:

  • either can never write code like this (everything is invariant)
  • or you have an unsound type system [13]

Besides collections, functions are another example where covariance and contravariance matter. Say the following process function expects a single parameter, which is a function of one argument:

def process(f: Banana ⇒ Fruit)

I can of course pass to process a function with these exact same types:

def f1(f: Banana): Fruit = f

But Scala’s support for subtyping also applies to functions: a function can be a subtype of another function. So I can pass a function with types different from Banana and Fruit without breaking expectations as long as the function:

  • takes a parameter which is a supertype of Banana
  • returns a value of a subtype of Fruit

For example:

def f2(f: Fruit): Apple = new Apple

This is the magic of variance, and you can convince yourself that it makes sense from the point of view of the process function: expectations won’t be violated.

Functions are not a special case in Scala from this point of view: a function of one parameter is defined (in a simplified version) as a trait with the function parameter as contravariant and the result as covariant:

trait Function1[-From, +To] { def apply(from: From): To }
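We can verify these variance rules directly by assigning one function type to another (the Fruit hierarchy is redefined here for self-containment):

```scala
trait Fruit
class Apple  extends Fruit
class Banana extends Fruit

// A Fruit => Apple is a perfectly valid Banana => Fruit:
// the parameter widens (contravariance), the result narrows (covariance).
val fa: Fruit  => Apple = _ => new Apple
val bf: Banana => Fruit = fa

// The reverse does not compile: a Banana => Apple cannot stand in for
// a Fruit => Apple, since it cannot accept an arbitrary Fruit.
// val bad: Fruit => Apple = (b: Banana) => new Apple
```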

Putting everything together

After this detour in variance land, let’s get back to <:< and the implicit parameter. The implicit <:<[A, A] will conform to the parameter <:<[T, U] if it follows the variance rules. So what’s the variance on <:<[T, U] in Predef?

<:<[-From, +To]

This is the same as Function1[-From, +To] and in fact <:< extends Function1! So our problem comes down to the following question: if somebody requires a function:

T ⇒ U

what constraints must be satisfied so I can pass the following function:

A ⇒ A

With variance rules, we know it will work if:

  • A is a supertype of T
  • and A is a subtype of U

Written in terms of bounds:

T <: A <: U

Which means of course that T <: U: T must be a subtype of U!

To summarize the reasoning: the only eligible implicit definition in scope which can possibly be selected by the compiler to pass to our function is selected if and only if T is a subtype of U! And that’s exactly what we were looking for! [14]

You can look at this from a slightly more general angle: a function A ⇒ A can only be passed where a function T ⇒ U is expected if T is a subtype of U. [15] You can in fact test the matching logic very simply with the built-in identity function:

val f: Banana ⇒ Fruit  = identity // ok
val f: Fruit  ⇒ Banana = identity // not ok

The same works with $conforms, which returns an <:<, which is also an identity function:

val f: Banana ⇒ Fruit  = $conforms // ok
val f: Fruit  ⇒ Banana = $conforms // not ok

So it is a neat trick that the library authors [16] pulled off here, combining implicits and conformance of function types to implement constraint checking.

The nitty-gritty

The rest of the related code in Predef is about defining the actual <:< type and creating a singleton instance returned by def $conforms[A]: when the implicit search matches, a real value must be produced after all.

You could write it minimally (using <::< in these attempts so as to not clash with the standard <:<):

sealed trait <::<[-From <: To, +To] extends (From ⇒ To) {
  def apply(x: From): To = x
}

implicit def $conforms[A]: A <::< A =
  new <::<[A, A] {}

But oops, the compiler complains:

scala> def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <::< U) = (t, u)
<console>:23: error: type arguments [T,U] do not conform to trait <::<'s type parameter bounds [-From <: To,+To]
       def tupleIfSubtype[T, U](t: T, u: U)(implicit ev: T <::< U) = (t, u)

The good news is that the following version, using an intermediate class, works:

sealed trait <::<[-From, +To] extends (From ⇒ To)

final class $conformance[A] extends <::<[A, A] {
  def apply(x: A): A = x
}

implicit def $conforms[A]: A <::< A =
  new $conformance[A]

So this works great, with a caveat: every time you use my version of <::<, a new instance of the $conformance class is created. Since we just want an identity function, which works the same for all types and doesn’t hold state, it would be good to use a singleton so as to avoid unnecessary allocations. We could try using an object, since that’s how we do singletons in Scala, but that’s a dead end because objects cannot take type parameters:

scala> implicit object Conforms[A] extends (A ⇒ A) { def apply(x: A): A = x }
<console>:1: error: ';' expected but '[' found.
implicit object Conforms[A] extends (A ⇒ A) { def apply(x: A): A = x }

So in the end the standard implementation cheats by creating an untyped singleton using Any, and casting to [A <:< A] in the implementation of $conforms. Here is my attempt, which works fine:

private[this] final object Singleton_<::< extends <::<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <::< A =
  Singleton_<::<.asInstanceOf[A <::< A]

The actual Scala implementation opts for using a val instead of an object (maybe to avoid the cost associated with an object’s lazy initialization):

private[this] final val singleton_<:< = new <:<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <:< A =
  singleton_<:<.asInstanceOf[A <:< A]

We are only missing one last bit:

@implicitNotFound(msg = "Cannot prove that ${From} <:< ${To}.")
sealed abstract class <:<[-From, +To] ...

This helps provide the user with a nice message when the implicit is not found. From a syntax perspective, it is a regular annotation, which applies to the abstract class <:<. The annotation is known by the compiler. [17]
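The annotation is not reserved for Predef: you can put it on your own implicits. Here is a sketch with a hypothetical Show type class:

```scala
import scala.annotation.implicitNotFound

// Hypothetical type class; the ${A} placeholder is filled in by the compiler.
@implicitNotFound("No Show instance found for type ${A}.")
trait Show[A] { def show(a: A): String }

implicit val showInt: Show[Int] = new Show[Int] { def show(a: Int) = a.toString }

def render[A](a: A)(implicit s: Show[A]): String = s.show(a)

render(42)      // "42"
// render(true) // error: No Show instance found for type Boolean.
```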

So here we are: the implementation is explained! It’s a bit trickier than it should be in order to prevent extra allocations. I confess that I am a bit disappointed that there doesn’t seem to be a way to avoid an asInstanceOf: even though it’s local to the implementation and therefore the lack of safety remains under control, it would be better if it could be avoided.

An implicit conversion

One thing you might wonder is what to do with the ev parameter. After all, a value must be passed to the function when the implicit is found (when it’s not found, the compiler blows up so ev doesn’t need an actual value).

A first answer is that you don’t absolutely need to use it. It’s there first so the compiler can check the constraint. That’s why it’s commonly called ev, for “evidence”: its presence stands there as a proof that something (an implicit) exists.

Nonetheless, ev must have a value. What is it? It’s the result of the $conforms[A] function, which is of course of type <:<[T, U]. And we have seen above that <:< extends T ⇒ U. So the result of $conforms[A] is a function, which takes an A and returns an A, that is, an identity function. And it not only returns a value of the same type A, but it actually returns the same value which was passed (that’s the idea of an identity function).

And you see that in the implementation:

def apply(x: Any): Any = x

It follows that ev has for value an identity function from T to U: it takes a value t of type T and returns that same value but with type U. This is possible, and makes sense, as we know that T is a subtype of U, otherwise the implicit wouldn’t have been found.

But there is more: ev is also an implicit conversion from T to U (from Banana to Fruit). How so? Because it has the keyword implicit in front of it, that’s why!

To contrast with regular type bounds, if you write:

def tupleIfSubtype[T <: U, U](t: T, u: U) = ...

the compiler knows that T is a subtype of U, thanks to the native semantics of <:. But with <:<, the compiler knows nothing of the sort based on the type parameter section.

However the presence of the implicit ev function makes it possible to use the value t of type T as a value of type U. The subtype relationship can be seen as an implicit conversion. This is much safer than using t.asInstanceOf[U]. So you can write:

def tToU[T, U](t: T, u: U)(implicit ev: T <:< U): U = t

You could also be extra-explicit and write:

def tToU[T, U](t: T, u: U)(implicit ev: T <:< U): U = ev(t)

Without the implicit conversion, the compiler complains:

scala> def tToU[T, U](t: T, u: U): U = t
<console>:10: error: type mismatch;
 found   : t.type (with underlying type T)
 required: U
       def tToU[T, U](t: T, u: U): U = t

You can see how Option.flatten makes use of the ev function:

def flatten[B](implicit ev: A <:< Option[B]): Option[B] =
  if (isEmpty) None else ev(this.get)
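We can see the constraint at work from the caller’s side:

```scala
// A is Option[Int], and Option[Int] <:< Option[B] holds with B = Int:
val flat = Some(Some(42)).flatten   // Some(42)

// Does not compile: "Cannot prove that Int <:< Option[B]."
// Some(42).flatten
```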

In summary, all these features fall together to produce something that makes a lot of sense and is useful.

Question 4: Is there an easier way to achieve the same result?

There is at least one other way to achieve something like what <:< does. The idea is that a method such as flatten does not need to be included on the base class or trait, in this case Option. Instead, Scala can, via implicit conversions, achieve in effect extension methods (AKA the “extend my library” pattern).

So say that such an extension method is only available on values of type Option[Option[T]]:

implicit class OptionOption[T](val oo: Option[Option[T]]) extends AnyVal {
  def flattenMe: Option[T] = oo flatMap identity
}

If we try to apply it to Some(Some(42)), the method is found and the flattening works:

scala> Some(Some(42)).flattenMe
res0: Option[Int] = Some(42)

If we try to apply it to Some(42), the method is not found and the compiler reports an error:

scala> Some(42).flattenMe
<console>:13: error: value flattenMe is not a member of Some[Int]

But I see a few differences with the <:< operator:

  • You need to create one implicit class for each type supporting a conversion. In the case of Option, for example, you need one implicit class taking an Option[Option[T]] to support flatten, and another implicit class to support orNull. So this requires a bit more boilerplate per method than <:<.
  • I am not sure whether there is something similar to @implicitNotFound to report a better error in case of a problem.

So why not do it this way? I think that a good case can be made that it is easier to understand in the case of the relatively simple examples we have seen so far.

UPDATE 2015–12–10: Somebody kindly pointed out that at the time generalized type constraints were implemented, Scala didn’t yet have value classes or implicit classes. Missing value classes meant boxing overhead when running extension methods, while missing implicit classes just meant more boilerplate. So using an implicit value class as I did above was not a great option at the time.

On the other hand, <:< is a more flexible library feature which you can reuse easily and even combine with other implicits, like in this example using Shapeless: [18]

def makeJava[F, A, L, S, R](f: F)(implicit
  ftp: FnToProduct.Aux[F, L => S],
  ev: S <:< Seq[R],
  ffp: FnFromProduct[L => JList[R]]
) = ???
Finally, when using type bounds, the constraints expressed with <: and >: can only apply to the method type parameters (or class type parameters when they are used on a class). This is very useful, as we have seen. But when using <:<, you can constrain any two types in scope, and even impose multiple such constraints. Your imagination is the limit:

trait T[A, B] {

  type C
  type D

  def constrainTwoTraitParams         (implicit ev: A <:< B) = ()
  def constrainTraitParamAndTypeMember(implicit ev: A <:< C) = ()
  def constrainTwoTypeMembers         (implicit ev: C <:< D) = ()
  def constrainMore[Y](c: Y)          (implicit ev1: A <:< B, ev2: Y <:< C) = ()
}

class Impl extends T[Banana, Fruit] {
  type C = Fruit
  type D = String
}

You can even go further and constrain not only these types directly, but higher-order types, as in this (math- and symbol-heavy) example from Miles Sabin:

def size[T](t: T)(implicit ev: (¬¬[T] <:< (Int ∨ String))) = ???

In this case, the constraint is not directly on the T type parameter, but on ¬¬[T].

This might be, after all, how the term “generalized type constraint” gets its name.


We have seen how regular type bounds:

  • behave when using abstract type parameters
  • but don’t work to actually enforce certain useful constraints.

We have also seen how we can use instead a generalized type constraint expressed with <:<:

  • to implement methods which can only be used when types are aligned in a certain way
  • and how <:< is not a built-in feature of the compiler, but instead a library feature implemented via a smart trick involving implicit search and type conformance.

Finally, we have considered:

  • how the simple use cases in the standard library could be implemented differently
  • but also how <:< is a more general tool.

So is <:< worth it? Should it be part of the standard library, and should Scala developers learn it?

I think that the feature suffers from the fact that it is not properly documented, explained, and put in perspective. It also suffers from being a symbolic name with no agreed-upon way to pronounce it!

The standard library uses of <:< could be replaced with “extension methods”, which would achieve the same result via Scala features that are easier to understand and familiar to most Scala programmers. I think that this argues against the presence of <:< in the standard library, especially at the level of Predef; if this were introduced today, my inclination would be to recommend leaving it to third-party libraries such as Shapeless, which benefit the most from this kind of advanced feature.

On the plus side, when used as a recipe, <:< is easy to understand and useful, and I can’t help being impressed that generalized type constraints are implemented at the library level, and that they can emerge from powerful underlying language features such as type inference and implicits.

This is typical of Scala, and in line with the principle of Martin Odersky that it is better to keep the core language small when possible. So even though the explanation of how <:< works might seem a bit tricky, you can take comfort in thinking that in other languages this might be compiler code, not library code. But I also understand how some programmers [19] might be bothered by all the machinery behind features like this.

As for me, I am keeping generalized type constraints in my toolbox, but I like seeing the feature as a gateway to a more in-depth understanding of Scala. I hope this post will help others along this path as well!

Did I get anything wrong? Please let me know!

  1. Other suggestions include “Madonna wearing a button-down shirt” and “Angry Donkey”!  ↩

  2. It is valid to call this an operator, even though it is not built into the compiler, and is not an operator on values like +: it is instead an operator on types. In fact the Scala spec calls this an infix operator.  ↩

  3. Using generalized type constraints - How to remove code with Scala 2.8.  ↩

  4. I haven’t found a good explanation for the adjective generalized. This makes you think that there are more specific type constraints. But which are those then?  ↩

  5. It seems that there was another <%< operator as well, but it’s nowhere to be found in Scala 2.11. I suspect that, since it was related to the concept of view bounds, which are being deprecated, and probably had no use in the Scala standard library, it was removed at some point.  ↩

  6. The authors of the standard library could have used =:= to say that the type has to be exactly an Option[B], but using the subtyping relationship allows the result of the expression to be a supertype. Assuming Banana <: Fruit:  ↩

    scala> Some(Some(new Banana)).flatten: Option[Fruit]
    res2: Option[Fruit] = Some(Banana())
  7. “Lower bound” and “upper bound” refer to the type hierarchy: if you draw a type hierarchy with the supertypes at the top and subtypes at the bottom, “lower” means being closer to the bottom, and “upper” means closer to the top. So a “lower bound” for a type means the type cannot be under that. Similarly, an “upper bound” means the type cannot be above that.  ↩

  8. The compiler could have chosen the solution Any / Any, or AnyRef / Any. But these would be less useful and the compiler tries to be more specific when it can.  ↩

  9. The Typelevel team in particular wants to address that kind of not-very-useful type inference.  ↩

  10. That’s how all implicit searches work, see Where does Scala look for implicits?.  ↩

  11. In a 2014 commit, the implementation switched to $conforms instead of conforms to avoid accidental shadowing.  ↩

  12. It is a bit unusual to see an implicit definition which is parametrized with an abstract type parameter. Martin Odersky commented on this in a blog post: “The new thing in 2.8 is that implicit resolution as a whole has been made more flexible, in that type parameters may now be instantiated by an implicits search. And that improvement made these classes useful.”  ↩

  13. Mutable Scala collections and arrays, in particular, are invariant, so you cannot assign a mutable.ArrayBuffer[Banana] to a mutable.ArrayBuffer[Fruit], or an Array[Banana] to an Array[Fruit]. Immutable Scala collections are covariant, because it is convenient and safe for them to be. Java arrays are covariant and therefore unsafe!  ↩

  14. The compiler needs to be able to figure out conformance of types outside of implicit search, including every time you pass a parameter to a function. So it’s relatively easy to imagine how the compiler goes through the implicit search path, checking each available implicit, and pondering: “Does this particular implicit have a type which conforms to the required implicit parameter type? If so, I’ll use it, otherwise I’ll continue my search (and fail if the search ends without a match).”.  ↩

  15. In versions of Scala prior to 2.8, the predefined identity function was defined as implicit, and you could use it to implement generalized type constraints. However this early implementation had issues related to implicit search, therefore a new solution was implemented in 2.8 and <:< was introduced. But in fact <:< acts exactly like an implicit identity function under another name! James Iry commented on this topic:  ↩

    BTW, prior to 2.8 the idea could more or less be expressed with

    def accruedInterest(convention: String)(implicit ev: I ⇒ CouponBond): Int = ...

    I say more or less because ev could be supplied by any implicit function that converts I to CouponBond. Normally you expect ev be the identity function, but of course somebody could have written an implicit conversion from say DiscountBond to CouponBond which would screw things up royally.

  16. Jason Zaugg appears to be the mastermind behind it.  ↩

  17. Here is a short blog post on this annotation.  ↩

  18. For more uses of <:<, see Unboxed union types in Scala via the Curry-Howard isomorphism by Miles Sabin.  ↩

  19. See Yang Zhang’s post, which made some noise a while back.  ↩

iPhone 6: Pay less with a little-known T-Mobile plan

T-Mobile SIM Kit
T-Mobile SIM Kit

TL;DR: If you have the cash to buy an unlocked iPhone 6 upfront, don’t mind running on the T-Mobile network, and mostly care about data as opposed to voice, you can save well over $500 over a period of two years compared to mainstream plans by AT&T, Verizon or even T-Mobile’s flagship plans.

NOTE: The following post is specific to the US smartphone market.

Over the last 2 years I have been on an AT&T business plan [1] which was not a bad deal by US standards:

  • Upfront cost for the iPhone 5: $363.74 [2]
  • Monthly cost: $74 ($69.99 plus taxes, fees and phone subsidy)
  • Monthly data: 3 GB
  • Contract duration: 2 years

I usually stayed under the included 3 GB, but occasionally went over and had to pay an extra $10 for an additional 1 GB. I made very limited use of voice and text.

As I wanted to get a new iPhone 6 Plus, I considered my options. With that same AT&T business plan, here is what the cost would have been for the next 2 years:

  • Upfront cost for the iPhone 6 Plus 64 GB: $435.91 ($399 + tax) [2]
  • Monthly cost: $74
  • Monthly data: 3 GB
  • Contract duration: 2 years
  • Total cost of ownership: $2,211.90 ($92 / month)

The price of an unlocked iPhone 6 Plus 64 GB, bought directly from Apple, is $927.53 ($849.00 without sales tax). If we spread the total cost over 2 years, we get the following breakdown:

  • Monthly device payment: $927.53 / 24 = $38.65
  • Monthly service cost: $92 - $38.65 = $53.35

Looking at the monthly cost over the same period of time is useful as it allows us to do meaningful comparisons.

Now let’s look at the T-Mobile plans advertised for the iPhone 6. They give you quite a bit (unlimited talk, text and data with data throttling), but they are not cheap: they range from $50 to $80 per month, “plus taxes, fees and monthly device payment”, that is without phone subsidy. They mainly differ by the amount of 4G LTE data you get (from 1 GB to unlimited, and then “your data speed will automatically convert to up to 2G web speeds for the remainder of your billing cycle”). [3]

For my data usage I would probably need the $60 plan (which, remember, doesn’t include taxes and fees, so is probably at least $65 in practice) to have something equivalent to my AT&T plan. This is about $12 more per month ($288 more over 24 months) than my previous AT&T service.

In short, T-Mobile is not a particularly good deal if you care mostly about 4G data. [4] And, by the way, AT&T now has comparable prices as well.

But luckily there is more: T-Mobile also offers prepaid plans. And although the flagship prepaid plans that T-Mobile advertises are the same as their regular plans, you will find, hidden in plain sight, the following:

$30 per month - Unlimited web and text with 100 minutes talk

100 minutes talk | Unlimited text | First 5 GB at up to 4G speeds

Now get unlimited international texting from the U.S. to virtually anywhere included in your plan—at no extra charge.

This plan is only available for devices purchased from Wal-Mart or devices activated on

I had heard of this plan from friends who have been using it for quite a while with Android phones. T-Mobile clearly doesn’t want you to know too much about this: it is a little bit buried, and details of the plan are lacking. But it’s there! [5]

The question now is: does this work with the iPhone 6 Plus? The answer is yes, it does work! Here is what you have to do:

  1. Buy your unlocked (“Contract-free for use on T-Mobile”) iPhone 6 (or 6 Plus) from Apple. [6]
  2. Order the T-Mobile SIM Starter Kit with nano SIM. The kit is $10 but T-Mobile sometimes has promotions (I bought the kit for one cent).
  3. Don’t activate the T-Mobile SIM which comes with your iPhone. [7]
  4. Once you receive the SIM, place it in your iPhone.
  5. Proceed with activation online [8] and choose the $30 plan. [9]
  6. Profit!!!

So now let’s look at the total cost of ownership of this solution over two years:

  • iPhone 6 Plus 64 GB, unlocked, with sales tax: $927.53
  • Monthly cost of plan: $30 [10]
  • Monthly data: 5 GB
  • Total provider cost over 2 years: $30 × 24 = $720
  • Total cost per month including the iPhone: $68.65
  • Total cost of ownership: $1,647.53
  • Savings over my earlier AT&T plan over 2 years: $564.38
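The arithmetic above can be double-checked in a few lines (all figures are taken directly from this post; the AT&T total comes out a cent above the rounded $2,211.90 quoted earlier):

```scala
val attTco = 435.91 + 74.0 * 24    // AT&T: upfront + 24 months of service ≈ 2211.91
val tmoTco = 927.53 + 30.0 * 24    // T-Mobile: unlocked phone + 24 months prepaid = 1647.53

val savings = attTco - tmoTco      // ≈ 564.38
```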

Of course, this is still not cheap overall, but it’s a bit better, and in addition I get:

  • 2 GB more 4G data per month than with the AT&T business plan
  • tethering [11]
  • an unlocked phone which I can use on many networks around the world
  • no contract commitment whatsoever
  • the ability to upgrade the phone at any time (just sell it and buy a new one!)
  • the pleasure of giving money to a company a little bit less evil than AT&T and Verizon

There are drawbacks to this solution, in particular:

  • It is unclear whether I could have ported my phone number and still qualify for a “new activation”. I did not try it because I use Google Voice to forward my calls anyway.
  • You are on the T-Mobile network, and this means that you won’t have as much coverage as with AT&T or Verizon.
  • This can be seen as a benefit or a drawback: you have to pay upfront for the phone, and T-Mobile won’t help you pay for it when you get prepaid plans.
  • There are far fewer voice minutes (an option for calling regular phones is to use Skype, Google Hangouts, or another VoIP solution).
  • It is unclear whether fancy features such as Wi-Fi calling [12] or VoLTE are or will be enabled. But since these are voice features and this solution is for people who care more about data than voice, it doesn’t matter much to me.

No matter what, I will see how this fares over the next few months, and in the meanwhile I hope this post will be useful to others!

Disclaimer: This has been working for friends with Android phones and appears to be working for me so far with the iPhone 6 Plus, but I cannot be held responsible if you go this route and have issues of any kind.

For another article comparing plans by major providers, see iPhone 6 Plans Compared: AT&T, Verizon, Sprint, and T-Mobile. Keep in mind that this looks at an iPhone 6, not the 6 Plus (so about $100 of difference) and only 2 GB / month plans.

  1. If you have a company, I recommend you ask AT&T about these plans. You get great customer support, and contrary to business cable, you get more for your money compared with consumer plans.  ↩

  2. This is of course not the full price of the phone. It is a downpayment you make on it, and you pay for your phone as part of your monthly plan, in ways which until recently were usually not detailed by providers.  ↩

  3. T-Mobile also has a “Simple Starter 2GB Plan” for $45/month, which includes 2 GB of 4G LTE data, but then cuts out your data. This is not really an option for me.  ↩

  4. T-Mobile also has business plans, but for one line the prices are the same.  ↩

  5. In fact, it is surprising that they even have this plan at all on their site. It makes sense at Wal-Mart, but online? Could it be that they legally have to list it on their site if they provide it at Wal-Mart? I would be curious to know.  ↩

  6. That’s what I did. It might be the same if you get it from T-Mobile, but I haven’t tried.  ↩

  7. I didn’t try to activate it, but I suspect that the activation instructions would lead you to the regular T-Mobile plans without including the $30 prepaid plan. Since the SIM kit was $0.01, I figured I would go the safer route. But even for $10 the price remains reasonable.  ↩

  8. Ignore the voice-based activation which starts when you turn on the phone. Also, I had some trouble with Chrome and then switched to Firefox.  ↩

  9. The plan is marked “for new activations only”, and I am not sure what it means, although by any definition of “new activation” I can think of, mine was a “new activation”.  ↩

  10. And by the way the plan is a round $30 per month: there are no additional taxes or fees.  ↩

  11. With some devices, such as the Nexus 5, tethering is disabled by T-Mobile, while it works fine with the Nexus 4. It is entirely possible that T-Mobile will disable tethering on the iPhone 6 when they get to it. But for now it works.  ↩

  12. Although the T-Mobile site says “WiFi Calling for all T-Mobile customers with a capable device”.  ↩