Tuesday, June 3, 2014

Thoughts on the Swift language

What it is

I am not a language designer but I love programming languages, so I can’t resist putting down a few rough thoughts on Swift, the new programming language announced on Monday by Apple. It is designed to make Objective-C, the main language used to build apps on iOS and OS X, a thing of the past. I think it’s fair to say that this was, for developers, the highlight of Monday’s WWDC keynote.

Objective-C is a dinosaur of a language, invented in the early 1980s. If you know any relatively more modern higher-level language (pick one, including C#, Scala, even Hack), it is clear that Objective-C has too much historical baggage and not enough of the features programmers expect.

John Siracusa captured the general idea in his 2005 article Avoiding Copland 2010 and its revision, Copland 2010 revisited: Apple’s language and API future, and he has kept building a really good case since then, on various podcasts, that Apple had to get its act together. Something, anything, had to be done. [1]

There was a possibility that Apple would keep patching Objective-C, moving toward a superset of a safe subset of it. But I don’t think that anybody not working at Apple saw Swift coming that, well, swiftly. [2]

Why this is good for programmers

Reactions to Swift so far seem mostly positive. (I don’t tend to take the negative reactions I have seen seriously, as they are not well argued.) As Jeff Atwood tweeted: “TIL nobody actually liked Objective-C.” I share the positive feeling for three reasons:

First, I believe that programming languages matter:

  • they can make developers more or less productive,
  • they can encourage or instead discourage entire classes of errors,
  • they can help or hinder reuse of code,
  • they can make developers more or less happy.

With brute force and billions of dollars, you can overcome many programming language deficiencies. But it remains a waste of valuable resources to write code in an inferior language. Apple has now shown that it understands this and has acted on it, and it should be commended for that.

Second, concepts which many Objective-C developers might not have been familiar with, like closures, immutable variables, functional programming, generics, pattern matching, and probably more, will now be absorbed and understood. This will lead to better, more maintainable programs. This will also make these developers interested in other languages, like Scala, which push some of these concepts further. The bar will be generally raised.
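To make a few of these concepts concrete, here is a minimal sketch of my own (in rough Swift 1.x-era syntax, not taken from Apple’s documentation) showing an immutable value, a closure passed to map, a small generic function, and pattern matching over a tuple:

    // Immutable value ("let") and a closure passed to map
    let numbers = [1, 2, 3, 4]
    let doubled = numbers.map { $0 * 2 }        // [2, 4, 6, 8]

    // A generic function: works for any element type T
    func firstAndLast<T>(xs: [T]) -> (T, T)? {
        if xs.isEmpty { return nil }
        return (xs[0], xs[xs.count - 1])
    }

    // Pattern matching on a tuple with switch
    let point = (0, 5)
    switch point {
    case (0, 0):
        println("origin")
    case (0, let y):
        println("on the y axis at \(y)")
    default:
        println("somewhere else")
    }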

Finally, arguments over the heavy, ugly syntax of Objective-C, and its lack of modern features can be put to rest: Apple has decided the future path for iOS and OS X developers. That ship has sailed.

Where it fits

What kind of language is Swift? I noticed on Twitter that many had a bit of trouble positioning the language. Did Apple reinvent JavaScript? Or Go? Is Swift functional first? Is it even like Scala? What about C#? Or Clojure or XQuery?

I haven’t seen anything in Swift that is not in other programming languages. In fact, Swift features can be found in dozens of other languages (in Lattner’s own words, “drawing ideas from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list”), and that’s why many have found similarities with their language of choice. So Swift is not “innovative”. Instead it is a reasonable mix and match of features which make sense for the Apple ecosystem.

Here are a few essential aspects of Swift which are not language features but which put it in context. These all appear to be essential to Apple:

  1. Owned by Apple: Swift is fully owned by Apple. It does not depend on Oracle (Java/JVM), Microsoft (.NET), or Google.

  2. Objective-C integration: Swift is designed to integrate really well with Objective-C. In fact, this is likely the second most important reason Apple felt they had to create their own language (in addition to ownership). There are precedents: Groovy, Scala, Clojure, Kotlin, Ceylon and others are designed to interoperate well with Java; CoffeeScript with JavaScript; Hack with PHP; Microsoft’s CLR was designed from the get-go as a multi-language VM. This is important for initial adoption, so that existing code can be reused and the new language progressively introduced (a small interop sketch follows this list). It would have been possible, but much harder, for Apple to pick an existing language.

  3. Static typing: Swift is a statically-typed language. There is type inference, which means you don’t have to actually write down the types everywhere, in particular within functions. But the types are there nonetheless. So Swift can look like a dynamic language, but it is not one (see the type-inference sketch after this list). [3]

  4. A dynamic feel: This is part of the “modern” aspect of Swift: a move toward concision which appeals to programmers used to dynamic languages, but with the presence of static typing under the hood. This combination of terseness and static typing is something Swift shares with Scala.

    Swift has a REPL and Playgrounds (the interactive demo by Chris Lattner looked impressive), which include what some other environments call “worksheets” and a bit more. Clearly that’s the direction development tools are taking. All of this is becoming mainstream, which again raises the bar.

  5. Native compilation: Swift is compiled down to native code, like C, C++, Objective-C, Go, and Rust. There is no interpreter or VM of the kind found in Java, JavaScript, C#, Ruby, PHP, or dynamic languages in general; there is only the small Objective-C runtime. Also, Swift doesn’t have a real garbage collector: it uses automatic reference counting (ARC).

    Swift is a bit odd in that native compilation and the lack of full garbage collection make it closer to a systems language, yet it is clearly designed to build applications. I wish the balance had moved more toward the higher level rather than the lower level, but it’s an interesting middle ground.
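Here is the small interop sketch mentioned above: a minimal, illustrative example (not from Apple’s documentation) of calling Cocoa classes directly from Swift, with Objective-C selectors imported as methods and NSString bridging to String:

    import Foundation

    // NSDateFormatter and NSDate are plain Cocoa (Objective-C) classes
    let formatter = NSDateFormatter()
    formatter.dateStyle = .MediumStyle
    formatter.timeStyle = .ShortStyle

    // -stringFromDate: is imported as a Swift method; the result bridges to String
    let label: String = formatter.stringFromDate(NSDate())
    println(label)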
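And the type-inference sketch: everything below is statically typed, even though most types are never written out (again, my own illustration in Swift 1.x-era syntax):

    let greeting = "Hello"          // inferred as String
    let explicit: String = "Hello"  // exactly the same static type

    var count = 1                   // inferred as Int
    count = 2                       // fine
    // count = "two"                // compile-time error: a String is not an Int

    // Only the signature is spelled out; locals are inferred
    func describe(n: Int) -> String {
        let parity = n % 2 == 0 ? "even" : "odd"
        return "\(n) is \(parity)"
    }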

What’s disappointing

Here are a few aspects of Swift which, at first glance, disappoint me a bit. Keep in mind that this is a first version of Swift, which has room to grow:

  1. Openness: So far Apple has not announced that the Swift compiler will be open source, like the Objective-C compiler. This is a big question mark. Opening the compiler would be the right thing for them to do, and I am hopeful that they will.

  2. Garbage collection: It’s likely that Apple considered ARC good enough in most situations, and it makes interoperability with Objective-C (compatibility in terms of memory management) much easier to handle. Still, this bothers me: the lack of proper garbage collection means more memory bugs to hunt down.

  3. Concurrency support: Swift doesn’t have async/await (as in C#, Scala, and soon JavaScript), or futures and promises. Async support is as important in client apps as in server apps. (For now, concurrency comes from platform libraries such as GCD; a sketch follows this list.)

  4. Type system: The type system appears very simple. This might be seen as good or bad. The reference book doesn’t even mention the word “variance”. (I suppose Swift picks a default, but doesn’t allow programmers to control that.)

  5. Persistent data structures: There don’t seem to be persistent data structures (which are truly immutable yet can be updated efficiently thanks to structural sharing), as in Clojure and Scala. These are incredible tools which many programmers have now found to be essential. Immutability, in general, gives you much increased confidence in the correctness of your code. I would miss them in Swift.

  6. Well, innovation: Dart, Go, Hack, and Swift show that it is very hard for big companies to come up with something really unique in their programming languages. Academia remains the place where new ideas are born and grow. Still, it would have been nice if there were one or two new things in Swift that made it special, such as Scala’s implicits, which have turned out to have far-reaching consequences (several of which I really like).
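As for concurrency, here is the sketch mentioned in point 3: a rough illustration (not from the Swift book) of how one reaches for Grand Central Dispatch with Swift closures in the absence of language-level async support. The GCD function names shown are the C API as imported into Swift at the time:

    import Foundation

    // Run work on a background queue, then hop back to the main queue
    let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

    dispatch_async(queue) {
        var sum = 0
        for i in 1...1_000_000 { sum += i }
        dispatch_async(dispatch_get_main_queue()) {
            println("sum = \(sum)")
        }
    }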

Browser and server

I am curious whether Swift will see adoption on the server, for services. It might make sense for Apple to use Swift internally for their services, although having a language is not enough: you need proper infrastructure for concurrent and distributed computing. Swift is not there yet. But it could be in the future. This is a bit less important to Apple than the client at this time.

What about the browser? Could one conceivably create a Swift-to-JavaScript compiler? I don’t see why not. JVM languages, from Java to Clojure to Scala, now compile to JavaScript. Swift currently uses ARC, but in a browser environment it could probably work with the JavaScript VM’s garbage collector.

So there might be room, in years to come, for Swift to conquer more environments.

Google!

Where does Google stand with regard to this? It’s curious, but I now think that it is Google which might have a programming language problem! Android uses Java but, as famous programming languages guy Erik Meijer tweeted, “Swift makes Java look tired.” (To be fair, most languages make Java look tired.)

Google also has Dart, which so far hasn’t been positioned as a language for developing Android or server apps. But maybe that will come. Go is liked by some for certain types of server applications, but it is even more of a “systems language” than Swift, and again Google hasn’t committed to it as a language for writing Android apps.

Will Google come up with yet another programming language, targeted at Android? The future will tell. If it were up to me, which of course it isn’t, Scala or a successor would be my choice as a great, forward-looking language for Android. And Google could point their Android developers to Scala and say: “Look, it looks very much like Swift, which you already know!” ;)

Did I miss anything? Let me know in the comments or on Twitter.


  1. Back in 2009 I even tweeted:  ↩

    MS has Anders Hejlsberg (C#). The JVM world has Martin Odersky (Scala). Apple should work with Odersky on the next language for OS X.

    Obviously it wasn’t Odersky, but Chris Lattner, who got to be the mastermind of Swift.

  2. Good job by Apple, by the way, to have managed to keep it under wraps so well since July 2010!  ↩

  3. There is a difference with languages that have optional types, like Dart and Hack. Dynamic, optionally typed, and statically typed languages can, from a syntax perspective, look very similar. But under the hood some pretty different things take place.  ↩

34 comments:

Nathan Youngman said...

With Android Studio based on IntelliJ, my bet for Android is a rebranded Kotlin.


Go can be used to write (parts of) Android apps, but I don't imagine Google officially supporting it in the near future (as much as I like Go, my current favourite).

Erik Bruchez said...

Could Google make Kotlin a success? Scala is well supported in IntelliJ and is already quite successful and liked in the industry.

Arturo Hernandez said...

I know there will be a Swift v2, if it is not already being worked on, and they will make up for a lot of the shortcomings. To me the biggest news is the strategy. Rather than releasing a better language on the side, like Microsoft did a long time ago, Apple is betting its future on it. Way to go! There are a lot of things I don't like about Apple, but this willingness to take the right risks is commendable.

Erik Bruchez said...

I agree. They put Swift front and center, and developers are kindly asked to learn it and quickly forget about Objective-C. That is pretty bold.

jameskatt said...

Swift is innovative.


From Horace Dediu http://www.asymco.com/2014/04/16/innoveracy-misunderstanding-innovation/
Novelty: Something new
Creation: Something new and valuable
Invention: Something new, having potential value through utility
Innovation: Something new and uniquely useful


Swift is something new. And it is uniquely useful. It is going to be the primary language for Apple software development.


You have to look at Apple's intent when examining Swift:


To improve on Objective-C.
To simplify the readability of the language so that young and new programmers may easily learn to program on iOS and OS X.
To create a language that can do everything from systems software to applications. This is why it does not leave its system software roots for higher-level features.
To integrate completely with Objective-C. This is so its programmers do not have to rewrite all of their code. Rather, there will be a gradual transition to the complete use of Swift in all Apple OS programming.
A faster, rapid development system - e.g. with Playgrounds and a REPL. This increases developer interest in creating apps for Apple.
Faster-running apps.


The good thing is that Swift can and will be upgraded just as Objective-C has been upgraded and maintained by Apple.


Apple is capable of maintaining and growing a language. Since too many competitors simply copy Apple, I tend to see Swift as remaining closed source and owned by Apple. Apple doesn't need to become Oracle/Sun where Google simply took and abused its IP.


Apple can certainly add missing pieces if they serve to improve its purposes. Swift is designed as a general-purpose, mass-consumption language capable of creating anything from system software to apps. As such, not every favorite feature or idea will be suitable.


Swift is the future for Apple developers. The line has been drawn in the sand. And Apple has bet the company on it.


And every developer will have to learn it because soon thousands of Swift apps are going to be released by new developers hungry to compete with established developers. In one year, every one of the 9 million Apple developers will know Swift and will be programming in it. Hundreds of books will soon be written about Swift. Stanford will have courses on Swift. And more programmers will be pulled into the Swift camp. With Apple's new developer tools, Swift will usher in a new, greater era for Apple app developers and consumers.

Erik Bruchez said...

It's unclear to me whether you disagree with me on anything besides the meaning of "innovative". I for one agree with you on everything you wrote above except perhaps the meaning of that word. I respect Horace Dediu a lot, and I may or may not argue about his taxonomy after thinking more about it, but the point of my post is not to argue about the meaning of that word.

Seashell said...

I decided to check out Swift and it only took me a couple of hours to hit a total blocker -- lack of reflection capabilities for dynamically creating instances. Couple that with the fact that I could crash the Playground env about every 5 minutes, and I decided to hold off for a while.


Admittedly, I am not proficient with Xcode, but compared to the IntelliJ products, it's like a Stanley Steamer. Hopefully JetBrains will support Swift soon.


The two things I like least about Swift are the funky named args in function calls (is there a way to avoid that?), and I think it's a total design flaw to make an Array or Dictionary mutable by making the reference to it mutable. Apple needs to rethink that NOW, before the final release, because that will be hard to fix.


In general I am very upbeat about the direction. I really like the fact that mutability is designed in with let/var, and I even like the way optional values are handled (except it's weird that you are required to use "if let ..." to test the optional value).

Erik Bruchez said...

Hopefully they will fix a few things before the final release. I realize that Swift sets out to do things, such as passing by value, which Scala doesn't pretend to do, but I like that in Scala the notions of mutable/immutable values or references and of mutable/immutable collections are distinct. It makes it easier to understand the rules.

Also, I have started to appreciate that in Scala, Option is a library feature and behaves like any other algebraic data type. So at the language level there is no funky syntax (although I would appreciate it if the compiler could optimize Option!). It seems that Swift went for more syntax instead of less here.
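For instance, here is a minimal, illustrative sketch (Swift 1.x-era syntax, my own example) of that dedicated optional syntax, where "?" marks the type and "if let" unwraps it:

    let ages = ["Alice": 42, "Bob": 7]

    func lookupAge(name: String) -> Int? {
        return ages[name]            // a Dictionary lookup returns Int?
    }

    if let age = lookupAge("Alice") {
        println("Alice is \(age)")
    } else {
        println("No age found")
    }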

Thibault ML said...

I am confused: why do you think "lack of garbage collection" is a bad thing? When I thought I heard it was GCed, my heart jumped in fear.
Apple deprecated garbage collection in Obj-C long ago (even before ARC), because they knew it was going nowhere good. Instead, they went with ARC, which does the job better than a GC, or even a human, most of the time.
What are your arguments for GC (memory management at runtime) being better than ARC (memory management at compile time)?

On concurrency:
While Swift itself doesn't have it integrated in the language, it is able to use GCD with blocks, just like Objective-C.
So yes, the language itself doesn't have concurrency support. But the GCD framework brings support for it.
Similarly, it is my understanding that C# by itself doesn't have concurrency support either. Instead, it is available in the .NET framework. Without .NET, no concurrency in C#. In that instance, I think it's unfair to compare a language to frameworks :-).
Objective-C is rarely used without Cocoa (AppKit, Foundation, etc.), and C# is rarely used without .NET. I expect Swift to rarely be used without Cocoa, similarly :-)

Erik Bruchez said...

> Why do you think "lack of garbage collection" is a bad thing?

You sound like I am the only person on earth in favor of a proper garbage collector, but I am not. Garbage collection and ARC both have benefits and drawbacks. See for example:

http://www.elementswiki.com/en/Automatic_Reference_Counting_vs._Garbage_Collection

I have spent most of my programming life with proper garbage collection, and I'd rather not go back to working manually to break cycles. From this point of view I regard reference counting in general, and ARC in particular, as inferior solutions.
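To make the cycle problem concrete, here is a minimal sketch of my own (not from Apple's documentation): a child object holding a strong reference back to its parent would leak under ARC unless the back-reference is marked weak by hand:

    class Parent {
        var children: [Child] = []
    }

    class Child {
        weak var parent: Parent?     // "weak" is what breaks the cycle
    }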

Even John Carmack is in favor of garbage collection in the context of games (jump to about 24 minutes into the video):

https://www.youtube.com/watch?v=1PhArSujR_A

Even leaving games aside, for most applications I think that proper garbage collection is more desirable because it frees the programmer's mind an extra notch.

I suspect that the main reason Apple introduced ARC and kept it for Swift is that garbage collectors don't play well in environments which contain memory handled by different memory management systems, including C and C++ code. It seems that at some point Apple thought garbage collection would work and introduced it on OS X for Objective-C, but then realized that interoperability with libraries not using garbage collection was problematic. I am not sure what the whole story is here.

There is a performance argument to be made too, and it probably goes both ways.

In short I can see how it makes sense for Apple to have gone with ARC at this point given what they already have in their platform. But it doesn't mean I have to like that Swift doesn't have garbage collection.

> On concurrency

I agree with you that features do not necessarily need to be in the language proper. Scala is a great example of this, in fact one of the languages which pushes this the furthest: futures, immutable and parallel collections, even await/async are not built into Scala-the-language at all. They are all at the level of libraries.

So I didn't mean to say that those features have to be directly embedded into the language. I mean language plus libraries. And at this point Swift doesn't have much in this respect, and I was just lamenting the fact. Of course, I expect Apple will add more concurrency support to their frameworks.

Now still, the language itself needs strong built-in support for a few things to support upcoming libraries. Immutability comes to mind. Swift has some related features, but I don't yet know how good that support really is.

Thibault ML said...

Right, I see. Thanks for sharing :-)


I personally have never liked GC. I guess it's because I started with C, where I've always tried to be very rigorous with my memory management, and I found GC to be both "too magical" and something that slows things down when I'd rather it didn't. Slightly too unpredictable, in the end.
ARC, I still have issues using it, for the same reason of "I don't know exactly how it happens". But I know it more, I understand it more, and I can hint to the compiler how to do things if I know it's not going to do what I want.
Also, Xcode includes Clang's analyser which is really, really good at detecting potential leaks. Not flawless, but still really good.


From my point of view, while GC has advantages over ARC, I still think that ARC is the better, safer solution. But again, that's just my point of view, linked to my preference for understanding how memory is managed :-)


I guess, in the end, whatever suits the developer, but I still would disagree that "not having garbage collection" is a negative side of Swift. Especially given the fact that the language is supposed to run on phones, within memory-limited environments (granted, it's less limited than it used to be, but still).

TAH2 said...

Indeed. My coworker would say that anyone who is stuck on garbage collection isn't a proper engineer (has probably been doing Java since college, and wouldn't know what a pointer is even if it bit him on the nose).


GC does not work in systems programming, real-time systems, embedded systems, or anywhere you need determinism. Phones are one such example. The little you trade off for ARC gives the language a much broader range of targets.


Keep in mind that Objective-C HAD GC; Apple dropped support for GC in favor of ARC for a reason.

TAH2 said...

I would say it is very innovative in that it is the first real high-level language, natively compiled, for systems and application development. It blurs the lines between JIT/interpreted languages like Ruby/Python and compiled languages like C. I've been searching for something that can seriously replace C or C++ in embedded and systems development for years... this is the first thing I've seen that could finally dethrone C. I hope they open source it.


Swift is the first serious-looking systems-language replacement for C, Objective-C, or C++ I've seen outside of Rust, and Rust is still years from application-ready. Apple said that in 24 hours they had 200k downloads of the spec; I bet that is more than Rust has ever had in its whole history. Chris Lattner + Apple's backing + a horde of iOS devs could turn this into the most serious "new" language in decades.

Erik Bruchez said...

I am not sure if you intended to reply to my comment or to the grandparent. My intent was not to disqualify garbage collection, just to recognize that there exist benefits to ARC. But there are also drawbacks.

It is one thing to recognize the existence of benefits and drawbacks, and another to make sweeping statements as you do. It is not a matter of being "stuck" on garbage collection, and I would say that your coworker is not thinking very far, while along the way insulting millions of extremely knowledgeable people using garbage collected languages every day.

Second, if GC doesn't work on phones, you better tell Google quick that Android phones don't work. Yet there are "one billion active Android users and counting" (http://www.zdnet.com/google-io-android-stands-at-one-billion-active-users-and-counting-7000030881/).

Erik Bruchez said...

I have now decided not to use "innovative" anymore, because it just isn't a very useful word. But I agree with your sentiment that the language places itself in an interesting middle ground, and there is no doubt in my mind that the language will be successful.


By the way, I think it would be successful even if it were not very well designed, because it is so much closer to what developers want than Objective-C, and it is going to be the main language for developing on iOS and OS X no matter what. What is unclear is whether the language will conquer spheres outside of Apple.

Jako said...

It's not the "first real high-level language for native compiled systems".
Look at Nimrod, D, Go, Julia, Rust.

Jako said...

C# does have the async/await keywords.
They are not strictly about concurrency but about asynchronous execution; but anyway, Swift doesn't have them.

