Tuesday, June 3, 2014

Thoughts on the Swift language

What it is

I am not a language designer but I love programming languages, so I can’t resist putting down a few rough thoughts on Swift, the new programming language announced on Monday by Apple. It is designed to make Objective-C, the main language used to build apps on iOS and OS X, a thing of the past. I think it’s fair to say that this was, for developers, the highlight of Monday’s WWDC keynote.

Objective-C is a dinosaur of a language, invented in the early 1980s. If you know any relatively more modern higher-level language (pick one: C#, Scala, even Hack), it is clear that Objective-C has too much historical baggage and not enough of the features programmers now expect.

John Siracusa captured the general idea in his 2005 article Avoiding Copland 2010 and its revision, Copland 2010 revisited: Apple’s language and API future, and has since kept building a really good case, in various podcasts, that Apple had to get its act together. Something, anything, had to be done. [1]

There was a possibility that Apple would keep patching Objective-C, moving toward a superset of a safe subset of it. But I don’t think that anybody not working at Apple saw Swift coming that, well, swiftly. [2]

Why this is good for programmers

Reactions to Swift so far seem mostly positive. (I don’t take the negative reactions I have seen seriously, as they are not well argued.) As Jeff Atwood tweeted: “TIL nobody actually liked Objective-C.” I share the positive feeling, for three reasons:

First, I believe that programming languages matter:

  • they can make developers more or less productive,
  • they can encourage or instead discourage entire classes of errors,
  • they can help or hinder reuse of code,
  • they can make developers more or less happy.

With brute force and billions of dollars, you can overcome many programming language deficiencies. But it remains a waste of valuable resources to write code in an inferior language. Apple has now shown that it understands this and has acted on it, and it should be commended for that.

Second, concepts which many Objective-C developers might not have been familiar with, like closures, immutable variables, functional programming, generics, pattern matching, and probably more, will now be absorbed and understood. This will lead to better, more maintainable programs. This will also make these developers interested in other languages, like Scala, which push some of these concepts further. The bar will be generally raised.
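To make a couple of these concepts concrete, here is a small sketch (in later Swift syntax; all names are mine, and details varied across early Swift versions):

```swift
// A closure and a generic function, two of the concepts now front and center:
func applyTwice<T>(_ f: (T) -> T, to x: T) -> T {
    return f(f(x))
}
let inc = { (n: Int) in n + 1 }   // a closure bound to an immutable name
print(applyTwice(inc, to: 40))    // prints 42

// Pattern matching over an enum with associated values:
enum Outcome {
    case success(Int)
    case failure(String)
}
let r = Outcome.success(7)
switch r {
case .success(let value):   print("got \(value)")
case .failure(let message): print("failed: \(message)")
}
```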

Finally, arguments over the heavy, ugly syntax of Objective-C, and its lack of modern features can be put to rest: Apple has decided the future path for iOS and OS X developers. That ship has sailed.

Where it fits

What kind of language is Swift? I noticed on Twitter that many had a bit of trouble positioning the language. Did Apple reinvent JavaScript? Or Go? Is Swift functional first? Is it even like Scala? What about C#? Or Clojure or XQuery?

I haven’t seen anything in Swift that is not in other programming languages. In fact, Swift features can be found in dozens of other languages (in Lattner’s own words, “drawing ideas from Objective-C, Rust, Haskell, Ruby, Python, C#, CLU, and far too many others to list”), and that’s why many have found similarities with their language of choice. So Swift is not “innovative”. Instead it is a reasonable mix and match of features which make sense for the Apple ecosystem.

Here are a few essential aspects of Swift which are not language features but which put it in context. These all appear to be essential to Apple:

  1. Owned by Apple: Swift is fully owned by Apple. It does not depend on Oracle (Java/JVM), Microsoft (.NET), or Google.

  2. Objective-C integration: Swift is designed to integrate really well with Objective-C. In fact, this is likely the second most important reason Apple felt they had to create their own language (in addition to ownership). There are precedents: Groovy, Scala, Clojure, Kotlin, Ceylon and others are designed to interoperate well with Java; CoffeeScript with JavaScript; Hack with PHP; Microsoft’s CLR was designed from the get go as a multi-language VM. This is important for initial adoption so that existing code can be reused and the new language progressively introduced. It would have been possible, but much harder, for Apple to pick an existing language.

  3. Static typing: Swift is a statically-typed language. There is type inference, which means you don’t actually have to write down the types everywhere, in particular within functions. But the types are there nonetheless. So Swift can look like a dynamic language, but it is not one. [3]
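    A minimal sketch of what this looks like (the annotations in comments are mine):

```swift
// Types are inferred, but they are fully static:
let city = "Cupertino"       // inferred as String
let year = 2014              // inferred as Int

// A mismatch is rejected at compile time, not at runtime:
// let wrong: Int = city     // error: cannot convert String to Int

// Inside functions, inference keeps code terse while staying typed:
func double(_ n: Int) -> Int { return n * 2 }
let result = double(year)    // result is inferred as Int
```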

  4. A dynamic feel: This is part of the “modern” aspect of Swift: a move toward concision which appeals to programmers used to dynamic languages, but with the presence of static typing under the hood. This combination of terseness and static typing is something Swift shares with Scala.

    Swift has a REPL and Playgrounds (the interactive demo by Chris Lattner looked impressive), which includes what some other environments call “worksheets” and a bit more. Clearly that’s the direction development tools are taking. All of this is becoming mainstream, which again raises the bar.

  5. Native compilation: Swift is compiled down to native code, like C, C++, Objective-C, Go, and Rust. There is no interpreter or VM as in Java, JavaScript, C#, Ruby, PHP, and most dynamic languages; only the small Objective-C runtime remains. Also, Swift doesn’t have a real garbage collector: it uses automatic reference counting (ARC).

    Swift is a bit odd in that native compilation and the lack of full garbage collection make it closer to a systems language, yet it is clearly designed to build applications. I wish the balance had tilted more toward the higher level than the lower level, but it’s an interesting middle ground.

What’s disappointing

Here are a few aspects of Swift which, at first glance, disappoint me a bit, keeping in mind that this is a first version of Swift which has room to grow:

  1. Openness: So far Apple has not announced that the Swift compiler will be open source, as the Objective-C compiler is. This is a big question mark. Opening the compiler would be the right thing for them to do, and I am hopeful that they will.

  2. Garbage collection: It’s likely that Apple considered ARC good enough in most situations; it also makes interoperability with Objective-C (compatibility in terms of memory management) much easier to handle. Still, this would give me trouble: the lack of proper garbage collection means more memory bugs to hunt down.
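    For example, ARC cannot reclaim reference cycles on its own; the programmer has to break them manually with `weak`. A small sketch, using class names of my own invention:

```swift
// Two classes that reference each other; the back-reference must be
// declared weak, or ARC will never free either object (a retain cycle):
class Parent {
    var child: Child?
}
class Child {
    weak var parent: Parent?   // without `weak`, this pair would leak
}

var p: Parent? = Parent()
p?.child = Child()
p?.child?.parent = p

weak var observer: Parent? = p
p = nil                        // both objects are deallocated,
                               // so the weak observer becomes nil
```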

  3. Concurrency support: Swift doesn’t have async/await, as C#, Scala, and soon JavaScript do, nor futures and promises. Async support is as important in client apps as in server apps.
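    What this means in practice, sketched with a hypothetical `fetchUser` function (completion-handler style, as one would write with GCD; the names are mine and the actual network call is omitted):

```swift
// Without async/await or futures, asynchronous work in Swift means
// completion-handler closures:
func fetchUser(id: Int, completion: (String) -> Void) {
    // imagine an asynchronous network call here; the callback fires when done
    completion("user-\(id)")
}

// Dependent operations nest, producing the "pyramid of doom"
// that async/await or futures would flatten:
fetchUser(id: 1) { first in
    fetchUser(id: 2) { second in
        print("\(first), \(second)")
    }
}
```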

  4. Type system: The type system appears very simple. This might be seen as good or bad. The reference book doesn’t even mention the word “variance”. (I suppose Swift picks a default, but doesn’t allow programmers to control that.)

  5. Persistent data structures: There don’t seem to be persistent data structures (truly immutable, yet efficiently updatable thanks to structural sharing), as in Clojure and Scala. These are incredible tools which many programmers have now found to be essential. Immutability in general gives you much increased confidence in the correctness of your code. I would miss them in Swift.
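    For contrast, here is what Swift does offer: `let` constants with value semantics. This sketch uses the collection semantics Apple settled on after the first betas; updating always means a logical copy, with no structural sharing exposed at the language level:

```swift
// `let` makes a Swift array constant:
let xs = [1, 2, 3]
// xs.append(4)     // compile-time error: cannot mutate a `let` array

// Value semantics: assigning yields an independent copy, not a shared,
// structurally-persistent version as in Clojure or Scala:
var ys = xs
ys.append(4)
print(xs)            // [1, 2, 3]  (unchanged)
print(ys)            // [1, 2, 3, 4]
```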

  6. Well, innovation: Dart, Go, Hack, and Swift show that it is very hard for big companies to come up with something really unique in their programming languages. Academia remains the place where new ideas are born and grow. Still, it would have been nice if there were one or two new things in Swift to make it special, like, for example, Scala’s implicits, which have turned out to have far-reaching consequences (several of which I really like).

Browser and server

I am curious to see if Swift will see adoption on the server, for services. It might make sense for Apple to use Swift internally for their services, although having a language is not enough: you need proper infrastructure for concurrent and distributed computing. Swift is not there yet. But it could be in the future. This is a bit less important to Apple than the client at this time.

What about the browser? Could one conceivably create a Swift-to-JavaScript compiler? I don’t see why not. JVM languages, from Java to Clojure to Scala, now compile to JavaScript. Swift currently uses ARC, but in a browser environment it could probably work with the JavaScript VM’s garbage collector.

So there might be room, in years to come, for Swift to conquer more environments.


Where does Google stand with regard to all this? It’s curious, but I now think that it’s Google which might have a programming language problem! Android uses Java but, as famous programming languages guy Erik Meijer tweeted, “Swift makes Java look tired.” (To be fair, most languages make Java look tired.)

Google also has Dart, which so far hasn’t been positioned as a language to develop Android or server apps. But maybe that will come. Go is liked by some for certain types of server applications, but is even more of a “systems language” than Swift, and again Google hasn’t committed to bringing it as a language to write Android apps.

Will Google come up with yet another programming language, targeted at Android? The future will tell. If it were me, which of course it isn’t, Scala or a successor would be my choice as a great, forward-looking language for Android. And Google could point their Android developers to Scala and say “Look, it looks very much like Swift, which you already know!” ;)

Did I miss anything? Let me know in the comments or on Twitter.

  1. Back in 2009 I even tweeted:  ↩

    MS has Anders Hejlsberg (C#). The JVM world has Martin Odersky (Scala). Apple should work with Odersky on the next language for OS X.

    Obviously it wasn’t Odersky, but Chris Lattner, who got to be the mastermind of Swift.

  2. Good job by Apple, by the way, to have managed to keep it under wraps so well since July 2010!  ↩

  3. There is a difference with languages that have optional types, like Dart and Hack. Dynamic, optionally typed, and statically typed languages can, from a syntax perspective, look very similar. But under the hood some pretty different things take place.  ↩
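    A tiny sketch of the static side of this, in Swift (the commented-out line is mine):

```swift
// Annotation-free Swift still rejects type errors at compile time:
let n = 42               // statically an Int, no annotation needed
// n.hasPrefix("4")      // compile-time error: Int has no such method

// An optionally typed language (Dart 1.x, Hack) could accept
// similar-looking code and only fail, if at all, at runtime.
print(n)
```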


Nathan Youngman said...

With Android Studio based on IntelliJ, my bet for Android is a rebranded Kotlin.

Go can be used to write (parts of) Android apps, but I don't imagine Google officially supporting it in the near future (as much as I like Go, my current favourite).

Erik Bruchez said...

Could Google make Kotlin a success? Scala is well supported in IntelliJ and is already quite successful and liked in the industry.

Arturo Hernandez said...

I know there will be a Swift v2, if it is not already being worked on, and they will make up for a lot of the shortcomings. To me the biggest news is the strategy. Rather than releasing a better language on the side, like Microsoft did a long time ago, Apple is betting its future on it. Way to go! There are a lot of things I don't like about Apple, but this willingness to take the right risks is commendable.

Erik Bruchez said...

I agree. They put Swift front and center, and developers are kindly asked to learn it and quickly forget about Objective-C. That is pretty bold.

jameskatt said...

Swift is innovative.

From Horace Dediu http://www.asymco.com/2014/04/16/innoveracy-misunderstanding-innovation/
Novelty: Something new
Creation: Something new and valuable
Invention: Something new, having potential value through utility
Innovation: Something new and uniquely useful

Swift is something new. And it is uniquely useful. It is going to be the primary language for Apple software development.

You have to look at Apple's intent when examining Swift:

To improve on Objective-C.
To simplify the readability of the language so that young and new programmers may easily learn to program on iOS and OS X.
To create a language that can do everything from systems software to applications. This is why it does not leave its systems-software roots for higher-level features.
To integrate completely with Objective-C. This is so its programmers do not have to rewrite all of their code. Rather, there will be a gradual transition to the complete use of Swift in all Apple OS programming.
A faster, more rapid development system, e.g. with Playgrounds and the REPL. This increases developer interest in creating apps for Apple.
Faster-running apps.

The good thing is that Swift can and will be upgraded just as Objective-C has been upgraded and maintained by Apple.

Apple is capable of maintaining and growing a language. Since too many competitors simply copy Apple, I tend to see Swift as remaining closed source and owned by Apple. Apple doesn't need to become Oracle/Sun where Google simply took and abused its IP.

Apple can certainly add missing pieces if they serve to improve its purposes. Swift is designed as a general purpose mass-consumption language capable for creating anything from system software to apps. As such, not every favorite feature or idea will be suitable.

Swift is the future for Apple developers. The line has been placed in the sand. And Apple bet the company on it.

And every developer will have to learn it because soon thousands of Swift apps are going to be released by new developers hungry to compete with established developers. In one year, every one of the 9 million Apple developers will know Swift and will be programming in it. Hundreds of books will soon be written about Swift. Stanford will have courses on Swift. And more programmers will be pulled into the Swift camp. With Apple's new developer tools, Swift will usher in a new, greater era for Apple app developers and consumers.

Erik Bruchez said...

It's unclear to me whether you disagree with me on anything besides the meaning of "innovative". I for one agree with you on everything you wrote above except perhaps the meaning of that word. I respect Horace Dediu a lot, and I may or may not argue about his taxonomy after thinking more about it, but the point of my post is not to argue about the meaning of that word.

Seashell said...

I decided to check out Swift and it only took me a couple of hours to hit a total blocker -- lack of reflection capabilities for dynamically creating instances. Couple that with the fact that I could crash the Playground env about every 5 minutes, and I decided to hold off for a while.

Admittedly, I am not proficient with Xcode, but compared to using the IntelliJ products, it's like a Stanley Steamer. Hopefully JetBrains will support Swift soon.

The two things I like least about Swift are the funky named args in function calls (is there a way to avoid that?), and I think it's a total design flaw to make an Array or Dictionary mutable by making the reference to it mutable. Apple needs to rethink that NOW, before the final release, because that will be hard to fix.

In general I am very upbeat about the direction, and I really like the fact that mutability is designed in with let/var. I even like the way optional values are handled (except it's weird that you are required to use "if let ..." to test the optional value).

Erik Bruchez said...

Hopefully they will fix a few things before the final release. I realize that Swift sets out to do things, such as passing by value, which Scala doesn't pretend to do. But I like that in Scala the notions of mutable/immutable values or references, and of mutable/immutable collections, are distinct. It makes the rules easier to understand.

Also I have started to appreciate that in Scala, Option is a library feature and behaves like any other algebraic data type. So at the language level there is no funky syntax (although I would appreciate if the compiler could optimize Option!). It seems that Swift went for more syntax instead of less here.

Thibault ML said...

I am confused: why do you think "lack of garbage collection" is a bad thing? When I thought I heard it was GCed, my heart jumped in fear.
Apple deprecated garbage collection in Obj-C long ago (even before ARC), because they knew it was going nowhere good. Instead, they went with ARC, which does the job better than a GC, or even a human, most of the time.
What are your arguments for GC (memory management at runtime) being better than ARC (memory management at compile time)?

On concurrency:
While Swift itself doesn't have it integrated in the language, it is able to use GCD with blocks, just like Objective-C.
So yes, the language itself doesn't have concurrency support. But the GCD framework brings support for it.
Similarly, it is my understanding that C# by itself doesn't have concurrency support either actually. Instead, it is available in the .Net framework. Without .Net, no concurrency in C#. In that instance, I think it's unfair to compare a language to frameworks :-).
Objective-C is rarely used without Cocoa (AppKit, Foundation, etc.), C# is rarely used without .Net. I expect Swift to rarely be used without Cocoa similarly :-)

Erik Bruchez said...

> Why do you think "lack of garbage collection" is a bad thing?

You sound like I am the only person on earth in favor of a proper garbage collector, but I am not. Garbage collection and ARC both have benefits and drawbacks. See for example:


I have spent most of my programming life with proper garbage collection, and I'd rather not go back to manually working to break cycles. From this point of view, I regard reference counting in general, and ARC in particular, as inferior solutions.

Even John Carmack is in favor of garbage collection in the context of games (jump to about 24 min into the video):


Not even talking about games, for most applications I think that proper garbage collection is more desirable because it frees the programmer's mind an extra notch.

I suspect that the main reason Apple introduced ARC and kept it for Swift is that garbage collectors don't play well in environments which contain memory handled by different memory management systems, including C and C++ code. It seems like at some point Apple thought that garbage collection would work and introduced it on OS X for Objective-C, but then realized that interoperability issues with libraries not using garbage collection were problematic. I am not sure what the whole story is here.

There is a performance argument to be made too, and it probably goes both ways.

In short I can see how it makes sense for Apple to have gone with ARC at this point given what they already have in their platform. But it doesn't mean I have to like that Swift doesn't have garbage collection.

> On concurrency

I agree with you that features do not necessarily need to be in the language proper. Scala is a great example of this, in fact one of the languages which pushes this the furthest: futures, immutable and parallel collections, even await/async are not built into Scala-the-language at all. They are all at the level of libraries.

So I didn't mean to say that those features have to be directly embedded into the language. I mean language plus libraries. And at this point Swift doesn't have much in this respect, and I was just lamenting the fact. Of course, I expect Apple will add more concurrency support to their frameworks.

Now still, the language itself needs strong built-in support for a few things to support upcoming libraries. Immutability comes to mind. Swift has some related features, but I don't yet know how good that support really is.

Thibault ML said...

Right, I see. Thanks for sharing :-)

I personally have never liked GC. I guess it's because I started with C, where I've always tried to be very rigorous with my memory management, and I found GC to be both "too magical" and slowing down things when I'd rather it doesn't. Slightly too unpredictable, in the end.
ARC, I still have issues using it, for the same reason of "I don't know exactly how it happens". But I know it more, I understand it more, and I can hint at the compiler on how to do things if I know it's not going to do things I want.
Also, Xcode includes Clang's analyser which is really, really good at detecting potential leaks. Not flawless, but still really good.

In my point of view, while GC has advantages over ARC, I still think that ARC is a better, safer solution. But again, that's just my point of view, linked to my preference or understanding how memory is managed :-)

I guess, in the end, whatever suits the developer, but I still would disagree that "not having garbage collection" is a negative side of Swift. Especially given the fact that the language is supposed to run on phones, within memory-limited environments (granted, it's less limited than it used to be, but still).

TAH2 said...

Indeed. My coworker would say that anyone who is stuck on garbage collection isn't a proper engineer (probably been doing Java since college, and wouldn't know what a pointer is even if it bit him on the nose).

GC does not work in systems programming, real-time systems, embedded systems, or anywhere you need determinacy. Phones are one such example. The little you trade off for ARC provides a much broader target language.

Keep in mind that objC HAD GC, Apple dropped support for GC in favor of ARC for a reason.

TAH2 said...

I would say it is very innovative in that it is the first real high-level language natively compiled for systems & application development. It blurs the lines between JIT/interpreted languages like Ruby or Python and compiled languages like C. I've been searching for something that can seriously replace C or C++ in embedded and systems development for years... this is the first thing I've seen that could finally dethrone C. I hope they open source it.

Swift is the first serious-looking systems-language replacement for C, ObjC, or C++ I've seen outside of Rust. Rust is still years from application-ready. Apple said that in 24 hours they had 200k downloads of the spec; I bet that is more than Rust has ever had in its whole history. Chris Lattner + Apple's backing + a horde of iOS devs could turn this into the most serious "new" language in decades.

Erik Bruchez said...

I am not sure if you intended to reply to my comment or to the grandparent. My intent was not to disqualify garbage collection, just to recognize that there exist benefits to ARC. But there are also drawbacks.

It is one thing to recognize the existence of benefits and drawbacks, and another to make sweeping statements as you do. It is not a matter of being "stuck" on garbage collection, and I would say that your coworker is not thinking very far, while along the way insulting millions of extremely knowledgeable people using garbage collected languages every day.

Second, if GC doesn't work on phones, you better tell Google quick that Android phones don't work. Yet there are "one billion active Android users and counting" (http://www.zdnet.com/google-io-android-stands-at-one-billion-active-users-and-counting-7000030881/).

Erik Bruchez said...

I have now decided not to use "innovative" anymore because it just isn't a very useful word. But I agree with your sentiment that the language places itself in an interesting middle ground, and there is no doubt in my mind that the language will be successful.

By the way, I think it would be successful even if it were not very well designed, because it is so much closer to what developers want than Objective-C, and it is going to be the main language for developing on iOS and OS X no matter what. What is unclear is whether the language will conquer spheres outside of Apple.

Rafael said...

Hi Erik

Did the new firmwares make any difference in reducing the lag? Thanks!

Erik Bruchez said...

No, no difference.

Jako said...

It's not the "first real high-level language for native compiled systems".
Look at Nimrod, D, Go, Julia, Rust.

Jako said...

C# does have the async/await keywords.
It's not strictly about concurrency but asynchronous execution, but anyway Swift doesn't have it.

Clint O Baxley said...

I feel the same way. At work I have a crappy video card so I only get 14 Hz, and I program on it fine all day with no input lag of any kind. I have the 50-inch one at home and can play video games at 1920x1080 at 85 Hz with dual 7970s. I friggin' love the thing for gaming. It is awesome to have a console, debugger, and tall-ass browser window up there too. You do have to turn the mouse speed all the way up though, because it is like 500 miles from diagonal to diagonal. LOL

Clint O Baxley said...

You can buy the 39" one for just over 300 bucks now. I can think of nothing in the world worth $300 more. I love these screens.

TAH2 said...

First, more "better languages" is better, so I'm not dissing any of these efforts. However, as systems languages:

Go is not a very high-level language; it is going after the 'C' market of language simplicity, but has worse performance and a bigger memory footprint. They totally blundered by selecting an automatic garbage collection system, which disqualifies it from serious embedded use, so it is not a systems language contender. Go is seeing most use in server development.

D apps can avoid garbage collection, but it's a second-class citizen. The language is more of an evolution of C++ than a clean rethinking based on newer languages like Ruby or Haskell. And it relies on garbage collection for dynamic memory management (except scope classes). So use D if all you want is a better C++.

Nimrod isn't very developed; in my opinion it has a single developer and a small community. It still compiles to C, which then has to be compiled again. That puts a language like Swift light-years ahead in toolchain development (which is going to be a fundamental decision point for any serious systems development). Nimrod is also sketchy on direct memory access, and appears to lack reading/writing from arbitrary memory addresses. This is important because standalone hardware may not have an MMU, and access is through memory-mapped I/O.

Julia appears to be a scientific language, a replacement for Fortran and scientific C code. But it has been noted by others more knowledgeable than I to have no business in a systems language comparison.

Rust IS very interesting (and quite similar to Swift in its approach of a very complete multi-paradigm language). But many years into its development it is still experimental (their term, not mine). It also seems to be a casualty of open-source design-by-committee, which I think is hampering its progress. I suspect Rust will be eclipsed by Swift, as Swift has a strong raison d'ĂȘtre and Rust doesn't. Rust also doesn't have the live-coding JIT abilities, so it loses part of the ability to give a user a Java/Ruby/Python experience with C performance.

Jako said...

You seem to take the position that only a non-GCed language can be called a "systems language".

Well, it depends on which type of "systems" you're referring to.

Any HLL sacrifices performance control to gain productivity.

Different languages do this to a different degree.

If you wanted the ultimate control, you'd have to use Assembly. And the latter is indeed the only solution for some embedded use.
But for systems that run on stronger hardware, a GCed language can still give enough control.

I don't see D as "just" an evolution of C++.
It has many powerful features inspired by C# and other modern languages, which have no resemblance in C++.
And it has very fast compile times, due to a rethinking of modularization and linking.

In any case, Swift is a proprietary language, which depends on the Objective-C runtime, also a proprietary piece of technology.

So in what sense could it "unthrone C"?

It can't be used where C is used currently, and I hardly see that changing.

I'll bet on the other open initiatives as better candidates to replace C (if ever).

TAH2 said...

Systems language typically means a language in which you could write operating systems, drivers, hardware interfaces, OS plug-ins, modules, embedded systems, etc.

I can't think of many systems-level problems where GC latency, RAM use, etc. are considered acceptable. Somewhat by definition, embedded systems, usually including smartphones, don't have swap disks because there is no hard drive. All of a sudden GC memory use is a big damn deal.

As for Swift being proprietary, Apple could prove me wrong, but it has open sourced every other important compiler technology it has built in the last 10 years. In fact, Apple's compiler tech, LLVM and Clang, is displacing GCC throughout the industry... and is quite possibly among the most important advances in compilers in the last decade.

Jecht_Sin said...

I think plenty of people have told Google that Android does indeed lag. And GC seems to be one of its causes.

Jack Attack said...

I stopped reading when I found out you were programming on a Mac. No matter if your opinion is valid or not, that fact alone makes me not take you seriously.
If you hate mouse lag that much, wait for DP 1.3 and its 120 Hz 4K capabilities. Don't rely on Apple and their subpar hardware. All you need is DisplayPort 1.2 to get 60 Hz, and MacBooks do have a Mini DP port; they should have had it naturally without MST. 60 Hz is garbage as well though. 120 Hz should be the standard in my opinion, and 240 Hz should be for the enthusiasts. Your eye can process up to 300 fps after all, so 240 Hz is damn close.
Also, 30" and up should be the standard for 4K monitors. Since companies don't make monitors that large, we get stuck with TVs. However, the benefits of 4K aren't truly seen until roughly 28", and aren't fully present until 35" and up. That's given the average distance between consumers and their monitors and the megapixels the eye can process. Through and through, 5K has a closer amount to the eye's megapixels in a peripheral-vision sense. It's not until 30K that the eye will never differentiate it from real life in full magnitude. 4K and 5K are still beauties to behold and stomp on 1080p, but there is much to go.

Erik Bruchez said...

The Seiki was just an early experiment; the hardware was just not ready. I am willing to wait. 30" and larger monitors are unfortunately out of fashion. That notwithstanding, we'll probably need a year or two to have the right mix of digital interfaces, graphics cards, and monitors to make the 4K+ experience smooth.

If I needed something now I would consider a 5K iMac, which is gorgeous, but of course not portable.

The vast majority of programmers in the circles I know use Macs. I would say that your broad dismissal of the Mac as a developer computer is misinformed.

Samantha Atkins said...

Some of this is wrong. Lisp is very old but fully dynamic and produces machine code directly. It is faster than Java. It pioneered on-the-fly JIT machine code generation.

That Swift is fully owned by Apple is its biggest detriment. I write middleware and server code. The latter in particular cannot be for OS X, as OS X has no presence on modern compute clouds. None, zero.

Apple had MacRuby, built on top of OS X frameworks, Objective-C, and Clang, in house, and threw away the opportunity to get fully behind it. This would have brought the entire Ruby code space into Apple's developer tool kits while losing very little in speed if tweaked sufficiently. This project was developed and tuned for some years in house. It would also have meant that developers would not have to choose between being Apple walled-garden experts and building for the broader class of machines and OSes.

ARC sucks for really fluent programming. Most of the anti-GC arguments I have seen from Apple or its apologists are largely spurious.

Don't like straight Java? Well, Clojure or JRuby or Scala can be made to work easily enough on Android. Dart is a silly joke though. It has the worst parts of JavaScript and Java and few of the good parts of either, and runs on zero mainline browsers.

Erik Bruchez said...

> Some of this is wrong

I am not sure what is, as you don't say in your comment.

> Lisp […] is faster than Java

Is it? Citation (or rather numbers) needed.

> That Swift is fully owned by Apple […]

I am hopeful (but by no means certain) that Swift will be open sourced. That in itself doesn't guarantee success on the server, but if open sourcing happens, Swift could become usable for middleware.

After that, success on the server would depend on the availability of adequate libraries and frameworks (the JVM and JavaScript are quite well covered in this respect already), if Apple doesn't provide any.

But if I were Apple, I would certainly try to play the server game over time. That, and Swift-to-JavaScript compilation too (as Clojure and Scala do now).

> Apple had MacRuby

It seems that Apple has decided that a statically-typed language would better serve their (and their developers') needs. In that case, Ruby wouldn't have cut it.

> ARC sucks

I agree with this, but also understand the constraint of the existing Objective-C libraries and runtime.

> Dart is a silly joke though

I am not sure it's that bad, but I think that it has too little ambition and caters way too much to Java programmers. However, it compiles to JavaScript and therefore runs in all browsers (absent one of its main benefits, which was to be better performance, although some languages which compile to JavaScript can actually be faster than JavaScript due to aggressive optimizations).

Jack Attack said...

It is an early experiment, and the hardware still needs time to come to complete fruition with its capabilities.

That 5K is not really large enough in my opinion; although the screen is nice, the lack of being able to input your own signal kills it for me.

I'm not misinformed. Macs are overpriced hardware and mainly a symbol of stature with folk who know nothing about computers. You can use an Apple OS for programming, but if you use a Mac, most of the intelligent coders I know, including myself, will think you are some social activist who likes to waste money.

They use Intel parts and Nvidia/AMD GPUs, which is what "PCs" are defined as being made up of; by all factors, Apple makes PCs (it is a common misconception to call them "Macs", a name that got attached to them because of Steve Jobs' brilliant marketing tactics). A true programmer will know the basics of the hardware as well and how to use it to their advantage. For example, most Macs come with small cache sizes because they don't use Xeons or the highest i7 processor, and also lack ECC memory (in all but their top-tier lines, but other PC laptops will have it, along with better GPUs and more RAM), which are two very necessary things for programming. But you know, some people don't care to capitalize on hardware to speed up their activity; instead they buy a Mac over the specialized hardware that is much more felicitous for their projects.

Unless you get their Mac Pro stuff, then they up the hardware slightly, but it's not as good as a custom-built machine. Their marketing is very hard to avoid: they labelled the Mac Pro as cheaper than a custom-built equivalent, except the FirePro GPUs they used were not even close to their desktop counterparts. Same architecture, yeah, but fewer stream processors and lowered clock rates. But whatever, I hate getting into this; everyone on the internet seems really uninformed about the reality of Apple and their "proprietary" hardware that offers zero benefit over a custom-built equivalent, and I usually just get yelled at and mocked by those who are too stubborn to go and learn the facts.

Davide Schembari said...

I stopped reading about Swift as soon as I saw Optionals. Seriously? A 21st-century language should go towards 100% OO, not add boilerplate code to replace other boilerplate code. Definitely a step forward over Objective-C, though; at least you don't need an Egyptologist to read it.

Anon Adderlan said...

"Systems language typically means a language you could write operating systems, drivers, hardware interfaces, OS plug-ins, modules, and of course embedded systems, etc."

"I can't think of many systems level problems where GC latency & RAM issues are considered acceptable."

Latency, Throughput, and Memory are always a consideration, but unless you need hard real-time response, GCs exist that already provide (at LEAST) soft real-time guarantees. Manual Memory Management can NOT make those guarantees, as it has no way of preventing degenerate destructor cascades with high latency (yes, MMM has latency too). As for memory use, Go ends up being very close to C for the same programs, sometimes using even less memory, despite being Garbage Collected.

Now I'm not saying that any of these methods are better or worse, but this is one of those subjects with high emotional attachment and little genuine analysis. Memory Management (and how it impacts Latency) is much more complicated than just deciding between Manual or Automatic.

Anon Adderlan said...

"I personally have never liked GC."
"But again, that's just my point of view, linked to my preference or understanding how memory is managed :-)"
"I guess, in the end, whatever suits the developer"

Yet another example of how technology decisions are being driven by cultural considerations as opposed to engineering ones, and that's the problem. It's quite possible that GC has all the problems people say it does, but that's not why you're choosing not to use it. As engineers we need to be using the best tools for the job, not the most familiar or comfortable.

Anon Adderlan said...

"My coworker would say that anyone who is stuck on garbage collection, isn't a proper engineer"

And once again issues of status and Scotsmen involve themselves in what would otherwise be a useful technical discussion.

That said, anyone who is religiously stuck on a single particular method over actual results and requirements isn't a proper engineer.

"GC does not work in systems programming, real-time systems, embedded systems or anywhere you need determinacy or high utilization of limited memory."

See my previous response to you.

To add, both automatic and manual memory management take the same length of time to free the same amount of memory. The difference is that one (theoretically) gives the programmer control over when and in what order it is freed. The thing is, GCs are often better at making those decisions, in much the same way that compilers are better at making optimizations. GC is also not one monolithic technique: there are lots of different strategies which can be implemented, some of which ARE pauseless with predictable latency. And GC doesn't relieve you from considering memory either; it's just that the techniques used to manage it are different.

Personally, I believe a GCed language that gives you the ability to suggest how memory should be managed will get us 99% of what the best programmers in the world can achieve by doing things explicitly. And I see no reason why such a language could not be used for 'systems' programming.
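As a toy illustration of "suggesting" how memory should be managed in a GCed language, here is a hypothetical object pool in JavaScript (the ParticlePool name and API are made up for this example): the GC still owns the memory, but the allocation rate drops because objects are recycled instead of churned.

```javascript
// A tiny object pool: instead of letting the GC chew through millions of
// short-lived objects, we recycle them explicitly. The GC still manages
// the memory; we merely suggest a reuse pattern that keeps allocation low.
class ParticlePool {
  constructor() { this.free = []; }
  acquire(x, y) {
    const p = this.free.pop() || { x: 0, y: 0 }; // reuse, or allocate once
    p.x = x; p.y = y;
    return p;
  }
  release(p) { this.free.push(p); }
}

const pool = new ParticlePool();
const a = pool.acquire(1, 2);
pool.release(a);          // back to the pool, not to the garbage collector
const b = pool.acquire(3, 4);
console.log(a === b);     // the same object is reused → true
```

This is exactly the kind of escape hatch I mean: the default is automatic, but hot paths can opt into explicit reuse without abandoning GC for the rest of the program.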

"Apple dropped support for GC in favor of ARC for a reason."

Given the increasingly poor quality of Apple software as of late, I wouldn't put too much weight behind their decisions when it comes to software :)

Anon Adderlan said...

It is.

But as crappy as the Dalvik VM is, GC isn't anywhere near the primary cause of lag on Android, and its effects only become obvious once you reach a certain framerate. This is why certain apps are framerate-locked at 30 fps. Any lag you see at or under that somewhat ambiguous threshold is due to fundamental flaws in the Android OS itself.

Thibault ML said...

I agree with your point. However, I think that is rather difficult to avoid. Because of different backgrounds, different philosophies, and different needs, "the best tool for the job" will vary. If you're the only developer on one app, you'll use whatever works best for you.
And for that, some developers may be happy to reconsider their point of view (which I can be; re-reading my previous comments, I find them a bit extreme), trying to see if things they don't know might actually work better for what they are trying to do.

But in the end, people think differently and see things differently; we're all different humans, and "what is best for the job" sadly doesn't have one unique answer.