These points are all very true, and I admit I haven't done much more with Go than play around yet. But there are a few reasons I feel I don't like Go; or at least, there are warts that will probably limit how much I'm going to like it.
Nil pointers are #1 on that list. Tony Hoare called them a "billion dollar mistake"[1]. In Go, as in C, C++, or Java, any pointer can be `nil` (similar to `NULL`) at any point in time. You essentially always have to check; you're constantly just one pointer dereference away from blowing up the program. Since ways of solving this problem are known, I find this hard to swallow in a new language.
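A minimal Go sketch of that failure mode (the type T is made up for illustration):

    package main

    type T struct{ n int }

    func main() {
        var t *T     // t is nil, and the compiler is satisfied
        println(t.n) // runtime panic: nil pointer dereference
    }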
Compare Rust where, unless you have explicitly used the unsafe features of the language, all pointers are guaranteed non-nil and valid. Instead of a function returning a nil pointer, it returns an `Option` type which is either `Some ptr` or `None`. The type system guarantees you have considered both possibilities so there's no such thing as a runtime null pointer dereference. Scala has a similar `Option` type, as does Haskell, calling it `Maybe`. In 2013 I don't want to still constantly check for a nil pointer, or have my program blow up at runtime if I forget.
The second disappointment is that when I looked into it, it seemed there are ways to call C functions from Go code, but no working way to call Go from C code. Maybe that wasn't a goal at Google, but it seems like a missed opportunity. As a result, you can't use Go to write an Nginx module, or an audio plugin for a C/C++ host app, or a library that can be used from multiple languages.
I think there is a real unmet need for a higher-level, safer language you can use to write libraries. Imagine if zlib, or libpng, or the reference implementations of codecs (Vorbis, Opus, VP8) could be written in something like Go. Or spelling correction libraries. Currently we have two tiers of libraries: those written in C/C++, which can be used from any language under the sun (Python bindings, Ruby bindings, Perl bindings, PHP bindings...), and those written in high-level dynamic languages (Python, Ruby, Perl, PHP, ...) which can only be used by the same language. We need a middle ground. C isn't expressive or safe enough to deserve to be the lingua franca like this. And Go is tantalizingly close to replacing it, but not quite.
Every interface value is stored as a tuple of a type (pointer) and a value (pointer). The interface tuple returned by a() is ([]int, nil); comparing this to nil, which is (nil, nil), returns false(!).
So Go doesn't just have nil pointers; it has nil pointers that sometimes carry a 'hidden' type, and those do not compare equal to the literal nil.
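For illustration, here is a minimal Go sketch of the situation being described (the function a() is a reconstruction, not the original poster's code):

    package main

    import "fmt"

    // a returns a nil []int wrapped in an interface value. The
    // interface now pairs a concrete type descriptor ([]int) with
    // a nil value, so it is not the (nil, nil) interface.
    func a() interface{} {
        var s []int // s is a nil slice
        return s    // interface tuple: ([]int, nil)
    }

    func main() {
        v := a()
        fmt.Println(v == nil) // prints false: ([]int, nil) != (nil, nil)
    }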
I like the language a lot, but that is definitely a part I dislike. It doesn't feel consistent with the simplicity and obviousness of the rest of the language, and it is error prone. Working full time with Go, this has already cost me an hour of debugging two or three times.
> The interface tuple returned by a() is ([]int, nil); comparing this to nil, which is (nil, nil), returns false(!)
It's certainly a subtlety that can bite you, but I'd say that in this form it is arguably more consistent: assigning a typed value that happens to be nil to an interface is not the same as assigning nil to the interface; those two variables are not equal and should be treated as such.
> those two variables are not equal and should be treated as such.
In that sense. Don't get me wrong: I'm not saying it fixes other problems with nil pointers, and yes, it introduces an extra possibility for dereference problems. Just like you're allowed to call methods on a nil pointer if it's a nil pointer to a type that has those methods.
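As an aside, a minimal sketch of that nil-receiver behavior (the type T and method Greet are hypothetical, just for illustration):

    package main

    import "fmt"

    type T struct{}

    // Greet has a pointer receiver; calling it on a nil *T is
    // legal as long as the body never dereferences the receiver.
    func (t *T) Greet() string {
        if t == nil {
            return "called on a nil receiver"
        }
        return "called on a real value"
    }

    func main() {
        var t *T               // t is nil
        fmt.Println(t.Greet()) // no panic: prints "called on a nil receiver"
    }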
"Compare Rust where, unless you have explicitly used the unsafe features of the language, all pointers are guaranteed non-nil and valid. Instead of a function returning a nil pointer, it returns an `Option` type which is either `Some ptr` or `None`. The type system guarantees you have considered both possibilities so there's no such thing as a runtime null pointer dereference. Scala has a similar `Option` type, as does Haskell, calling it `Maybe`. In 2013 I don't want to still constantly check for a nil pointer, or have my program blow up at runtime if I forget."
So how is nil checking different from writing a function that pattern matches against a Maybe but only matches the "Just" case?
We have been using Scala lately for a couple of web services that are now running in production at our startup.
The difference is huge, because Option[T] references are type-checked at compile time. Whenever a reference can be either Some(value) or None, you are made aware of it and forced to either handle it (by giving a default value, or throwing a better-documented exception) or simply pass the value along as is and make it somebody else's problem.
Option[T] in Scala is also a monadic type, as it implements filter(), map() and flatMap(). It's really easy and effective to work with. Unlike "null", which isn't a value you can do anything with other than equality tests, None is an empty container whose contained type you know at compile time, and in Scala it's also an object that knows how to do filter(), map() and flatMap().
My code is basically free of NullPointerExceptions. This doesn't mean that certain errors can't still be triggered by null pointers, but those errors are better documented. What's better? A NullPointerException or a ConfigSettingMissing("db.url")?
Of course, to tell you the truth, Option[T] (or Maybe, as it is named in Haskell) is only really useful in a static language. In a dynamic language such as Clojure, especially if you have multi-methods or something similar, Option[T] is less useful. And before you ask: no, Go is not dynamic, and people saying that Go feels like a dynamic language don't really know what they are talking about.
>My code is basically free of NullPointerExceptions. This doesn't mean that certain errors can't still be triggered by null pointers, but those errors are better documented. What's better? A NullPointerException or a ConfigSettingMissing("db.url")?
Almost always it is a matter of 2 seconds to find the source of a nil pointer error. Given that I would almost never forward raw error messages to the user, I cannot really see a gain.
However, having a language that combines this Scala feature with Go's exception-free error handling would be awesome, and a true solution that would make software run more reliably and with fewer crashes.
> Almost always it is a matter of 2 seconds to find the source of a nil pointer error
Either you're some kind of a super-human, or your code bases are really tiny. Yes, you usually can figure out the trigger of a null pointer exception, but not the source that made it happen and with complex software the stack-trace can get a mile long ;-)
The biggest problem with null pointer exceptions is precisely that (1) they get triggered too late in the lifecycle of an app, (2) such errors are unexpected, non-recoverable and sometimes completely untraceable and (3) you need all the help you can get in tracking and fixing it.
Either way, throwing better exceptions is just one of the side effects of using Option[T], because in 99% of the cases you end up with code that functions properly without throwing or catching exceptions at all. And you completely missed my point, focusing only on a small part of it, which is also the most insignificant benefit of Option/Maybe.
> However, having a language that combines this Scala feature with Go's exception-free error handling
First of all, it's not a Scala-specific feature; the Maybe/Option type has been used and proven in other languages, such as Haskell and ML. That the Go language creators took no inspiration from these languages is unfortunate.
Also, people bitching about Exceptions have yet to give us a more reliable way of dealing with runtime errors. The only thing that comes closest to an alternative is Erlang and yet again, the Go language designers took no inspiration from it.
>Either you're some kind of a super-human, or your code bases are really tiny. Yes, you usually can figure out the trigger of a null pointer exception, but not the source that made it happen and with complex software the stack-trace can get a mile long ;-)
Ok even if the stack trace is 10 miles long, you just need to go to the end, right? :P
Anyway, so an exception gets thrown, and Scala forces you to explicitly throw an exception, am I right? How does the other case not crash your program unless you catch it?
>Also, people bitching about Exceptions have yet to give us a more reliable way of dealing with runtime errors. The only thing that comes closest to an alternative is Erlang and yet again, the Go language designers took no inspiration from it.
Go uses panic (Go-speak for exceptions) for really bad errors: out of memory, nil pointer dereference... You can catch them much like exceptions in other well-known languages.
The only difference: your catch blocks aren't cluttered with handling for non-exceptional errors like a file not existing. You are forced to handle those explicitly. Why is this good? Apart from those truly exceptional errors, the state of your program is much easier to determine.
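A minimal sketch of that split, assuming a hypothetical mustDeref helper: ordinary failures travel as error values, while a nil dereference panics and can still be recovered:

    package main

    import (
        "fmt"
        "os"
    )

    // mustDeref panics on a nil dereference; the deferred recover
    // converts the panic back into an ordinary error value.
    func mustDeref(p *int) (n int, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered: %v", r)
            }
        }()
        return *p, nil
    }

    func main() {
        // Non-exceptional errors are plain values you must handle:
        if _, err := os.Open("no-such-file"); err != nil {
            fmt.Println("explicit error:", err)
        }

        // Truly exceptional errors panic, yet can be caught:
        if _, err := mustDeref(nil); err != nil {
            fmt.Println(err)
        }
    }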
For Go? The biggest complaints about its design are, and have always been, that its designers ignored or discarded the previous 30 years of PL work (both theoretical and practical) when creating it.
Not just Go. But in Go a lot of problems could be solved by doing things the Erlang or ML way... and then there are the new problems they've invented, like enforcing things that don't matter instead of things that do.
>and then there are the new problems they've invented, like enforcing things that don't matter instead of things that do.
Never heard such complaints from users who have used it for a few months. I assume you are talking about unused imports and variables. Actually, it helps a lot because it keeps your code clean and clear.
Pointers can be null; this is a well-known issue. So when you use pointers, you had better check that they are not null. Even better is to have some kind of coding convention or pattern that you follow to prevent this.
Still I don't understand why you would prefer MyCustomException to crash your catch-less program instead of NullPointerException.
Ok, you checked your pointer to see it isn't null. Then you passed it to function foo. Which passes it to function bar. Do foo and bar have to check it again?
Re-checking maintains dead code that will never fire, is untestable, and costs runtime. So you will probably not want to re-check the pointer at every point. However, the compiler doesn't help you here: if you ever decide to call foo or bar from some other point without the NULL check, you will get a crash.
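A minimal Go sketch of that trap (User, foo and bar are hypothetical names): the types say nothing about which callers have checked, so one unchecked call path is enough to crash:

    package main

    import "fmt"

    type User struct{ Name string }

    // bar assumes u is non-nil, but nothing in its type says so.
    func bar(u *User) string { return u.Name }

    // foo just forwards the pointer; it can't tell either.
    func foo(u *User) string { return bar(u) }

    func main() {
        u := &User{Name: "alice"}
        if u != nil { // checked once here...
            fmt.Println(foo(u))
        }

        var v *User
        fmt.Println(foo(v)) // ...but this unchecked call panics at runtime
    }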
Type safety can solve this. It does not convert "NullPointerException" to "MyCustomException". It converts "NullPointerException" to a compile-time type error (expected Foo, got Maybe Foo. Or: Unhandled pattern in case statement: Nothing).
The trick is simply to differentiate between a pointer that is guaranteed to not be null and one that isn't. Then, disallow using a nullable pointer as a regular one and force a check.
>Ok, you checked your pointer to see it isn't null. Then you passed it to function foo. Which passes it to function bar. Do foo and bar have to check it again?
I guess not. What I'm saying, therefore, is: if you use a language that does a lot of stuff, you need to find a convention for your project. One may be: check for nil after assigning variables.
>costs runtime
Not at all.
Anyway, looks like I need to try Scala and see for myself. (Scala installed: check. Hello World: check.)
Checking nil after assigning variables is not helpful for the reason you mentioned earlier: If you check for it and it is nil where it shouldn't be -- you're merely converting one runtime error (null exception) to another (different exception).
If however you use types to distinguish whether it can be nil or not, you simply eliminate the error completely at compile-time.
Glad you're checking it out!
I don't know Scala, I'm a Haskeller myself, but I believe it does get nulls more correctly. It might have bad old null in there too though because of Java interop.
> However, having a language that combines this Scala feature with Go's exception-free error handling would be awesome, and a true solution that would make software run more reliably and with fewer crashes.
Do you realize that this is actually the case? Every library/API I have seen in Scala so far uses the appropriate abstractions like Option/Either/Try/Validation/... and restricts exceptions to the most exceptional faults.
But anyway, if I had to choose between Go's horribly broken approach of returning multiple values and exceptions, I'd choose exceptions every day. Exceptions are ugly, but at least they are not blatantly wrong like using tuples for error codes.
Indeed, you can end up with an indeterminate state. I'll tell you one thing, though: writing the `if err != nil` boilerplate in Go is isomorphic to writing try { ... } catch { ... } around each function call in a way that keeps your state clear. The difference with the former is that it reminds you to do this every single time.
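For instance, a sketch of that boilerplate in practice (readConfig, Config and the file name are made up for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
        "io/ioutil"
    )

    type Config struct {
        DBURL string `json:"db_url"`
    }

    // readConfig follows every fallible call with an explicit err
    // check, the Go analogue of wrapping each call in its own
    // try/catch so the program state stays determinate.
    func readConfig(path string) (*Config, error) {
        data, err := ioutil.ReadFile(path)
        if err != nil {
            return nil, err
        }
        var cfg Config
        if err := json.Unmarshal(data, &cfg); err != nil {
            return nil, err
        }
        return &cfg, nil
    }

    func main() {
        cfg, err := readConfig("config.json")
        if err != nil {
            fmt.Println("could not load config:", err)
            return
        }
        fmt.Println("db url:", cfg.DBURL)
    }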
> Indeed, you can end up with an indeterminate state.
No. I think that claim is hysterically funny, considering that Go developers almost never check all FOUR states of Go's style of error handling (the value and the error can each independently be nil or non-nil).
The problem Go is solving here wouldn't even exist if they had designed/used a better language in the first place.
Maybe Go people should stop drinking so much Kool-aid, because they sound like all these Node.js-ninja-rock-star kids who think that they revolutionize asynchronous programming while they reinvent threads, badly.
There is a huge difference: null/nil is a valid value for a pointer, but Option[string] is not a valid value for a string argument, so the compiler forces you to deal with it.
How is that any different from always checking it? When you program in C, you essentially always have to check it; when you program in Scala/Haskell/etc. you only have to check it once.
In Rust at least, pattern matching on enums must consider all possible cases. Option is just an enum, so it's a compiler error if you don't handle the None case.
Ah, I see. I was confused by Haskell not doing that by default... at least the last time I wrote code in it.
Is there actual data showing that null pointers cause bugs in production software? I always thought they were just a symptom of lazy programmers, and no language can fully protect against that.
Witness my awesome Haskell snippet of code:
    foo :: Num a => Maybe a -> a
    foo (Just x) = x + 2
    foo Nothing  = undefined  -- explicit bottom: crashes only if evaluated
Ok, so it's somewhat better, since the laziness is now explicit and cannot happen so accidentally.
Anders Hejlsberg estimates that 50% of the bugs in C# and Java are due to null dereferences.
Your example illustrates the unsafety of undefined, not the unsafety of nulls/Nothing. And you can of course grep for use of partiality in Haskell code and get warnings about partiality in your own functions.
I've seen NULL cause a lot of trouble in production in every setting I've been in.
It is very rare to see people mis-handling a Maybe value in Haskell, simply because you have to be explicit about ignoring the Nothing case.
Also, in Haskell, if you get a Maybe value it is a very clear indication that the Nothing case actually exists and you have to handle it. In C, C#, Java, Go, when you get a reference, it is unclear whether it could be null or not in practice. Checking for null when it isn't warranted is dead code you never test. Avoiding checking for null risks missing checks in cases you actually need to check. All of this is simply not a problem when the types don't lie.
I believe graue was referring to the case when you are not using such a type in the code; then you have a guarantee that the value is non-null. The purpose of the Some/Option/Maybe types then becomes to indicate when a value can be null, but outside that wrapper type it is guaranteed to be non-null. So code where it must always be non-null does not have to confirm that that's the case. I think it is less about efficiency and more about not having to worry about it.
To me it's about the existence of the nil pointer itself. If there is a pointer, it is guaranteed to be valid by the type system. The other case (which would be represented by a nil or null pointer in other languages) is represented by the "None" type. There's no null pointer to dereference (and no way to blow up).
I don't know that this is 100% a problem with the language that cannot be reasonably coped with.
Is it an option to write your own code so that it does not return nil pointers, and only check when you are interfacing with code which might not give you the same courtesy? In other words, in order to avoid driving yourself crazy with paranoid checks everywhere (whether they are done by the runtime or by you), maybe you can defend the invariants at the 'perimeter', as in the sketch below.
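A minimal sketch of that perimeter idea in Go (handleRequest, Request and process are hypothetical names):

    package main

    import (
        "errors"
        "fmt"
    )

    type Request struct{ Path string }

    // handleRequest defends the invariant at the perimeter:
    // nil is rejected here, so everything it calls may assume
    // a valid pointer.
    func handleRequest(req *Request) error {
        if req == nil {
            return errors.New("nil request")
        }
        process(req) // internal code: no further nil checks
        return nil
    }

    // process trusts its caller and never re-checks for nil.
    func process(req *Request) {
        fmt.Println("serving", req.Path)
    }

    func main() {
        fmt.Println(handleRequest(&Request{Path: "/index"})) // <nil>
        fmt.Println(handleRequest(nil))                      // nil request
    }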
For that matter, there are going to be many places in many projects where nil pointers are going to be exceptional enough that it is OK to shut down when they occur. There's not always a need to check and handle at points where it will be a rare event and not such a big deal when it does happen.
It isn't necessarily a virtue for a program to keep on running when its contracts can't really be fulfilled any more due to the funkiness of its environment or dependencies, so sometimes the best way to handle an error is to just stop and give debug info rather than checking and handling.
I completely agree that it is a huge missed opportunity not to be able to call into Go from C code and I can understand complaints that mandatory gc might prevent Go from replacing C in some domains.
"For that matter, there are going to be many places in many projects where nil pointers are going to be exceptional enough that it is OK to shut down when they occur. There's not always a need to check and handle at points where will be a rare event and not such a big deal when it does happen."
It seems strange to me to create a modern statically typed language that by design doesn't prevent the most common type error (null passed to a function that doesn't handle null), especially when it's so easy to add null safety to the type system.
In C/C++/Java, I need to document in relatively verbose English if a public function/method I write won't handle null. References/pointers that by default aren't nullable are safer, but they also optimize (less documentation and perhaps less machine code) for the common case of functions that don't want to have to deal with null.
I agree that in many many circumstances, nulls are exceptional cases, and I think this is a good thing. At least in the C++ code I write, nulls are rare, so I handle nulls (perhaps by throwing) a small number of places at the borders and then create references from the pointers for internal use. (It's great that null references are undefined behavior in C++.) That way, I get nice stack traces near where unexpected nulls are introduced and I don't have to feel guilty/lazy about not properly handling nulls in my code.
You seem to think non-nullable references force the programmer to use extra checks all over the place. The opposite is true, at least when writing code that someone else might possibly call.
> You seem to think non-nullable references force the programmer to use extra checks all over the place.
No, the other way around: the demand for extra checks all over the place seems to demand non-nullable references. I probably wrote about this unclearly.
> I completely agree that it is a huge missed opportunity not to be able to call into Go from C code and I can understand complaints that mandatory gc might prevent Go from replacing C in some domains.
As rsc has pointed out a few times, this problem is just a matter of someone doing the work to make it happen, not a fundamental limitation of the language itself.
>As rsc has pointed out a few times, this problem is just a matter of someone doing the work to make it happen, not a fundamental limitation of the language itself.
Yes, but as neither he nor any other Go designer went ahead and did it the problem exists.
Why would you want to do it anyway? There is only a handful of programs/libraries that do not exist in C.
The only scenario I can think of is risk-minimizing managers. They allow a project to be done in Go, but in case things don't work as expected, they don't want to be trapped in Go; they want to be able to reuse that code from their favorite language.
I assume those genuinely new libraries will take some time. Go is over 3 years old, and yet there is hardly anything available for Go that is not available for other languages.
In fact, the only things that come to my mind are vitess and Skynet (a Terminator future, with Go). Being no expert in these areas, I bet there are C equivalents that perform equally well. Also, the vitess equivalent's implementation might be more complex, if it exists.
Because a library written in Go will only be useful in Go (a major shortcoming of Go) and Go can use C libraries, I suspect Go will not be used for library creation for the foreseeable future.
>Because a library written in Go will only be useful in Go (a major shortcoming of Go)
I claim that C, C++ and Java are the only languages whose libraries are heavily used from other languages. For other languages it's often better to interface via some kind of network socket.
Surely there are Go libraries, like there are Ruby libraries. But which C user would seriously want to interface with a Ruby library?
Long story short: this is why Go code does not need to be callable from C. :-) It'll be a different story in 5 years, if Go is sufficiently widespread by then.
What about, instead of having non-nullable types, doing what Objective-C does and 'dropping' calls to null objects? There have been a number of times where, instead of my application blowing up from a null exception, whatever feature my user was using just didn't work, but the rest of the application ran fine and they could report the bug to me. It's a much nicer user experience to have the car 'not' blow up if the AC button isn't working.
There are very few problem domains where "do something, even if it's the wrong thing" is better than refusing to run. Perl and PHP have both rightly been criticized for this sort of thinking.
I write automated trading software. If I created a bug, I'd much prefer my program just stop working rather than no-op out a hedging routine or no-op out a regulatory compliance routine. There are also plenty of safety-critical applications where a machine would be perfectly safe if it just stopped working, but would kill someone if it no-opped out a routine. I could see a photo hosting site accidentally giving people without accounts access to everyone's private photos because a filtering routine got no-opped out.
I'll grant you that there are a few small domains where this would be desirable behavior, but I think if it's that much of an advantage, those domains should have domain-specific languages. Skipping method calls on null objects is a terrible feature for a general purpose programming language.
This. One of the most important things I've learned over the years is fail early and verbosely. My code is always littered with pre and post condition checks. As a side effect, we discover 99%+ of bugs before production every time.
I think that in the case of unforeseen error in an application, the program should blow up instead of dropping calls to null objects and continuing on as if nothing has gone wrong.
I would not want to work on code that relies on this type of behavior, since it would be so easy to overlook. Every time an object gets dereferenced I would have to consider that it might no-op, and how that might affect the rest of the program's execution.
If you really want to 'drop' calls to null objects, be explicit about it and only make the call inside of a conditional that checks that the object is not null. Your fellow programmers will thank you.
Most of the time it is much better for a program to crash than to silently operate under incorrect assumptions. This is particularly true if the software is doing something critical. But even if it's a casual game, do you want to risk corrupting your customers' saves?
[1]: http://qconlondon.com/london-2009/presentation/Null+Referenc...