There are certain classes of errors that it prevents, e.g.:
Thread1:
a = 0xFFFFFFFF00000000
Thread2:
a = 0x00000000FFFFFFFF
One might think that the only two possible final values of a, if those run concurrently, are 0xFFFFFFFF00000000 and 0x00000000FFFFFFFF. But actually 0x0000000000000000
and 0xFFFFFFFFFFFFFFFF are also possible, because the stores themselves aren't atomic: each 64-bit write can be torn into two 32-bit halves, and the halves from the two threads can interleave.
The GIL (AFAICT) will prevent the latter two possibilities.
Most CPUs guarantee that aligned loads and stores up to the register size, i.e. now usually up to 64-bit, are atomic.
The compilers also take care to align most variables.
So while your scenario is not impossible, it would take some effort to force "a" to not be aligned, e.g. by making it a member of a structure with an inefficient layout.
Normally in a multithreaded program all shared variables should be aligned, which would guarantee atomic loads and stores.
Well, thread safety is exactly about these cases of "well, it's hardly ever a problem".
Real-life bugs have come from getting memory-barrier parameters wrong, even on x86. The Python GIL removes a whole class of potential errors.
Not that I'm against getting rid of the GIL, but I'm more sceptical that it won't trigger bugs.
Though in my opinion python just isn't a good language for large programs for other reasons. But it'd be nice to be able to multithread some 50 line scripts.
Python's integers are arbitrary precision. If large enough, that certainly won't be atomic on any normal CPU. I'm not sure how the arbitrary precision integers work internally, but it's possible they wouldn't be atomic for any value.
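A quick way to see that CPython ints are heap objects of varying size, rather than machine words, is `sys.getsizeof` (a small sketch; exact byte counts are implementation details):

```python
import sys

# CPython ints are heap-allocated objects whose size grows with magnitude,
# so a single machine-level atomic load/store cannot cover them in general.
small = 1
big = 2 ** 64  # no longer fits in a single 64-bit machine word

print(sys.getsizeof(small))  # a few dozen bytes of object header + digits
print(sys.getsizeof(big))    # larger: stored as multiple internal "digits"
```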
"Most" ends up being surprising (e.g. ARM has a fairly weak memory model). I've seen a lot of code with aligned accesses and extensive use of "volatile" in MSVC/x86 explode once it was ported to other architectures.
Older ARM CPUs did not have a well defined memory model, but all 64-bit ARM CPUs have a well defined behavior, which includes the atomicity of any loads and stores whose size is up to 64-bit and which are aligned, i.e. the same as Intel/AMD CPUs.
The current ARM memory model is more relaxed regarding the ordering of loads and stores, but not regarding the atomicity of single loads and stores.
It depends what you mean by 'undefined behavior'. The GIL makes operations atomic on the bytecode instruction level. Critically, this includes loading and storing of objects, meaning that refcounting is atomic. However, this doesn't extend to most other operations, which generally need to pull an object onto the stack, manipulate it, and store it back in separate opcodes.
So with Python concurrency, you can get unpredictable behavior (such as two threads losing values when incrementing a counter), but not undefined behavior in the C sense, such as use-after-free.
No. The L in GIL stands for lock. So only the thread that holds it can write or read from the object, and the behavior is well defined at the C level, because C lock acquire and release operations are defined to be memory barriers.
But when each thread reads the variable, you have no control over which value you see, since you don't control when each thread gets to run. So it's undefined in the sense that you don't know which values you will get: a thread might get the value it wrote, or the value the other thread wrote. The threads might not get the same value either.
The GIL exists to protect the interpreter's internal data, not your application's data. If you access mutable data from more than one thread, you still need to do your own synchronisation.
How is the GIL different from atomics in other languages? There are many cases where atomics are useful.
One example would be incrementing a counter for statistics purposes. If the counter is atomic, and the reader of the value is ok with a slightly out of date value, it's fine. If code is doing this in GIL Python, it's working now, and will break after the GIL is removed.
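A counter like that can be written so it doesn't depend on GIL atomicity at all. This is a minimal sketch (the class and method names are made up for illustration) using an explicit lock, which keeps the read-modify-write indivisible with or without a GIL:

```python
import threading

class StatsCounter:
    """Sketch of a counter that does not rely on GIL atomicity."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def inc(self):
        # The lock makes the read-modify-write one indivisible step.
        with self._lock:
            self._value += 1

    def read(self):
        # A slightly stale value is fine for statistics; the lock just
        # ensures we never observe a torn or half-updated value.
        with self._lock:
            return self._value

counter = StatsCounter()
threads = [threading.Thread(target=lambda: [counter.inc() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.read())  # 4000: no updates lost
```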
I know you came to the same conclusion in another comments, but here's a look at it using the `dis` module:
b += 1
turns into
LOAD_FAST 0 (b)       (load b)
LOAD_CONST 1 (1)      (load the constant 1)
INPLACE_ADD           (perform the addition)
STORE_FAST 0 (b)      (store back into b)
So if the interpreter switches threads between LOAD_FAST and STORE_FAST (so anywhere around the INPLACE_ADD), you could clobber the value another thread wrote to `b`.
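You can reproduce that listing yourself with the `dis` module; the exact opcode names vary by CPython version (e.g. INPLACE_ADD became BINARY_OP in 3.11), but the load/add/store split is always visible:

```python
import dis

def inc(b):
    b += 1
    return b

# Collect the opcode names; the increment is several separate
# instructions, not one atomic operation.
ops = [ins.opname for ins in dis.get_instructions(inc)]
print(ops)
```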
From your other comment:
> so even though an increment on a loaded int is atomic, loading it and storing it aren't
It's always the loads and stores that are the problem.
But that's the problem with relying on the GIL: it has lock in the name, but it does not protect you unless you understand the internals and know what you're doing. It protects the interpreter. This isn't much different from programming in other languages without a GIL: if you understand the internals and what you're doing, you may or may not need locks, because you will know what is and isn't atomic (and even when things are atomic, it's still difficult to write thread-safe code! lock-free algorithms are much harder than using mutexes).
Thread safe code requires thinking hard about your code, the GIL does not protect you from that.
Numbers are immutable, and incrementing one, e.g. in an attribute, is actually many bytecode ops. So this doesn't work even currently, unless you are fine with losing updates. But a version of this question using another example (e.g. using a list as a queue) is interesting.
Yep. So in a final answer to the original question of backwards bug-compatibility (https://news.ycombinator.com/item?id=28897534), it seems that it will be retained under the current proposal.
The Python documentation seems misleading to me on this:
> In theory, this means an exact accounting requires an exact understanding of the PVM bytecode implementation. In practice, it means that operations on shared variables of built-in data types (ints, lists, dicts, etc) that “look atomic” really are.
count = 0
def inc():
    global count  # without this, count += 1 raises UnboundLocalError
    count += 1
sure "looks atomic" to me, so according to the documentation it should be, but isn't.
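The lost update is easiest to see by spelling out the bad interleaving by hand, with each thread's LOAD and STORE written as a separate step:

```python
count = 0

# Simulate the unlucky interleaving: both threads LOAD before either STOREs.
t1_tmp = count       # thread 1: LOAD_FAST reads 0
t2_tmp = count       # thread 2: LOAD_FAST also reads 0
count = t1_tmp + 1   # thread 1: STORE_FAST writes 1
count = t2_tmp + 1   # thread 2: STORE_FAST writes 1, clobbering thread 1

print(count)  # 1, not 2: one increment was lost
```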
On the other hand, I think you could build a horribly inefficient actual atomic counter with a list: append an element to increment, and use its length as the value.
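A minimal sketch of that idea, assuming CPython's `list.append` and `len` each stay a single atomic C-level operation under the GIL (memory use grows with the count, hence "horribly inefficient"):

```python
import threading

class ListCounter:
    """Sketch only: relies on CPython's list.append being one atomic
    operation under the GIL. Memory grows linearly with the count."""
    def __init__(self):
        self._cells = []

    def inc(self):
        self._cells.append(None)  # a single bytecode-level C call

    def value(self):
        return len(self._cells)   # also a single C call

c = ListCounter()
threads = [threading.Thread(target=lambda: [c.inc() for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(c.value())  # 4000: no increments lost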
I think it's quite likely there's correct and non-horrible code relying on list append and length being atomic. Although it sounds like this might continue working without the GIL:
> A lot of work has gone into the list and dict implementations to make them thread-safe. And so on.
Indeed, the proposal explicitly covers maintaining existing guarantees, which is why I was confused that so many people just assumed it would break them.
My question was about from a program correctness standpoint, not an efficiency standpoint. But it does look like there is a difference from a correctness standpoint, as other comments address.
I agree that in principle the GIL is there for the interpreter, but in practice in globally ensuring all Python objects are in a coherent and safe state for each running thread, it makes many things quite thread-safe. For example you could use a normal list as a fifo queue, if you were to rely on the current behaviour.
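As a sketch of that, here's a plain list used as a FIFO between one producer and one consumer, relying on the current CPython behaviour that `append` and `pop(0)` are each atomic (the busy-wait is for illustration only; a real program should use `queue.Queue`):

```python
import threading

fifo = []  # plain list used as a FIFO, relying on current CPython behaviour

def producer(n):
    for i in range(n):
        fifo.append(i)               # atomic under the GIL

def consumer(n, out):
    while len(out) < n:
        try:
            out.append(fifo.pop(0))  # atomic under the GIL
        except IndexError:
            pass                     # queue momentarily empty; busy-wait

received = []
t1 = threading.Thread(target=producer, args=(100,))
t2 = threading.Thread(target=consumer, args=(100, received))
t1.start(); t2.start()
t1.join(); t2.join()
print(received == list(range(100)))  # True: nothing lost or reordered
```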
Sure, but you really need to understand the internals to be able to make these assumptions.
For instance, in your example, I know that the call to the C code to do the list operations is atomic (a single bytecode instruction), but I can't assume that all such calls to C code are safe because of this, unless I know for sure that the C code doesn't itself release the GIL. I assume that simple calls like list append/pop wouldn't have any reason to do this, but I can't assume this for any given function/method call that delegates to C, since some calls do release the GIL.
So, with or without GIL, you either really need to understand what's going on under the hood so you can avoid using locks in your code (GIL or atomics-based lock-free programming), or you use locks (mutex, semaphore, condition variables etc). No matter what you do, to write thread safe programs, you need to understand what you're doing and how things are synchronizing and operating. The GIL doesn't remove that need.
Of course, removing the GIL removes one possible implementation option, I just don't believe the GIL really makes it any easier. Once you know enough internals to know what is and isn't safe with the GIL, you could just as easily do your own synchronization.
Not really. If you're doing an atomic write to the same object from two different threads, you're going to have one win the race and the other lose. That may be a bug in your code, but it's not undefined behavior at the language level.
Python doesn't care about these errors (because python's integers are not simple 64-bit machine words), however if you generalise this error to a "transaction" of two operations (two register writes in this case), you end up with the same ability to observe a state that was never supposed to be visible to another thread.
AFAIK CPUs implement atomic load and store instructions and the performance overhead of these is very small compared to something like a software busy lock. So I think it's quite possible to take away the GIL while still making it impossible to load only half of a value.