Oh I've debugged this before. Native memory allocator had a scavenge function which suspended all other threads. Managed language runtime had a stop the world phase which suspended all mutator threads. They ran at about the same time and ended up suspending each other. To fix this you need to enforce some sort of hierarchy or mutual exclusion for suspension requests.
> Why you should never suspend a thread in your own process.
This sounds like a good general principle, but suspending threads in your own process is kind of necessary for, e.g., many GC algorithms. Now imagine multiple such runtimes running in the same process.
> suspending threads in your own process is kind of necessary for e.g. many GC algorithms
I think this is typically done by having the compiler/runtime insert safepoints, which cooperatively yield at specified points to allow the GC to run without mutator threads being active. Done correctly, this shouldn't be subject to the problem the original post highlighted, because it doesn't rely on the OS's ability to suspend threads when they aren't expecting it.
This is a good approach but can be tricky. E.g., what if your thread spends a lot of time in a tight loop, say a big inlined matmul kernel? Since you never hit a function call, you don't get safepoints that way. You can add them to the back-edge of every loop, but that can be a bit unappetizing from a performance perspective.
> suspending threads in your own process is kind of necessary for e.g. many GC algorithms
True. Maybe the more precise rule is “only suspend threads for a short amount of time and don’t acquire any locks while doing it”?
The way the .NET runtime follows this rule is that it only suspends threads for a very short time. After suspending, the thread is immediately resumed if it is not running managed code (e.g., it's in a random native library or syscall). If the thread is running managed code, it is hijacked by replacing either the instruction pointer or the return address with the address of a function that will wait for the GC to finish. The thread is then immediately resumed. See the details here:
https://github.com/dotnet/runtime/blob/main/docs/design/core...
> Now imagine multiple of those runtimes running in the same process.
Can that possibly reliably work? Sounds messy.
I knew from seeing a title like that on microsoft.com that it was going to be a Raymond Chen post! He writes fascinating stuff.
I had the same thought too. I wonder if this is his role at Microsoft now? Kind of a human institutional knowledge repository, plus a kind of brand ambassador to the developer community, plus mentor to younger engineers, plus chronicler.
I hope he keeps going, no doubt he could choose to finish up whenever he wants to.
I thought the same thing. It’s usually content that’s well outside my areas of familiarity, often even outside my areas of interest. But I usually find his writing interesting enough to read through anyway, and clear enough that I can usually follow it even without familiarity with the subject matter.
On Linux you'd do this by sending a signal to the thread you want to analyze, and then the signal handler would take the stack trace and send it back to the watchdog.
The tricky part is ensuring that the signal handler code is async-signal-safe (which pretty much boils down to "ensure you're not acquiring any locks and be careful about reentrant code"), but at least that only has to be verified for a self-contained small function.
Is there anything similar to signals on Windows?
The closest thing is a special APC enqueued via QueueUserAPC2 [1], but that's relatively new functionality in user-mode.
[1] https://learn.microsoft.com/en-us/windows/win32/api/processt...
The 2 implies an older API; its predecessor, QueueUserAPC, has been around since the XP days.
The older API is less like signals and more like cooperative scheduling, in that it waits for the target thread to be in an "alertable" state before it runs (i.e., the thread is executing a sleep or an alertable wait).
Or SetThreadContext() if you want to be hardcore. (not recommended)
Why not recommended? As far as things close to signals go, this is how you implement signals in user land on Windows (along with SuspendThread/ResumeThread). You can even take locks later in the process, as long as you also took them before sending the signal (the same restrictions as fork, actually). Unfortunately, atfork hooks are not accessible, and in my experience with all the popular libcs they are often full of fork-unsafe data races and deadlock bugs themselves.
I had a support issue once at a well known and big US defense firm. We got kernel hangs consistently in kernel space from normal user-level code. Crazy shit. I opened a support issue which eventually got closed because we used an old compiler. Fun times.
Although I understand nothing from these posts, reading Raymond's posts somehow always "tranquils" my inner struggles.
Just curious, is this customer a game studio? I have never done any serious system programming but the gist feels like one.
I would guess it's something corporate. They can afford to pause the UI and ship debugging traces home more than a real-time game might.
I'd actually expect a customer-facing program, more likely. Corporate software wouldn't care that the UI hung; you're getting paid to sit there and look at it.
> Corporate software wouldn't care that the UI hung, you're getting paid to sit there and look at it.
The article says the thread had been hung for 5 hours. And if you understand the root cause, once it entered the hung state, then absent some rather dramatic intervention (e.g., manually resuming the suspended UI thread), it would remain hung indefinitely.
The proper solution, as Raymond Chen notes, is to move the monitoring thread into a separate process; that would avoid this deadlock.
The banker trying to close a deal isn't paid by the hour.
Unless the user's boss complained to the programmer's boss
Reminds me of a hang in the Settings UI that was because it would get stuck on an RPC call to some service.
Why was the service holding things up? Because it was waiting on acquiring a lock held by one of its other threads.
What was that other thread doing? It was deadlocked because it tried to recursively acquire an exclusive srwlock (exactly what the docs say will happen if you try).
Why was it even trying to reacquire said lock? Ultimately because of a buffer overrun that ended up overwriting some important structures.
Such a clean breakdown. "Don’t suspend your own threads" should be tattooed on every Windows dev’s arm at this point
Looking at the title, at first I thought “uh?”, but then I saw microsoft and it made sense.
>Naturally, a suspended UI thread is going to manifest itself as a hang.
The correct terminology is "stopped responding", Raymond. You need to consult the style guide.
Reminds me of a bug that would bluescreen Windows if I stopped Visual Studio debugging while it was in the middle of calling the native Ping from C#.
I've been able to get managed code to BSOD my machine by simply having a lot of thread instances that are aggressively communicating with each other (i.e., via Channel<T>). It's probably more of a hardware thing than a software thing. My Spotify fails to keep the audio buffer filled when I've got it fully saturated. I feel like the kernel occasionally panics when something doesn't resolve fast enough with regard to threads across core complexes.
Can this happen with Grand Central Dispatch ?
did... did you understand what the bug was?