Conceptually, what does it mean when it is said that each thread gets its own stack?
I have been reading Java Concurrency in Practice by Brian Goetz, and in the section on Stack Confinement it is mentioned that each thread gets its own stack, so local variables are intrinsically confined to the executing thread; they exist on the executing thread's stack, which is not accessible to other threads. What does he mean when he says that each thread has its own execution stack?
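The idea is not specific to Java. A minimal sketch with POSIX threads (the worker function and thread count are illustrative) shows the same effect: each thread that calls worker() gets its own copy of the local variable, at a different address, because that variable lives on the calling thread's private stack.

```c
#include <pthread.h>
#include <stdio.h>

/* Each thread runs worker(); 'local' is allocated on that thread's
 * private stack, so every thread gets its own independent copy. */
static void *worker(void *arg) {
    int local = *(int *)arg;            /* lives on this thread's stack */
    printf("thread %d: local = %d, address = %p\n",
           local, local, (void *)&local);
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;   /* the two printed addresses differ: separate stacks */
}
```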
Analogy for Thread Pools
I am working on an application which spawns a new thread per request. Sometimes the number of threads active on the machine at one time is in the high hundreds. It’s suspected that this is causing all sorts of problems. I would like to experiment with using a thread pool instead, to see if this improves throughput. But first, I have to convince the powers that be to allow me the time.
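For reference, here is a minimal sketch, using POSIX threads, of what the pooled alternative might look like: a fixed number of workers pulling request ids from a bounded queue, instead of one new thread per request. The pool size, queue capacity, and handle_request() stand-in are illustrative.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 8
#define QUEUE_CAP 256

/* A fixed pool of POOL_SIZE workers consuming request ids from a
 * bounded queue guarded by a mutex and two condition variables. */
static int queue[QUEUE_CAP];
static int head = 0, tail = 0, count = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;

static void handle_request(int id) {
    usleep(1000);                        /* stand-in for real work */
    printf("handled request %d\n", id);
}

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        int id = queue[head];
        head = (head + 1) % QUEUE_CAP;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);
        handle_request(id);              /* work happens outside the lock */
    }
    return NULL;
}

static void submit(int id) {
    pthread_mutex_lock(&lock);
    while (count == QUEUE_CAP)
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = id;
    tail = (tail + 1) % QUEUE_CAP;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

int main(void) {
    pthread_t pool[POOL_SIZE];
    for (int i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, worker, NULL);
    for (int id = 0; id < 1000; id++)    /* 1000 requests, only 8 threads */
        submit(id);
    sleep(2);                            /* crude: let the pool drain */
    return 0;
}
```

However many requests arrive, the number of threads stays fixed at POOL_SIZE; backpressure comes from submit() blocking when the queue is full.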
Are there deprecated practices for multithread and multiprocessor programming that I should no longer use?
In the early days of FORTRAN and BASIC, essentially all programs were written with GOTO statements. The result was spaghetti code and the solution was structured programming.
Should I build a multi-threaded system that handles events from a game and sorts them, independently, into different threads based on priority?
Can I build a multi-threaded system that handles events from a game and sorts them independently into different threads based on priority, and is it a good idea?
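One possible shape for such a system, sketched with POSIX threads and illustrative names and sizes: one worker thread per priority level, each draining its own queue, so high-priority events never wait behind low-priority ones.

```c
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

enum { PRIO_HIGH, PRIO_NORMAL, PRIO_LOW, PRIO_COUNT };
#define CAP 128

struct event { int id; };

/* One queue per priority level, each with its own lock and condvar. */
static struct prio_queue {
    struct event buf[CAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
} queues[PRIO_COUNT];

/* Called by the game loop: route an event to the queue for its priority. */
static void dispatch(struct event ev, int prio) {
    struct prio_queue *q = &queues[prio];
    pthread_mutex_lock(&q->lock);
    if (q->count < CAP) {                /* drop when full, for brevity */
        q->buf[q->tail] = ev;
        q->tail = (q->tail + 1) % CAP;
        q->count++;
        pthread_cond_signal(&q->nonempty);
    }
    pthread_mutex_unlock(&q->lock);
}

static void *worker(void *arg) {
    struct prio_queue *q = &queues[(intptr_t)arg];
    for (;;) {
        pthread_mutex_lock(&q->lock);
        while (q->count == 0)
            pthread_cond_wait(&q->nonempty, &q->lock);
        struct event ev = q->buf[q->head];
        q->head = (q->head + 1) % CAP;
        q->count--;
        pthread_mutex_unlock(&q->lock);
        printf("priority %ld handled event %d\n",
               (long)(intptr_t)arg, ev.id);  /* game logic would go here */
    }
    return NULL;
}

int main(void) {
    pthread_t t[PRIO_COUNT];
    for (intptr_t p = 0; p < PRIO_COUNT; p++) {
        pthread_mutex_init(&queues[p].lock, NULL);
        pthread_cond_init(&queues[p].nonempty, NULL);
        pthread_create(&t[p], NULL, worker, (void *)p);
    }
    dispatch((struct event){ .id = 1 }, PRIO_LOW);
    dispatch((struct event){ .id = 2 }, PRIO_HIGH);
    sleep(1);                            /* crude: let the workers run */
    return 0;
}
```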
Should I take care of race conditions that almost certainly have no chance of occurring?
Let’s consider something like a GUI application where the main thread updates the UI almost instantaneously, and some other thread polls data over the network, or does something similar that is guaranteed to take 5-10 seconds to finish the job.
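As a concrete illustration of the scenario (names and timings are made up): even though the slow thread finishes seconds after the UI thread starts, the UI thread keeps polling while the fetch is in flight, so the read and the eventual write are unsynchronized accesses to the same object, a data race regardless of how unlikely a bad interleaving seems. The sketch below makes the hand-off explicit with a mutex.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Shared state between the "UI" loop and a slow background fetch. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int data_ready = 0;
static char result[64];

static void *fetch(void *arg) {
    (void)arg;
    sleep(5);                              /* pretend network call */
    pthread_mutex_lock(&lock);
    snprintf(result, sizeof result, "payload from server");
    data_ready = 1;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, fetch, NULL);
    for (;;) {                             /* stand-in for the UI loop */
        pthread_mutex_lock(&lock);
        int ready = data_ready;
        pthread_mutex_unlock(&lock);
        if (ready) break;
        usleep(16000);                     /* ~60 fps repaint */
    }
    printf("UI shows: %s\n", result);      /* safe: publish happened under the lock */
    pthread_join(t, NULL);
    return 0;
}
```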
Can I implement the readers and writers algorithm in OpenMP by replacing counting semaphores with another feature?
After reading about OpenMP and not finding functions to support semaphores, I did an internet search for OpenMP and the readers and writers problem, but found no suitable matches.
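A sketch of one way to do it without semaphores, assuming C and illustrative function names: keep a reader count and a writer flag behind a named critical section, and have waiters retry. OpenMP's simple locks are meant to be unset by the same task that set them, so the textbook pattern where the first reader acquires a lock and the last reader releases it does not carry over directly; the busy-wait below trades efficiency and fairness for simplicity.

```c
#include <omp.h>
#include <stdio.h>

static int readers = 0;    /* threads currently reading */
static int writing = 0;    /* 1 while a writer holds the resource */

static void reader_enter(void) {
    for (;;) {
        int ok = 0;
        #pragma omp critical(rw_state)
        {
            if (!writing) { readers++; ok = 1; }
        }
        if (ok) return;                  /* otherwise spin and retry */
    }
}

static void reader_exit(void) {
    #pragma omp critical(rw_state)
    readers--;
}

static void writer_enter(void) {
    for (;;) {
        int ok = 0;
        #pragma omp critical(rw_state)
        {
            if (!writing && readers == 0) { writing = 1; ok = 1; }
        }
        if (ok) return;
    }
}

static void writer_exit(void) {
    #pragma omp critical(rw_state)
    writing = 0;
}

int main(void) {
    int shared_value = 0;
    #pragma omp parallel num_threads(8)
    {
        if (omp_get_thread_num() == 0) {     /* one writer */
            writer_enter();
            shared_value = 42;
            writer_exit();
        } else {                             /* many readers */
            reader_enter();
            printf("read %d\n", shared_value);
            reader_exit();
        }
    }
    return 0;
}
```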
Is a 1:* write:read thread system safe?
Theoretically, thread-safe code should fix race conditions. Race conditions, as I understand them, occur because two threads attempt to write to the same location at the same time.
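For context, a data race needs only one write racing with any other unsynchronized access to the same location; two writes are not required. A minimal sketch with C11 atomics (names are illustrative) shows the single-writer/many-reader shape made well-defined by declaring the shared variable _Atomic; with a plain long, the same program would be a data race.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* One writer, several readers, sharing a single atomic variable. */
static _Atomic long counter = 0;

static void *writer(void *arg) {
    (void)arg;
    for (long i = 1; i <= 1000000; i++)
        atomic_store(&counter, i);         /* sole writer */
    return NULL;
}

static void *reader(void *arg) {
    (void)arg;
    long last = 0, v;
    while ((v = atomic_load(&counter)) < 1000000) {
        if (v < last)
            fprintf(stderr, "went backwards?\n");  /* never: loads are not torn,
                                                      and coherence keeps them ordered */
        else
            last = v;
    }
    return NULL;
}

int main(void) {
    pthread_t w, r[3];
    pthread_create(&w, NULL, writer, NULL);
    for (int i = 0; i < 3; i++) pthread_create(&r[i], NULL, reader, NULL);
    pthread_join(w, NULL);
    for (int i = 0; i < 3; i++) pthread_join(r[i], NULL);
    return 0;
}
```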
Issues with time slicing
I was trying to see the effect of time slicing, and how it can consume a significant amount of time. Actually, I was trying to divide a certain task among a number of threads and see the effect.
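A small sketch of that experiment in C with POSIX threads (the iteration count is arbitrary): split a fixed amount of compute-bound work across N threads and time it. Once N exceeds the number of cores, the extra threads add context-switch and time-slicing overhead rather than speedup.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define TOTAL_ITERS 400000000L   /* fixed total amount of work */

static long iters_per_thread;

static void *burn(void *arg) {
    (void)arg;
    volatile long sink = 0;              /* keep the loop from being optimized away */
    for (long i = 0; i < iters_per_thread; i++)
        sink += i;
    return NULL;
}

int main(int argc, char **argv) {
    int n = (argc > 1) ? atoi(argv[1]) : 4;
    iters_per_thread = TOTAL_ITERS / n;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    pthread_t *t = malloc(n * sizeof *t);
    for (int i = 0; i < n; i++) pthread_create(&t[i], NULL, burn, NULL);
    for (int i = 0; i < n; i++) pthread_join(t[i], NULL);

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d threads: %.3f s\n", n, s);
    free(t);
    return 0;
}
```

Compile with -pthread and run with different thread counts (e.g. 1, 4, 64, 500) to see where adding threads stops helping.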
Shared FIFO file descriptor
Is it OK to open a FIFO with one FD and share it with multiple threads?
Or is it better to open multiple FDs for the same FIFO and share those FDs among the threads?
BTW, I’ll be doing both writes and reads.
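A sketch of the single-descriptor approach with POSIX threads (the path is illustrative): the FIFO is opened once and the descriptor is shared by all writer threads. A single write() of at most PIPE_BUF bytes to a FIFO is atomic, so short messages from different threads will not interleave; larger writes, or buffered stdio layered on top of the descriptor, would need their own locking.

```c
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

static int fd;   /* one descriptor, shared by all threads */

static void *producer(void *arg) {
    char msg[64];
    int n = snprintf(msg, sizeof msg, "message from thread %ld\n",
                     (long)(intptr_t)arg);
    write(fd, msg, (size_t)n);           /* n is far below PIPE_BUF, so atomic */
    return NULL;
}

int main(void) {
    const char *path = "/tmp/demo_fifo"; /* illustrative path */
    mkfifo(path, 0600);                  /* EEXIST ignored for brevity */
    fd = open(path, O_WRONLY);           /* blocks until a reader opens the FIFO */

    pthread_t t[4];
    for (intptr_t i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, producer, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);

    close(fd);
    return 0;
}
```

Running cat /tmp/demo_fifo in another terminal acts as the reader. Sharing one descriptor for reads works the same way, but each byte is consumed by exactly one reader, so multiple reading threads split the stream rather than each seeing a copy.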
What is a zombie process or thread?
What is a zombie process or thread, and what creates them? Do I just kill them, or can I do something to get diagnostics about how they died?
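A minimal sketch showing a zombie being created and then reaped, along with the status macros that give basic post-mortem diagnostics. A zombie is already dead, so it cannot be killed again; it disappears once the parent collects its status with wait()/waitpid(), or once the parent itself exits and init reaps it.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        _exit(42);                       /* child exits immediately */

    /* The child is now a zombie: it has exited, but its exit status has
     * not been collected. "ps -ef | grep defunct" will show it here. */
    sleep(10);

    int status;
    waitpid(pid, &status, 0);            /* reap: the zombie disappears */
    if (WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    else if (WIFSIGNALED(status))
        printf("child killed by signal %d\n", WTERMSIG(status));
    return 0;
}
```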