Is it possible to move an application (or task) to another core to make it run faster and get all of that core's processing power?
The way I understand it is that in an operating system applications share the main CPU and get a shared "slice of time". So if ten applications are running over the course of one second, each application would get a total of one hundred milliseconds. If a computer has multiple cores, would an application run faster if you moved it or all of its processes to an unused core?
For example, let's say you create a desktop application called MyNumberCruncher©. In it you have a task in a for loop that should theoretically take one second to complete. In fact, you time it when no other applications are running. If you have ten applications open that are all doing their own thing, the task should take ten seconds to complete, correct? If you move that process to its own core that has no processes running on it, then it should take one second to complete, correct?
This question is asking how multicore CPUs work, whether it's possible or a good idea to move an application to a second core, and if so, why operating systems aren't doing that yet (or maybe they are)?
or maybe they are
Honestly, of course they are. Not doing so would be a complete waste of a multi-core machine. Occam's razor applies here: the simplest explanation is usually the right one.
A single-core CPU is like having just yourself to do whatever work has to be done. If you have four processing-intensive jobs that take a minute each to do, they'll take four minutes in aggregate to finish. You could work on each job until it's finished and then move on to the next one or, if there has to be steady progress on each, you could do a bit of the first, a bit of the second, etc., until they're all done. The problem with the latter method is that you incur some overhead switching between them, but that's another discussion.
Adding a second core is like hiring someone to help you out. Where you were finishing the four jobs in four minutes alone, the work gets split so you get two of the jobs and your assistant gets the other two. Each of you finishes the two jobs you’re assigned in two minutes and, because you worked in parallel, that’s all the time you need. Add two more assistants for a total of four and the work gets done in a minute.
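The jobs-and-assistants arithmetic above can be sketched with Python's standard `multiprocessing` module, which farms work out to separate processes so the OS can place them on separate cores. The `job` function here is a hypothetical stand-in for one of the "one-minute jobs", scaled down so it only takes a fraction of a second.

```python
import multiprocessing as mp
import time

def job(n):
    # Hypothetical stand-in for one CPU-intensive job.
    total = 0
    for i in range(2_000_000):
        total += i * n
    return total

if __name__ == "__main__":
    jobs = [1, 2, 3, 4]

    # One worker (one core) does all four jobs in sequence.
    start = time.perf_counter()
    serial = [job(n) for n in jobs]
    serial_time = time.perf_counter() - start

    # Four workers ("assistants") split the jobs between them.
    start = time.perf_counter()
    with mp.Pool(processes=4) as pool:
        parallel = pool.map(job, jobs)
    parallel_time = time.perf_counter() - start

    assert serial == parallel  # same answers either way
    print(f"serial: {serial_time:.3f}s, parallel: {parallel_time:.3f}s")
```

On a machine with four or more idle cores, the parallel run should finish in roughly a quarter of the serial time, though process start-up overhead eats into the gain for jobs this small.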
There is a limitation, and it’s an important one: many jobs are single-threaded, which means they have to be done in order from start to finish. If we’re building a house, I can’t tell you to go paint the kitchen while I pour the foundation; the kitchen isn’t there yet. Once you reach the limit of jobs that can be done in parallel, adding workers (cores) doesn’t help: with four linear jobs, having a dozen people to work on them will leave eight of them mathematically idle, even if everybody shares the work equally.
Multi-threading enters the picture when there’s a job to do that can be broken into parts that can be done in parallel. For example, if I need the walls in my house painted, it makes no difference what order it happens as long as it gets done. I can hire just you to paint them or one painter for each wall if I need the job done quickly.
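The wall-painting analogy can be sketched with Python's `concurrent.futures`: the walls are order-independent, so one "painter" per wall is fine, and every wall ends up painted regardless of which finishes first. The wall names and `paint` function are illustrative, not from any real API.

```python
from concurrent.futures import ThreadPoolExecutor

WALLS = ["north", "south", "east", "west"]

def paint(wall):
    # Hypothetical stand-in for painting one wall.
    return f"{wall} wall painted"

# One painter per wall: the walls may be worked on in any order
# and in parallel, but every wall ends up painted either way.
with ThreadPoolExecutor(max_workers=len(WALLS)) as crew:
    results = list(crew.map(paint, WALLS))

print(results)
```

Note that `crew.map` still returns results in input order even though the work itself may complete out of order, which is exactly the "it makes no difference what order it happens" property the analogy relies on.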
Operating systems are in the business of managing resources according to a set of rules. One rule that’s very common is that processing resources (cores) are to be utilized as much as possible so programs run quickly. A program that wants to run is a perfect match for a core with nothing to do, and the operating system will pair them up without you having to worry about it. Your single-threaded program may run on just one core or every core in the system by the time it’s finished and you won’t be able to tell the difference unless you’re keeping an eye on that sort of thing.
If you move that process to its own core that has no processes running on it, then it should take one second to complete, correct?
It's worth noting that this is essentially correct, assuming the other nine applications really are keeping the CPU busy the whole time.
Now, most OSes are smart enough to move threads around between cores (even single-threaded applications, as long as they never execute on more than one thread at a time) to maximise performance. However, many OSes let you instruct programs to use, or not to use, certain cores. If one were to tell all programs except NumberCruncher not to use a certain core, then that one program could execute much faster without sharing resources.
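On Linux, pinning a process to particular cores (its "CPU affinity") can be done from the shell with `taskset`, or from Python with the stdlib `os.sched_setaffinity` call. This is a Linux-specific sketch; pid 0 means "the calling process", and pinning to core 0 is just an arbitrary choice for illustration.

```python
import os

# Remember the cores this process is currently allowed to run on.
original = os.sched_getaffinity(0)

# Pin this process to core 0 only (pid 0 = the calling process).
os.sched_setaffinity(0, {0})
print("pinned to:", os.sched_getaffinity(0))

# Hand the other cores back by restoring the original affinity.
os.sched_setaffinity(0, original)
```

Windows exposes the same idea through Task Manager's "Set affinity" menu and the `SetProcessAffinityMask` API, though the caveats in the next paragraph apply on every OS.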
It's worth mentioning that this is not a recommended practice in most OSes (e.g. Windows). While one program may execute faster, starving the other programs of resources will slow all other execution down. Furthermore, modern OSes are quite smart and do interesting tricks; for example, modern CPUs sometimes have a 'Turbo Boost' (or similar) feature that raises the clock speed as long as CPU temperature and wattage stay below certain thresholds. By heavily loading one core while the other cores remain idle, you could hit those limits sooner than if you had balanced the load by shifting execution around.