I've split a complex array-processing task across a number of threads to take advantage of multi-core processing, and I'm seeing great benefits. Currently, at the start of the task I create the threads, then wait for them to terminate as they complete their work. I typically create about four times as many threads as there are cores, since each thread is liable to take a different amount of time, and the extra threads keep all cores occupied most of the time.

I was wondering whether there would be much of a performance advantage to creating the threads when the program starts up, keeping them idle until required, and reusing them each time processing begins. Put more simply: how long does it take to start and end a new thread, above and beyond the processing done within the thread? I'm currently starting the threads using
CWinThread *pMyThread = AfxBeginThread(CMyThreadFunc,&MyData,THREAD_PRIORITY_NORMAL);
Typically I will be using 32 threads across 8 cores on a 64-bit architecture. The process in question currently takes < 1 second and is fired up each time the display is refreshed. If starting and ending a thread costs < 1 ms, the return doesn't justify the effort. I'm having some difficulty profiling this.
A related question here helps, but is a bit vague for what I'm after. Any feedback appreciated.