Threads involved in Request Pipeline



Threads available in the .NET environment

The CLR maintains a pool of threads (to amortize the cost of thread creation), and that pool should be the first place you look when you need more threads. The thread pool maintains two types of threads – worker and I/O. As the names imply, worker threads are computational threads, while I/O threads are used for long-duration waits (blocking calls such as invoking a remote WCF service). A good rule of thumb is to make sure all your waits happen on I/O threads, provided the waits are long enough; otherwise you will end up degrading performance because of the extra context switch.

Your entry point to the CLR's thread pool is the ThreadPool class. The code below shows how many worker and I/O threads are currently available in the pool:
using System;
using System.Threading;

int wt, iot;
// GetAvailableThreads reports how many more work items can start before the pool is saturated.
ThreadPool.GetAvailableThreads(out wt, out iot);
Console.WriteLine("Worker = " + wt);
Console.WriteLine("I/O = " + iot);

You can only request a worker thread explicitly, via ThreadPool.QueueUserWorkItem. If all worker threads are occupied, the work item simply waits in the queue until a worker thread becomes available.
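
To illustrate, here is a minimal sketch of queuing work to a pool worker thread; the callback and the message it prints are placeholders, not something from this post:

using System;
using System.Threading;

class QueueWorkSample
{
    static void Main()
    {
        // The callback runs on a worker thread taken from the CLR thread pool.
        ThreadPool.QueueUserWorkItem(state =>
        {
            Console.WriteLine("On pool thread: " + Thread.CurrentThread.IsThreadPoolThread);
            Console.WriteLine("Processing: " + state);
        }, "some work item");

        Console.WriteLine("Main thread continues immediately; press Enter to exit.");
        Console.ReadLine();
    }
}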

Unlike worker threads, there is no direct API for requesting an I/O thread. Instead, .NET uses I/O threads automatically when you use asynchronous programming.
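
As a sketch of what "asynchronous programming" means here, the classic Begin/End (APM) pattern on a FileStream opened for overlapped I/O lets the read complete on a completion-port thread instead of blocking a worker thread; the file path below is purely illustrative:

using System;
using System.IO;
using System.Threading;

class AsyncReadSample
{
    static void Main()
    {
        var buffer = new byte[4096];
        // useAsync: true requests overlapped I/O, so the completion arrives via an I/O completion port.
        var stream = new FileStream(@"C:\temp\data.bin", FileMode.Open, FileAccess.Read,
                                    FileShare.Read, 4096, useAsync: true);

        stream.BeginRead(buffer, 0, buffer.Length, ar =>
        {
            // This callback typically runs on an I/O (completion-port) thread of the CLR pool.
            int bytesRead = stream.EndRead(ar);
            Console.WriteLine("Read " + bytesRead + " bytes on pool thread: "
                              + Thread.CurrentThread.IsThreadPoolThread);
            stream.Dispose();
        }, null);

        Console.WriteLine("No thread is blocked while the read is in flight.");
        Console.ReadLine();
    }
}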



I/O threads are set aside as such because they will be doing I/O (as the name implies) and may have to wait for "long" periods of time (hundreds of milliseconds). They also can be optimized and used differently to take advantage of I/O completion port functionality in the Windows kernel. A single I/O thread may be managing multiple completion ports to maintain throughput.

Worker threads are threads upon which regular "work" or just plain code/processing happens. Worker threads are unlikely to block a lot or wait on anything and will be short running and therefore require more aggressive scheduling to maximize processing power and throughput.


How is a request processed using the various threads?

ASP.NET
When new requests are received by HTTP.sys, it posts the request to an I/O completion port on which IIS listens. IIS picks up the request on one of its thread pool threads and the request is handed over to ASP.NET on an IIS I/O thread. ASP.NET immediately posts the request to the CLR ThreadPool and returns HSE_STATUS_PENDING to IIS. This frees up IIS threads, enabling IIS to serve other requests, such as static files. Posting the request to the CLR Threadpool also acts as a queue. The CLR Threadpool automatically adjusts the number of threads according to the workload, so that if the requests are high throughput there will only be 1 or 2 threads per CPU, and if the requests are high latency there will be potentially far more concurrently executing requests than 1 or 2 per CPU. The queuing provided by the CLR Threadpool is very useful, because while the requests are in the queue there is only a very small amount of memory allocated for the request, and it is all native memory. It’s not until a thread picks up the request and begins to execute that we enter managed code and allocate managed memory.
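
To see the limits within which this automatic adjustment operates (and, if needed, to raise the floor for sudden bursts of high-latency requests), the standard ThreadPool APIs can be queried; this is only a sketch, and the value 50 below is purely illustrative:

using System;
using System.Threading;

class PoolLimits
{
    static void Main()
    {
        int minWorker, minIo, maxWorker, maxIo;
        ThreadPool.GetMinThreads(out minWorker, out minIo);
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
        Console.WriteLine("Min worker/I/O threads: {0}/{1}", minWorker, minIo);
        Console.WriteLine("Max worker/I/O threads: {0}/{1}", maxWorker, maxIo);

        // The pool creates threads on demand up to the minimum, and ramps up only
        // slowly beyond it; raising the minimum helps absorb bursts of high-latency work.
        ThreadPool.SetMinThreads(50, minIo);
    }
}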

The IIS thread pool has a maximum thread count of 256. This thread pool is designed in such a way that it does not handle long running tasks well. The recommendation from the IIS team is to switch to another thread if you're going to do substantial work, as is done by the ASP.NET ISAPI and/or ASP.NET when running in integrated mode on IIS 7. Otherwise you will tie up IIS threads and prevent IIS from picking up completions from HTTP.sys. For this reason, ASP.NET always returns a pending status to IIS, and calls QueueUserWorkItem to post the request to the CLR ThreadPool. In v2.0, 3.5, and 4.0, ASP.NET initializes the CLR ThreadPool with 100 threads per processor (that's the default; it is configurable). So on a dual-core server, there will be a maximum of 200 threads in the pool. When a CLR ThreadPool thread becomes available (which typically happens immediately), the request is picked up by ASP.NET. Normally the request is executed at this point, but it is possible for it to be inserted into one of the ASP.NET queues (described in an earlier post). If all the modules and handlers in the pipeline are synchronous, the request will execute on a single CLR ThreadPool thread. If one or more modules or the handler are asynchronous, the request will not execute on a single thread.
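
To make the synchronous-versus-asynchronous distinction concrete, here is a minimal sketch of an asynchronous handler (IHttpAsyncHandler); the handler name and backend URL are hypothetical. BeginProcessRequest starts the remote call and returns at once, freeing the CLR worker thread, and EndProcessRequest runs later, typically on an I/O completion-port thread, when the response arrives:

using System;
using System.Net;
using System.Web;

public class BackendProxyHandler : IHttpAsyncHandler
{
    public bool IsReusable { get { return true; } }

    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
    {
        // Hypothetical backend call, issued without blocking a pool thread while we wait.
        var request = WebRequest.Create("http://backend.example/data");
        return request.BeginGetResponse(cb, new object[] { context, request });
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        var state = (object[])result.AsyncState;
        var context = (HttpContext)state[0];
        var request = (WebRequest)state[1];
        using (var response = request.EndGetResponse(result))
        {
            context.Response.Write("Backend returned " + response.ContentLength + " bytes");
        }
    }

    // The synchronous entry point is not used when ASP.NET takes the asynchronous path.
    public void ProcessRequest(HttpContext context)
    {
        throw new NotSupportedException();
    }
}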

A lot of thread switches take place in order to execute an ASP.NET request. There are always at least three thread switches. The first is primarily a transition from kernel mode (HTTP.sys) to user mode (IIS). It also frees up HTTP.sys to pick up more incoming requests and hand them off to their respective listeners, which are not always IIS. If we tried to execute the entire request on that thread, HTTP.sys wouldn't be resilient to poorly performing listeners; in fact, a single process could shut down HTTP on the entire server if that process were to deadlock and hold the existing request threads as well as any new incoming request threads. At the same time, there is a penalty paid for any thread switch. It's called a context switch, and they're expensive. However, in this case, the benefit from performing the thread switch (reliability of a kernel mode driver) outweighs the cost of the context switch.

The second thread switch is the one from IIS to ASP.NET, where ASP.NET calls QueueUserWorkItem to post the request to the CLR ThreadPool. This one is not as critical; it is done primarily as a performance improvement for large corporate workloads that have a mixture of static and dynamic requests. The thread switch for dynamic requests helps improve CPU cache locality for static requests (served by the IIS static file handler), which don't perform a thread switch and instead just execute to completion on the IIS thread.

The third thread switch occurs when we perform the final send, switching from a CLR ThreadPool thread to the thread that sends the bytes to the client. This switch is definitely worth the cost of the context switch: clients with low bandwidth are slow, and we don't want our CLR ThreadPool thread to be blocked while the bytes are sent; we want it to return to the pool and execute incoming work items.


WCF
The service model's HttpModule gets engaged very early in the lifecycle, handling the PostAuthenticateRequest event of the ASP.NET request pipeline and, by default, forwarding the request to the service model. The service model then processes the request on a separate I/O thread and releases the ASP.NET worker thread. This makes the WCF throttling settings much more effective and simplifies your performance tuning.
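
As an illustration of those throttling settings, the ServiceThrottlingBehavior that IIS-hosted services usually configure in web.config (the serviceThrottling element) can also be set in code; the contract, address, and numbers below are purely illustrative, shown here in a self-hosting sketch rather than anything from this post:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string Ping();
}

public class OrderService : IOrderService
{
    public string Ping() { return "pong"; }
}

class ThrottlingSample
{
    static void Main()
    {
        var host = new ServiceHost(typeof(OrderService));
        host.AddServiceEndpoint(typeof(IOrderService), new BasicHttpBinding(),
                                "http://localhost:8080/orders");

        var throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }

        // Because requests are handed off to the service model's own threads,
        // these limits govern WCF concurrency independently of ASP.NET worker threads.
        throttle.MaxConcurrentCalls = 64;
        throttle.MaxConcurrentSessions = 100;
        throttle.MaxConcurrentInstances = 164;

        host.Open();
        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}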

