Apache HTTP Server Version 2.4
Description: A variant of the worker MPM with the goal of consuming threads only for connections with active processing
Status: MPM
Module Identifier: mpm_event_module
Source File: event.c
The event Multi-Processing Module (MPM) is designed to allow more requests to be served simultaneously by passing off some processing work to supporting threads, freeing up the main threads to work on new requests. It is based on the worker MPM, which implements a hybrid multi-process multi-threaded server. Run-time configuration directives are identical to those provided by worker.
To use the event MPM, add --with-mpm=event to the configure script's arguments when building httpd.
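For example, a minimal build invocation might look like this (all other configure options omitted):

    ./configure --with-mpm=event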
This MPM tries to fix the 'keep alive problem' in HTTP. After a client
completes the first request, the client can keep the connection
open, and send further requests using the same socket. This can
save significant overhead in creating TCP connections. However,
Apache HTTP Server traditionally keeps an entire child process/thread waiting
for data from the client, which brings its own disadvantages. To
solve this problem, this MPM uses a dedicated thread to handle
the listening sockets, all sockets that are in a Keep Alive state,
and sockets where the handler and protocol filters have done their work
and the only remaining thing to do is send the data to the client. The
status page of mod_status
shows how many connections are
in the mentioned states.
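As a quick way to see these counters, the following is a minimal sketch for enabling the status page, assuming mod_status is loaded (the URL path is illustrative):

    <Location "/server-status">
        SetHandler server-status
    </Location>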
The improved connection handling does not yet work for certain connection filters, in particular SSL. For SSL connections, this MPM will fall back to the behaviour of the worker MPM and reserve one worker thread per connection.
The MPM assumes that the underlying apr_pollset
implementation is reasonably threadsafe. This enables the MPM to
avoid excessive high level locking, or having to wake up the listener
thread in order to send it a keep-alive socket. This is currently
only compatible with KQueue and EPoll.
This MPM depends on APR's atomic compare-and-swap operations for thread synchronization. If you are compiling for an x86 target and you don't need to support 386s, or you are compiling for a SPARC and you don't need to run on pre-UltraSPARC chips, add --enable-nonportable-atomics=yes to the configure script's arguments. This will cause APR to implement atomic operations using efficient opcodes not available in older CPUs.
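For example, a build that selects this MPM and also enables the non-portable atomics could be configured as follows (assuming a suitably recent CPU; other options omitted):

    ./configure --with-mpm=event --enable-nonportable-atomics=yes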
This MPM does not perform well on older platforms which lack good threading, but the requirement for EPoll or KQueue makes this moot.
On FreeBSD, older releases may require running this MPM with libkse (see man libmap.conf). On Linux, make sure that glibc has been compiled with support for EPoll.

AsyncRequestWorkerFactor Directive

Description: Limit concurrent connections per process
Syntax: AsyncRequestWorkerFactor factor
Default: 2
Context: server config
Status: MPM
Module: event
Compatibility: Available in version 2.3.13 and later
The event MPM handles some connections in an asynchronous way, where request worker threads are only allocated for short periods of time as needed, and other (mostly SSL) connections with one request worker thread reserved per connection. This can lead to situations where all workers are tied up and no worker thread is available to handle new work on established async connections.
To mitigate this problem, the event MPM does two things: Firstly, it limits the number of connections accepted per process, depending on the number of idle request workers. Secondly, if all workers are busy, it will close connections in keep-alive state even if the keep-alive timeout has not expired. This allows the respective clients to reconnect to a different process which may still have worker threads available.
This directive can be used to fine-tune the per-process connection limit. A process will only accept new connections if the current number of connections (not counting connections in the "closing" state) is lower than:
ThreadsPerChild + (AsyncRequestWorkerFactor * number of idle workers)
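For example, assuming ThreadsPerChild is 25, AsyncRequestWorkerFactor is left at its default of 2, and 10 workers are currently idle, the process accepts new connections only while it has fewer than 25 + (2 * 10) = 45 connections open (not counting those in the "closing" state).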
This means the absolute maximum number of concurrent connections is:

(AsyncRequestWorkerFactor + 1) * MaxRequestWorkers
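Continuing the illustrative numbers above, with MaxRequestWorkers set to 400 and the default AsyncRequestWorkerFactor of 2, the server as a whole would top out at (2 + 1) * 400 = 1200 concurrent connections.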
MaxRequestWorkers was called MaxClients prior to version 2.3.13. The above value shows that the old name did not accurately describe its meaning for the event MPM.
AsyncRequestWorkerFactor can take non-integer arguments, e.g. "1.5".
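A minimal configuration sketch putting these directives together (the numeric values are illustrative, not recommendations):

    <IfModule mpm_event_module>
        ThreadsPerChild            25
        MaxRequestWorkers         400
        AsyncRequestWorkerFactor  1.5
    </IfModule>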