mirror of https://github.com/frappe/gunicorn.git, synced 2026-01-14 11:09:11 +08:00
Minor fixes in docs and doc's code style (#1361)
commit c54426c1e4
parent 0be7996885
@@ -20,11 +20,12 @@ Gunicorn users.
 The archive for this list can also be `browsed online
 <http://lists.gunicorn.org/user>`_ .
 
-Irc
+IRC
 ===
 
 The Gunicorn channel is on the `Freenode <http://freenode.net/>`_ IRC
-network. You can chat with other on `#gunicorn channel <http://webchat.freenode.net/?channels=gunicorn>`_.
+network. You can chat with other on `#gunicorn channel
+<http://webchat.freenode.net/?channels=gunicorn>`_.
 
 Issue Tracking
 ==============
@@ -9,8 +9,8 @@ Custom Application
 Sometimes, you want to integrate Gunicorn with your WSGI application. In this
 case, you can inherit from :class:`gunicorn.app.base.BaseApplication`.
 
-Here is a small example where we create a very small WSGI app and load it with a
-custom Application:
+Here is a small example where we create a very small WSGI app and load it with
+a custom Application:
 
 .. literalinclude:: ../../examples/standalone_app.py
    :lines: 11-60
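The ``standalone_app.py`` example this hunk refers to pairs a very small WSGI app with a ``BaseApplication`` subclass. The WSGI half of that idea can be sketched and exercised without a server at all; the names and response body below are illustrative, not the example file's exact contents:

```python
# A very small WSGI application: a callable taking (environ, start_response).
def app(environ, start_response):
    data = b"Hello, World!\n"
    status = "200 OK"
    response_headers = [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(data))),
    ]
    start_response(status, response_headers)
    return iter([data])

# Exercise the callable directly with a stub start_response, no server needed.
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = b"".join(app({}, start_response))
print(captured["status"], body)
```

A custom Application would then return this callable from its ``load()`` method, per the ``BaseApplication`` docs.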
@@ -105,11 +105,11 @@ Monitoring
 ==========
 
 .. note::
    Make sure that when using either of these service monitors you do not
    enable the Gunicorn's daemon mode. These monitors expect that the process
-   they launch will be the process they need to monitor. Daemonizing
-   will fork-exec which creates an unmonitored process and generally just
+   they launch will be the process they need to monitor. Daemonizing will
+   fork-exec which creates an unmonitored process and generally just
    confuses the monitor services.
 
 Gaffer
 ------
@@ -135,7 +135,7 @@ Create a ``Procfile`` in your project::
 
 You can launch any other applications that should be launched at the same time.
 
-Then you can start your Gunicorn application using Gaffer_.::
+Then you can start your Gunicorn application using Gaffer_::
 
     gaffer start
 
@@ -188,8 +188,10 @@ Another useful tool to monitor and control Gunicorn is Supervisor_. A
 
 Upstart
 -------
 
 Using Gunicorn with upstart is simple. In this example we will run the app
-"myapp" from a virtualenv. All errors will go to /var/log/upstart/myapp.log.
+"myapp" from a virtualenv. All errors will go to
+``/var/log/upstart/myapp.log``.
 
 **/etc/init/myapp.conf**::
@@ -320,9 +322,14 @@ utility::
 
     kill -USR1 $(cat /var/run/gunicorn.pid)
 
-.. note:: overriding the LOGGING dictionary requires to set `disable_existing_loggers: False`` to not interfere with the Gunicorn logging.
+.. note::
+   Overriding the ``LOGGING`` dictionary requires to set
+   ``disable_existing_loggers: False`` to not interfere with the Gunicorn
+   logging.
 
-.. warning:: Gunicorn error log is here to log errors from Gunicorn, not from another application.
+.. warning::
+   Gunicorn error log is here to log errors from Gunicorn, not from another
+   application.
 
 .. _Nginx: http://www.nginx.org
 .. _Hey: https://github.com/rakyll/hey
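The point of the rewritten note can be shown concretely. A minimal sketch of a ``LOGGING`` dictionary with ``disable_existing_loggers: False``, so that applying it does not silence loggers created before it runs (such as Gunicorn's own); the handler layout here is illustrative:

```python
import logging
import logging.config

LOGGING = {
    "version": 1,
    # False keeps loggers created before dictConfig() runs -- such as
    # Gunicorn's "gunicorn.error" and "gunicorn.access" -- enabled.
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}

# A stand-in for a logger that Gunicorn would have created earlier.
pre_existing = logging.getLogger("gunicorn.error")

logging.config.dictConfig(LOGGING)

# With disable_existing_loggers: False the earlier logger is not disabled.
print(pre_existing.disabled)
```

With the default ``disable_existing_loggers: True``, any logger not named in the dictionary would have been marked disabled, which is exactly the interference the note warns about.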
@@ -67,17 +67,18 @@ Choosing a Worker Type
 
 The default synchronous workers assume that your application is resource-bound
 in terms of CPU and network bandwidth. Generally this means that your
-application shouldn't do anything that takes an undefined amount of time. An example
-of something that takes an undefined amount of time is a request to the internet.
-At some point the external network will fail in such a way that clients will pile up on your
-servers. So, in this sense, any web application which makes outgoing requests to
-APIs will benefit from an asynchronous worker.
+application shouldn't do anything that takes an undefined amount of time. An
+example of something that takes an undefined amount of time is a request to the
+internet. At some point the external network will fail in such a way that
+clients will pile up on your servers. So, in this sense, any web application
+which makes outgoing requests to APIs will benefit from an asynchronous worker.
 
-This resource bound assumption is why we require a buffering proxy in front of a
-default configuration Gunicorn. If you exposed synchronous workers to the
+This resource bound assumption is why we require a buffering proxy in front of
+a default configuration Gunicorn. If you exposed synchronous workers to the
 internet, a DOS attack would be trivial by creating a load that trickles data to
 the servers. For the curious, Hey_ is an example of this type of load.
 
 
 Some examples of behavior requiring asynchronous workers:
 
 * Applications making long blocking calls (Ie, external web services)
@@ -105,8 +106,8 @@ optimal number of workers. Our recommendation is to start with the above guess
 and tune using TTIN and TTOU signals while the application is under load.
 
 Always remember, there is such a thing as too many workers. After a point your
-worker processes will start thrashing system resources decreasing the throughput
-of the entire system.
+worker processes will start thrashing system resources decreasing the
+throughput of the entire system.
 
 How Many Threads?
 ===================
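The "above guess" this hunk refers to is the ``(2 x $num_cores) + 1`` starting point from the same design page. That starting point is easy to compute before tuning with TTIN and TTOU:

```python
import multiprocessing

def starting_workers(num_cores=None):
    """The (2 x $num_cores) + 1 starting guess from the design docs."""
    if num_cores is None:
        num_cores = multiprocessing.cpu_count()
    return num_cores * 2 + 1

# For example, a 4-core machine starts at 9 workers, then tunes under load.
print(starting_workers(4))
```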
@@ -119,13 +120,14 @@ system, using multiple threads, multiple worker processes, or some mixture, may
 yield the best results. For example, CPython may not perform as well as Jython
 when using threads, as threading is implemented differently by each. Using
 threads instead of processes is a good way to reduce the memory footprint of
-Gunicorn, while still allowing for application upgrades using the reload signal,
-as the application code will be shared among workers but loaded only in the
-worker processes (unlike when using the preload setting, which loads the code in
-the master process).
+Gunicorn, while still allowing for application upgrades using the reload
+signal, as the application code will be shared among workers but loaded only in
+the worker processes (unlike when using the preload setting, which loads the
+code in the master process).
 
-.. note:: Under Python 2.x, you need to install the 'futures' package to use
-   this feature.
+.. note::
+   Under Python 2.x, you need to install the 'futures' package to use this
+   feature.
 
 .. _Greenlets: https://github.com/python-greenlet/greenlet
 .. _Eventlet: http://eventlet.net
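The 'futures' backport mentioned in the note exists because the threaded worker is built on ``concurrent.futures``, which is in the standard library from Python 3.2. A self-contained sketch of the primitive involved (illustrating the pool model, not Gunicorn's internal use of it):

```python
from concurrent.futures import ThreadPoolExecutor

# The threaded worker model in miniature: one process, a pool of threads
# that all share the application code loaded into that process.
def handle(request_id):
    return "handled %d" % request_id

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() preserves input order even though work runs concurrently.
    results = list(pool.map(handle, range(3)))

print(results)
```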
@@ -67,7 +67,7 @@ Read the :ref:`design` page for help on the various worker types.
 What types of workers are there?
 --------------------------------
 
-Check out the configuration docs for worker_class_
+Check out the configuration docs for worker_class_.
 
 How can I figure out the best number of worker processes?
 ---------------------------------------------------------
@@ -94,11 +94,11 @@ Does Gunicorn suffer from the thundering herd problem?
 The thundering herd problem occurs when many sleeping request handlers, which
 may be either threads or processes, wake up at the same time to handle a new
 request. Since only one handler will receive the request, the others will have
-been awakened for no reason, wasting CPU cycles. At this time, Gunicorn does not
-implement any IPC solution for coordinating between worker processes. You may
-experience high load due to this problem when using many workers or threads.
-However `a work has been started <https://github.com/benoitc/gunicorn/issues/792>`_
-to remove this issue.
+been awakened for no reason, wasting CPU cycles. At this time, Gunicorn does
+not implement any IPC solution for coordinating between worker processes. You
+may experience high load due to this problem when using many workers or
+threads. However `a work has been started
+<https://github.com/benoitc/gunicorn/issues/792>`_ to remove this issue.
 
 .. _worker_class: settings.html#worker-class
 .. _`number of workers`: design.html#how-many-workers
@@ -113,8 +113,8 @@ In version R20, Gunicorn logs to the console by default again.
 Kernel Parameters
 =================
 
-When dealing with large numbers of concurrent connections there are a handful of
-kernel parameters that you might need to adjust. Generally these should only
+When dealing with large numbers of concurrent connections there are a handful
+of kernel parameters that you might need to adjust. Generally these should only
 affect sites with a very large concurrent load. These parameters are not
 specific to Gunicorn, they would apply to any sort of network server you may be
 running.
@@ -137,8 +137,8 @@ How can I increase the maximum socket backlog?
 ----------------------------------------------
 
 Listening sockets have an associated queue of incoming connections that are
-waiting to be accepted. If you happen to have a stampede of clients that fill up
-this queue new connections will eventually start getting dropped.
+waiting to be accepted. If you happen to have a stampede of clients that fill
+up this queue new connections will eventually start getting dropped.
 
 ::
 
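The backlog the paragraph describes is the queue-length argument to ``listen()``. A minimal sketch of the mechanism, assuming a free ephemeral port on the loopback interface (the kernel silently caps the requested value at its own limit, which is what the adjustment after the ``::`` marker is about):

```python
import socket

# A listening socket with an explicit accept-queue length. Connections
# beyond this queue (as capped by the kernel) eventually get dropped.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(2048)             # Gunicorn's --backlog default is 2048

host, port = server.getsockname()
print(host, port)
server.close()
```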
@@ -68,8 +68,8 @@ upgrading to a new version or adding/removing server modules), you can
 do it without any service downtime - no incoming requests will be
 lost. Preloaded applications will also be reloaded.
 
-First, replace the old binary with a new one, then send the **USR2** signal to the
-master process. It executes a new binary whose .pid file is
+First, replace the old binary with a new one, then send the **USR2** signal to
+the master process. It executes a new binary whose .pid file is
 postfixed with .2 (e.g. /var/run/gunicorn.pid.2),
 which in turn starts a new master process and the new worker processes::
 
@@ -89,14 +89,17 @@ incoming requests together. To phase the old instance out, you have to
 send the **WINCH** signal to the old master process, and its worker
 processes will start to gracefully shut down.
 
-At this point you can still revert to the old server because it hasn't closed its listen sockets yet, by following these steps:
+At this point you can still revert to the old server because it hasn't closed
+its listen sockets yet, by following these steps:
 
-- Send the HUP signal to the old master process - it will start the worker processes without reloading a configuration file
-- Send the TERM signal to the new master process to gracefully shut down its worker processes
+- Send the HUP signal to the old master process - it will start the worker
+  processes without reloading a configuration file
+- Send the TERM signal to the new master process to gracefully shut down its
+  worker processes
 - Send the QUIT signal to the new master process to force it quit
 
-If for some reason the new worker processes do not quit, send the KILL signal to
-them after the new master process quits, and everything is exactly as before
+If for some reason the new worker processes do not quit, send the KILL signal
+to them after the new master process quits, and everything is exactly as before
 the upgrade attempt.
 
 If an update is successful and you want to keep the new server, send
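The signal sequence described in this hunk (USR2, then WINCH, HUP, TERM or QUIT) always works the same way: read a master pid from its pid file and deliver a signal. A sketch of that helper; the function name is illustrative, and the demonstration uses signal number 0, which delivers nothing and only checks that the target process is signalable:

```python
import os
import tempfile

def signal_from_pidfile(pidfile, signum):
    """Read a master pid from ``pidfile`` and send it ``signum``
    (e.g. signal.SIGWINCH to phase out the old master)."""
    with open(pidfile) as f:
        pid = int(f.read().strip())
    os.kill(pid, signum)
    return pid

# Demonstrate against our own pid with a throwaway pid file.
tmp = tempfile.NamedTemporaryFile("w", suffix=".pid", delete=False)
tmp.write(str(os.getpid()))
tmp.close()

pid = signal_from_pidfile(tmp.name, 0)
os.unlink(tmp.name)
print(pid == os.getpid())
```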
@@ -647,7 +647,7 @@ class WorkerThreads(Setting):
         You'll want to vary this a bit to find the best for your particular
         application's work load.
 
-        If it is not defined, the default is 1.
+        If it is not defined, the default is ``1``.
         """
 
 