Mirror of https://github.com/frappe/gunicorn.git (synced 2026-01-14 11:09:11 +08:00)

Minor fixes in docs and doc's code style (#1361)

This commit is contained in:
parent 0be7996885
commit c54426c1e4
@@ -20,11 +20,12 @@ Gunicorn users.
The archive for this list can also be `browsed online
<http://lists.gunicorn.org/user>`_ .

Irc
IRC
===

The Gunicorn channel is on the `Freenode <http://freenode.net/>`_ IRC
network. You can chat with other on `#gunicorn channel <http://webchat.freenode.net/?channels=gunicorn>`_.
network. You can chat with other on `#gunicorn channel
<http://webchat.freenode.net/?channels=gunicorn>`_.

Issue Tracking
==============

@@ -9,8 +9,8 @@ Custom Application
Sometimes, you want to integrate Gunicorn with your WSGI application. In this
case, you can inherit from :class:`gunicorn.app.base.BaseApplication`.

Here is a small example where we create a very small WSGI app and load it with a
custom Application:
Here is a small example where we create a very small WSGI app and load it with
a custom Application:

.. literalinclude:: ../../examples/standalone_app.py
:lines: 11-60
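The ``standalone_app.py`` example itself is not part of this hunk. As a rough sketch of the pattern the paragraph describes (subclassing :class:`gunicorn.app.base.BaseApplication` and implementing ``load_config`` and ``load``), something along these lines would work; the WSGI handler and the ``StandaloneApplication`` name are illustrative rather than the exact contents of the example file::

    import gunicorn.app.base


    def handler_app(environ, start_response):
        # Trivial WSGI application, only here to have something to serve.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'Hello, world!']


    class StandaloneApplication(gunicorn.app.base.BaseApplication):
        def __init__(self, app, options=None):
            self.options = options or {}
            self.application = app
            super(StandaloneApplication, self).__init__()

        def load_config(self):
            # Copy known settings from the options dict into Gunicorn's config.
            for key, value in self.options.items():
                if key in self.cfg.settings and value is not None:
                    self.cfg.set(key.lower(), value)

        def load(self):
            return self.application


    if __name__ == '__main__':
        StandaloneApplication(handler_app, {'bind': '127.0.0.1:8080', 'workers': 2}).run()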
@@ -105,11 +105,11 @@ Monitoring
==========

.. note::
Make sure that when using either of these service monitors you do not
enable the Gunicorn's daemon mode. These monitors expect that the process
they launch will be the process they need to monitor. Daemonizing
will fork-exec which creates an unmonitored process and generally just
confuses the monitor services.
Make sure that when using either of these service monitors you do not
enable the Gunicorn's daemon mode. These monitors expect that the process
they launch will be the process they need to monitor. Daemonizing will
fork-exec which creates an unmonitored process and generally just
confuses the monitor services.

Gaffer
------

@@ -135,7 +135,7 @@ Create a ``Procfile`` in your project::

You can launch any other applications that should be launched at the same time.

Then you can start your Gunicorn application using Gaffer_.::
Then you can start your Gunicorn application using Gaffer_::

gaffer start

@@ -188,8 +188,10 @@ Another useful tool to monitor and control Gunicorn is Supervisor_. A

Upstart
-------

Using Gunicorn with upstart is simple. In this example we will run the app
"myapp" from a virtualenv. All errors will go to /var/log/upstart/myapp.log.
"myapp" from a virtualenv. All errors will go to
``/var/log/upstart/myapp.log``.

**/etc/init/myapp.conf**::

@@ -320,9 +322,14 @@ utility::

kill -USR1 $(cat /var/run/gunicorn.pid)

.. note:: overriding the LOGGING dictionary requires to set `disable_existing_loggers: False`` to not interfere with the Gunicorn logging.
.. note::
Overriding the ``LOGGING`` dictionary requires to set
``disable_existing_loggers: False`` to not interfere with the Gunicorn
logging.

.. warning:: Gunicorn error log is here to log errors from Gunicorn, not from another application.
.. warning::
Gunicorn error log is here to log errors from Gunicorn, not from another
application.

.. _Nginx: http://www.nginx.org
.. _Hey: https://github.com/rakyll/hey
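The ``LOGGING`` dictionary mentioned in the corrected note is the standard ``logging.config.dictConfig``-style configuration (for instance a Django ``LOGGING`` setting). A minimal, hypothetical sketch of such a dictionary, with ``disable_existing_loggers`` left at ``False`` so that Gunicorn's own ``gunicorn.error`` and ``gunicorn.access`` loggers are not silenced, could look like this::

    LOGGING = {
        'version': 1,
        # Per the note above: leave existing loggers (including Gunicorn's)
        # enabled instead of silencing them when this config is applied.
        'disable_existing_loggers': False,
        'handlers': {
            'console': {'class': 'logging.StreamHandler'},
        },
        'root': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    }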
@@ -67,17 +67,18 @@ Choosing a Worker Type

The default synchronous workers assume that your application is resource-bound
in terms of CPU and network bandwidth. Generally this means that your
application shouldn't do anything that takes an undefined amount of time. An example
of something that takes an undefined amount of time is a request to the internet.
At some point the external network will fail in such a way that clients will pile up on your
servers. So, in this sense, any web application which makes outgoing requests to
APIs will benefit from an asynchronous worker.
application shouldn't do anything that takes an undefined amount of time. An
example of something that takes an undefined amount of time is a request to the
internet. At some point the external network will fail in such a way that
clients will pile up on your servers. So, in this sense, any web application
which makes outgoing requests to APIs will benefit from an asynchronous worker.

This resource bound assumption is why we require a buffering proxy in front of a
default configuration Gunicorn. If you exposed synchronous workers to the
This resource bound assumption is why we require a buffering proxy in front of
a default configuration Gunicorn. If you exposed synchronous workers to the
internet, a DOS attack would be trivial by creating a load that trickles data to
the servers. For the curious, Hey_ is an example of this type of load.


Some examples of behavior requiring asynchronous workers:

* Applications making long blocking calls (Ie, external web services)
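In practice, opting into an asynchronous worker of the kind discussed above is a one-line configuration change through the ``worker_class`` setting. A hypothetical configuration-file sketch; the file name and the choice of ``gevent`` (which needs the gevent package installed) are only for illustration::

    # gunicorn_conf.py - illustrative settings for an async worker
    worker_class = 'gevent'        # or 'eventlet'; both require the extra package
    worker_connections = 1000      # simultaneous clients handled per async worker

It would then be picked up with something like ``gunicorn -c gunicorn_conf.py myapp:app``.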
@@ -105,8 +106,8 @@ optimal number of workers. Our recommendation is to start with the above guess
and tune using TTIN and TTOU signals while the application is under load.

Always remember, there is such a thing as too many workers. After a point your
worker processes will start thrashing system resources decreasing the throughput
of the entire system.
worker processes will start thrashing system resources decreasing the
throughput of the entire system.

How Many Threads?
===================

@@ -119,13 +120,14 @@ system, using multiple threads, multiple worker processes, or some mixture, may
yield the best results. For example, CPython may not perform as well as Jython
when using threads, as threading is implemented differently by each. Using
threads instead of processes is a good way to reduce the memory footprint of
Gunicorn, while still allowing for application upgrades using the reload signal,
as the application code will be shared among workers but loaded only in the
worker processes (unlike when using the preload setting, which loads the code in
the master process).
Gunicorn, while still allowing for application upgrades using the reload
signal, as the application code will be shared among workers but loaded only in
the worker processes (unlike when using the preload setting, which loads the
code in the master process).

.. note:: Under Python 2.x, you need to install the 'futures' package to use
this feature.
.. note::
Under Python 2.x, you need to install the 'futures' package to use this
feature.

.. _Greenlets: https://github.com/python-greenlet/greenlet
.. _Eventlet: http://eventlet.net
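As a concrete starting point tying the two sections above together (worker count tuned with TTIN/TTOU, plus threads per worker), a hypothetical configuration sketch; the numbers are only a common starting guess, not a recommendation for any specific workload::

    import multiprocessing

    # A common starting point for the worker count; tune it at runtime by
    # sending TTIN/TTOU to the master while the application is under load.
    workers = multiprocessing.cpu_count() * 2 + 1

    # Threads per worker; on Python 2.x this needs the 'futures' backport,
    # as the note above says.
    threads = 2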
@@ -67,7 +67,7 @@ Read the :ref:`design` page for help on the various worker types.
What types of workers are there?
--------------------------------

Check out the configuration docs for worker_class_
Check out the configuration docs for worker_class_.

How can I figure out the best number of worker processes?
---------------------------------------------------------

@@ -94,11 +94,11 @@ Does Gunicorn suffer from the thundering herd problem?
The thundering herd problem occurs when many sleeping request handlers, which
may be either threads or processes, wake up at the same time to handle a new
request. Since only one handler will receive the request, the others will have
been awakened for no reason, wasting CPU cycles. At this time, Gunicorn does not
implement any IPC solution for coordinating between worker processes. You may
experience high load due to this problem when using many workers or threads.
However `a work has been started <https://github.com/benoitc/gunicorn/issues/792>`_
to remove this issue.
been awakened for no reason, wasting CPU cycles. At this time, Gunicorn does
not implement any IPC solution for coordinating between worker processes. You
may experience high load due to this problem when using many workers or
threads. However `a work has been started
<https://github.com/benoitc/gunicorn/issues/792>`_ to remove this issue.

.. _worker_class: settings.html#worker-class
.. _`number of workers`: design.html#how-many-workers

@@ -113,8 +113,8 @@ In version R20, Gunicorn logs to the console by default again.
Kernel Parameters
=================

When dealing with large numbers of concurrent connections there are a handful of
kernel parameters that you might need to adjust. Generally these should only
When dealing with large numbers of concurrent connections there are a handful
of kernel parameters that you might need to adjust. Generally these should only
affect sites with a very large concurrent load. These parameters are not
specific to Gunicorn, they would apply to any sort of network server you may be
running.

@@ -137,8 +137,8 @@ How can I increase the maximum socket backlog?
----------------------------------------------

Listening sockets have an associated queue of incoming connections that are
waiting to be accepted. If you happen to have a stampede of clients that fill up
this queue new connections will eventually start getting dropped.
waiting to be accepted. If you happen to have a stampede of clients that fill
up this queue new connections will eventually start getting dropped.

::

@@ -68,8 +68,8 @@ upgrading to a new version or adding/removing server modules), you can
do it without any service downtime - no incoming requests will be
lost. Preloaded applications will also be reloaded.

First, replace the old binary with a new one, then send the **USR2** signal to the
master process. It executes a new binary whose .pid file is
First, replace the old binary with a new one, then send the **USR2** signal to
the master process. It executes a new binary whose .pid file is
postfixed with .2 (e.g. /var/run/gunicorn.pid.2),
which in turn starts a new master process and the new worker processes::

@@ -89,14 +89,17 @@ incoming requests together. To phase the old instance out, you have to
send the **WINCH** signal to the old master process, and its worker
processes will start to gracefully shut down.

At this point you can still revert to the old server because it hasn't closed its listen sockets yet, by following these steps:
At this point you can still revert to the old server because it hasn't closed
its listen sockets yet, by following these steps:

- Send the HUP signal to the old master process - it will start the worker processes without reloading a configuration file
- Send the TERM signal to the new master process to gracefully shut down its worker processes
- Send the HUP signal to the old master process - it will start the worker
processes without reloading a configuration file
- Send the TERM signal to the new master process to gracefully shut down its
worker processes
- Send the QUIT signal to the new master process to force it quit

If for some reason the new worker processes do not quit, send the KILL signal to
them after the new master process quits, and everything is exactly as before
If for some reason the new worker processes do not quit, send the KILL signal
to them after the new master process quits, and everything is exactly as before
the upgrade attempt.

If an update is successful and you want to keep the new server, send
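The signal sequence described in this section (USR2 to start the new master, WINCH to phase out the old workers, HUP/TERM to roll back) can also be driven from a small script. A rough, illustrative sketch, assuming the pid file locations mentioned above::

    import os
    import signal

    OLD_PIDFILE = '/var/run/gunicorn.pid'
    NEW_PIDFILE = '/var/run/gunicorn.pid.2'   # written by the new master after USR2

    def read_pid(path):
        with open(path) as fh:
            return int(fh.read().strip())

    def start_upgrade():
        # Ask the running master to exec the new binary and start a new master.
        os.kill(read_pid(OLD_PIDFILE), signal.SIGUSR2)

    def phase_out_old_workers():
        # Gracefully stop the old master's workers while keeping the old master
        # alive, so a rollback is still possible.
        os.kill(read_pid(OLD_PIDFILE), signal.SIGWINCH)

    def rollback():
        # Restart the old workers, then gracefully shut the new master down.
        os.kill(read_pid(OLD_PIDFILE), signal.SIGHUP)
        os.kill(read_pid(NEW_PIDFILE), signal.SIGTERM)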
@@ -647,7 +647,7 @@ class WorkerThreads(Setting):
You'll want to vary this a bit to find the best for your particular
application's work load.

If it is not defined, the default is 1.
If it is not defined, the default is ``1``.
"""