Set max_accept on the gevent worker class to 1 when workers > 1
We've had really terrible tail latencies with gevent and gunicorn under load.
Inspecting our services with strace, we see the following:
```
23:11:01.651529 accept4(5, {sa_family=AF_UNIX}, [110->2], SOCK_CLOEXEC) = 223 <0.000015>
..{18 successful calls to accept4}...
23:11:01.652590 accept4(5, {sa_family=AF_UNIX}, [110->2], SOCK_CLOEXEC) = 249 <0.000010>
23:11:01.652647 accept4(5, 0x7ffcd46c09d0, [110], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable) <0.000012>
23:11:01.657622 getsockname(5, {sa_family=AF_UNIX, sun_path="/run/gunicorn/gunicorn.sock"}, [110->30]) = 0 <0.000009>
23:11:01.657682 recvfrom(223, "XXX"..., 8192, 0, NULL, NULL) = 511 <0.000011>
..{16 calls to recvfrom}...
23:11:01.740726 recvfrom(243, "XXX"..., 8192, 0, NULL, NULL) = 511 <0.000012>
23:11:01.746074 getsockname(5, {sa_family=AF_UNIX, sun_path="/run/gunicorn/gunicorn.sock"}, [110->30]) = 0 <0.000013>
23:11:01.746153 recvfrom(246, "XXX"..., 8192, 0, NULL, NULL) = 511 <0.000014>
23:11:01.751540 getsockname(5, {sa_family=AF_UNIX, sun_path="/run/gunicorn/gunicorn.sock"}, [110->30]) = 0 <0.000010>
23:11:01.751599 recvfrom(249, "XXX"..., 8192, 0, NULL, NULL) = 511 <0.000013>
```
Notice we see a flurry of 20 `accept4`s followed by 20 calls to `recvfrom`. Each call to `recvfrom` happens ~5ms after the previous one,
so the last `recvfrom` is made ~100ms after the `accept4` for that fd.
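For reference, a trace in this shape can be captured by attaching strace to a worker process with absolute timestamps and per-syscall durations, e.g. something like:
```
$ strace -f -tt -T -e trace=network -p <worker-pid>
```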
gevent suggests setting `max_accept` to a lower value when there are multiple worker processes sharing the same listening socket: 785b7b5546/src/gevent/baseserver.py (L89-L102)
gevent sets `max_accept` to `1` when `wsgi.multiprocess` is True: 9d27d269ed/src/gevent/pywsgi.py (L1470-L1472)
gunicorn does in fact set `wsgi.multiprocess` when the number of workers is > 1: e4e20f273e/gunicorn/http/wsgi.py (L73)
and this gets passed to `gevent.pywsgi.WSGIServer`: e4e20f273e/gunicorn/workers/ggevent.py (L67-L75)
However, when `worker-class` is `gevent`, we directly create a `gevent.server.StreamServer` and never lower `max_accept`: e4e20f273e/gunicorn/workers/ggevent.py (L77-L78)
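A minimal sketch of the sort of change, assuming the plain gevent worker keeps building its `StreamServer` the way it does today (the surrounding names come from `GeventWorker.run()` and may differ):
```python
# gunicorn/workers/ggevent.py, inside GeventWorker.run() -- sketch only.
# partial, StreamServer, pool, s and ssl_args already exist in that scope.
hfun = partial(self.handle, s)
server = StreamServer(s, handle=hfun, spawn=pool, **ssl_args)
if self.cfg.workers > 1:
    # Mirror what gevent.pywsgi does when wsgi.multiprocess is set:
    # accept a single connection per loop iteration so the remaining
    # backlog can be drained by the other workers instead of queueing
    # behind this worker's greenlets.
    server.max_accept = 1
```
The `WSGIServer` path does not need this, since gevent already caps `max_accept` there via `wsgi.multiprocess` as noted above.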
Fixing this dropped the p50 response time on an especially problematic benchmark from 250ms to 115ms.
Gunicorn
--------
.. image:: https://img.shields.io/pypi/v/gunicorn.svg?style=flat
:alt: PyPI version
:target: https://pypi.python.org/pypi/gunicorn
.. image:: https://img.shields.io/pypi/pyversions/gunicorn.svg
:alt: Supported Python versions
:target: https://pypi.python.org/pypi/gunicorn
.. image:: https://travis-ci.org/benoitc/gunicorn.svg?branch=master
:alt: Build Status
:target: https://travis-ci.org/benoitc/gunicorn
Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX. It's a pre-fork
worker model ported from Ruby's Unicorn_ project. The Gunicorn server is broadly
compatible with various web frameworks, simply implemented, light on server
resource usage, and fairly speedy.
Feel free to join us in `#gunicorn`_ on Freenode_.
Documentation
-------------
The documentation is hosted at http://docs.gunicorn.org.
Installation
------------
Gunicorn requires **Python 3.x >= 3.5**.
Install from PyPI::

    $ pip install gunicorn
Usage
-----
Basic usage::

    $ gunicorn [OPTIONS] APP_MODULE
Where ``APP_MODULE`` is of the pattern ``$(MODULE_NAME):$(VARIABLE_NAME)``. The
module name can be a full dotted path. The variable name refers to a WSGI
callable that should be found in the specified module.
Example with test app::

    $ cd examples
    $ gunicorn --workers=2 test:app
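The callable itself is a plain WSGI application. A minimal ``test.py`` providing one might look like this (an illustrative sketch, not necessarily the file shipped in ``examples/``)::

    def app(environ, start_response):
        """Smallest possible WSGI application."""
        data = b"Hello, World!\n"
        start_response("200 OK", [
            ("Content-Type", "text/plain"),
            ("Content-Length", str(len(data))),
        ])
        return [data]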
Contributing
------------
See `our complete contributor's guide <CONTRIBUTING.md>`_ for more details.
License
-------
Gunicorn is released under the MIT License. See the LICENSE_ file for more
details.
.. _Unicorn: https://bogomips.org/unicorn/
.. _`#gunicorn`: https://webchat.freenode.net/?channels=gunicorn
.. _Freenode: https://freenode.net/
.. _LICENSE: https://github.com/benoitc/gunicorn/blob/master/LICENSE