mirror of https://github.com/frappe/gunicorn.git
synced 2026-01-14 11:09:11 +08:00
commit 0fbb94e8c6
Merge branch 'master' of github.com:benoitc/gunicorn
@@ -83,7 +83,7 @@
     (tutorial) $ cat myapp.py

     def app(environ, start_response):
-        data = "Hello, World!\n"
+        data = b"Hello, World!\n"
         start_response("200 OK", [
             ("Content-Type", "text/plain"),
             ("Content-Length", str(len(data)))
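This change matters because PEP 3333 requires the iterable returned by a WSGI app to yield byte strings; under Python 3, a native `str` body here would break Gunicorn's workers. A self-contained sketch of the corrected app (runnable with `gunicorn myapp:app`; the `iter([...])` return is our own choice for illustration):

```python
def app(environ, start_response):
    # PEP 3333: the iterable handed back to the server must yield
    # bytes, so the body is a byte string (b"..."), not a native str.
    data = b"Hello, World!\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(data))),
    ])
    return iter([data])
```

The `b"..."` literal is accepted by both Python 2 and Python 3, which is presumably why the docs chose it over an explicit `.encode()`.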
@@ -11,7 +11,7 @@ Although there are many HTTP proxies available, we strongly advise that you
 use Nginx_. If you choose another proxy server you need to make sure that it
 buffers slow clients when you use default Gunicorn workers. Without this
 buffering Gunicorn will be easily susceptible to denial-of-service attacks.
-You can use slowloris_ to check if your proxy is behaving properly.
+You can use Boom_ to check if your proxy is behaving properly.

 An `example configuration`_ file for fast clients with Nginx_:
@@ -266,7 +266,7 @@ utility::
     kill -USR1 $(cat /var/run/gunicorn.pid)

 .. _Nginx: http://www.nginx.org
-.. _slowloris: http://ha.ckers.org/slowloris/
+.. _Boom: https://github.com/rakyll/boom
 .. _`example configuration`: http://github.com/benoitc/gunicorn/blob/master/examples/nginx.conf
 .. _runit: http://smarden.org/runit/
 .. _`example service`: http://github.com/benoitc/gunicorn/blob/master/examples/gunicorn_rc
@@ -75,7 +75,7 @@ servers.
 This resource bound assumption is why we require a buffering proxy in front of a
 default configuration Gunicorn. If you exposed synchronous workers to the
 internet, a DOS attack would be trivial by creating a load that trickles data to
-the servers. For the curious, Slowloris_ is an example of this type of load.
+the servers. For the curious, Boom_ is an example of this type of load.

 Some examples of behavior requiring asynchronous workers:
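The attack described in this hunk is easy to picture: a client that completes its request one byte at a time ties up a synchronous worker for the whole transfer. A minimal sketch of such a trickling client (the socket object is passed in so the shape is easy to see; this is an illustration of the load, not Slowloris' or Boom's actual code):

```python
import time

def trickle_request(sock, host="127.0.0.1", delay=1.0):
    """Send a complete HTTP request over `sock` one byte at a time.

    Against a bare synchronous worker this occupies the worker for
    roughly len(request) * delay seconds; a buffering proxy instead
    absorbs the slow upload and hands Gunicorn the request at once.
    """
    request = b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n"
    for i in range(len(request)):
        sock.sendall(request[i:i + 1])  # one byte per iteration
        time.sleep(delay)
    return len(request)
```

A few hundred such connections would leave a default Gunicorn with no free workers, which is why the docs insist on a buffering proxy such as Nginx in front of the synchronous workers.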
@@ -126,5 +126,5 @@ the master process).
 .. _Greenlets: https://github.com/python-greenlet/greenlet
 .. _Eventlet: http://eventlet.net
 .. _Gevent: http://gevent.org
-.. _Slowloris: http://ha.ckers.org/slowloris/
+.. _Boom: http://ha.ckers.org/slowloris/
 .. _aiohttp: https://github.com/KeepSafe/aiohttp
@@ -27,8 +27,12 @@ You can gracefully reload by sending HUP signal to gunicorn::
 How might I test a proxy configuration?
 ---------------------------------------

-The Slowloris_ script is a great way to test that your proxy is correctly
-buffering responses for the synchronous workers.
+The Boom_ program is a great way to test that your proxy is correctly
+buffering responses for the synchronous workers::
+
+    $ boom -n 10000 -c 100 http://127.0.0.1:5000/
+
+This runs a benchmark of 10000 requests with 100 running concurrently.

 How can I name processes?
 -------------------------
@@ -47,7 +51,7 @@ HTTP/1.0 with its upstream servers. If you want to deploy Gunicorn to
 handle unbuffered requests (ie, serving requests directly from the internet)
 you should use one of the async workers.

-.. _slowloris: http://ha.ckers.org/slowloris/
+.. _Boom: https://github.com/rakyll/boom
 .. _setproctitle: http://pypi.python.org/pypi/setproctitle
 .. _proc_name: settings.html#proc-name
@@ -379,7 +379,7 @@ class Response(object):
             sockno = self.sock.fileno()
             sent = 0

-            for m in range(0, nbytes, BLKSIZE):
+            while sent != nbytes:
                 count = min(nbytes - sent, BLKSIZE)
                 sent += sendfile(sockno, fileno, offset + sent, count)
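The fix in this hunk replaces a loop over fixed block offsets with one driven by the number of bytes actually transmitted: sendfile(2) may send fewer bytes than requested, so only the running `sent` total can say when the file is done. The retry logic can be exercised without a socket by standing in a fake sendfile (the function name and parameters below are illustrative, not gunicorn's own API):

```python
def send_file(sendfile, sockno, fileno, offset, nbytes, blksize=0x3FFFFFFF):
    """Call `sendfile` until all `nbytes` have been sent, advancing the
    file offset by however many bytes each call actually transmitted."""
    sent = 0
    while sent != nbytes:
        count = min(nbytes - sent, blksize)  # never ask past the end
        sent += sendfile(sockno, fileno, offset + sent, count)
    return sent
```

The old `for m in range(0, nbytes, BLKSIZE)` assumed every call sent a full block; a single short write would then leave the tail of the file unsent, which the `while sent != nbytes` form cannot do.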