Merge branch 'master' of https://github.com/benoitc/gunicorn into statsd-logger

This commit is contained in:
Alexis Le-Quoc 2014-06-16 23:05:38 -04:00
commit 80b10182ee
31 changed files with 1188 additions and 111 deletions


@ -8,6 +8,7 @@ python:
install:
- "pip install -r requirements_dev.txt"
- "python setup.py install"
- if [[ $TRAVIS_PYTHON_VERSION == 3* ]]; then pip install aiohttp; fi
script: py.test -x tests/
branches:
only:


@ -3,3 +3,7 @@ Paul J. Davis <paul.joseph.davis@gmail.com>
Randall Leeds <randall.leeds@gmail.com>
Konstantin Kapustin <sirkonst@gmail.com>
Kenneth Reitz <me@kennethreitz.com>
Nikolay Kim <fafhrd91@gmail.com>
Andrew Svetlov <andrew.svetlov@gmail.com>
Stéphane Wirtel <stephane@wirtel.be>
Berker Peksağ <berker.peksag@gmail.com>


@ -33,7 +33,7 @@ Or from Pypi::
You may also want to install Eventlet_ or Gevent_ if you expect that your
application code may need to pause for extended periods of time during
request processing. Check out the FAQ_ for more information on when you'll
request processing. If you're on Python 3 you may also consider one of the Asyncio_ workers. Check out the FAQ_ for more information on when you'll
want to consider one of the alternate worker types.
To install eventlet::
@ -63,7 +63,7 @@ Commonly Used Arguments
to run. You'll definitely want to read the `production page`_ for the
implications of this parameter. You can set this to ``egg:gunicorn#$(NAME)``
where ``$(NAME)`` is one of ``sync``, ``eventlet``, ``gevent``, or
``tornado``. ``sync`` is the default.
``tornado``, ``gthread``, or ``gaiohttp``. ``sync`` is the default.
* ``-n APP_NAME, --name=APP_NAME`` - If setproctitle_ is installed you can
adjust the name of Gunicorn process as they appear in the process system
table (which affects tools like ``ps`` and ``top``).
@ -182,6 +182,7 @@ details.
.. _freenode: http://freenode.net
.. _Eventlet: http://eventlet.net
.. _Gevent: http://gevent.org
.. _Asyncio: https://docs.python.org/3/library/asyncio.html
.. _FAQ: http://docs.gunicorn.org/en/latest/faq.html
.. _libev: http://software.schmorp.de/pkg/libev.html
.. _libevent: http://monkey.org/~provos/libevent


@ -16,7 +16,7 @@
<div class="logo-div">
<div class="latest">
Latest version: <strong><a
href="http://docs.gunicorn.org/en/18.0/news.html#id1">18.0</a></strong>
href="http://docs.gunicorn.org/en/19.0/news.html#id1">19.0</a></strong>
</div>
<div class="logo"><img src="images/logo.jpg" ></div>


@ -1,8 +1,123 @@
Changelog - 2014
================
18.2 / unreleased
19.0 / 2014-06-12
-----------------
- new: logging to syslog now includes the access log.
Gunicorn 19.0 is a major release with new features and fixes. This
version greatly improves the usage of Gunicorn with Python 3 by adding two
new workers: `gthread`, a fully threaded async worker using futures,
and `gaiohttp`, a worker using asyncio.
Breaking Changes
~~~~~~~~~~~~~~~~
Switch QUIT and TERM signals
++++++++++++++++++++++++++++
With this change, when gunicorn receives a QUIT all the workers are
killed immediately and exit; TERM is now used for the graceful shutdown.
Note: the old behaviour was modeled on NGINX, but the new one is more
correct according to the following doc:
https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html
It also complies with the way signals are sent by Heroku:
https://devcenter.heroku.com/articles/python-faq#what-constraints-exist-when-developing-applications-on-heroku
Deprecations
+++++++++++++
`run_gunicorn`, `gunicorn_django` and `gunicorn_paster` are now
completely deprecated and will be removed in the next release. Use the
`gunicorn` command instead.
Changes:
~~~~~~~~
core
++++
- add aiohttp worker named `gaiohttp` using asyncio. Full async worker
on python 3.
- fix HTTP-violating excess whitespace in write_error output
- fix: try to log what happened in the worker after a timeout, add a
`worker_abort` hook on SIGABRT signal.
- fix: save listener socket name in workers so we can handle buffered
keep-alive requests after the listener has closed.
- add on_exit hook called just before exiting gunicorn.
- add support for python 3.4
- fix: do not swallow unexpected errors when reaping
- fix: remove incompatible SSL option with python 2.6
- add new async gthread worker; the `--threads` option allows setting multiple
threads to listen for connections
- deprecate `gunicorn_django` and `gunicorn_paster`
- switch QUIT and TERM signal
- reap workers in SIGCHLD handler
- add universal wheel support
- use `email.utils.formatdate` in gunicorn.util.http_date
- deprecate the `--debug` option
- fix: log exceptions that occur after response start …
- allows loading of applications from `.pyc` files (#693)
- fix: issue #691, raw_env config file parsing
- use a dynamic timeout to wait for the optimal time. (Reduce power
usage)
- fix python3 support when notifying the arbiter
- add: honor $WEB_CONCURRENCY environment variable. Useful for heroku
setups.
- add: include tz offset in access log
- add: include access logs in the syslog handler.
- add --reload option for code reloading
- add the capability to load `gunicorn.base.Application` without loading the command-line arguments. This allows you to [embed gunicorn in your own application](http://docs.gunicorn.org/en/latest/custom.html).
- improve: set wsgi.multithread to True for async workers
- fix logging: make sure to redirect wsgi.errors when needed
- add: syslog logging can now be done to a unix socket
- fix logging: don't try to redirect stdout/stderr to the logfile.
- fix logging: don't propagate log
- improve logging: file option can be overridden by the gunicorn options `--error-logfile` and `--access-logfile` if they are given.
- fix: don't override SERVER_* by the Host header
- fix: handle_error
- add more options to configure SSL
- fix: sendfile with SSL
- add: worker_int callback (to react on SIGTERM)
- fix: don't depend on entry point for internal classes, now absolute
modules path can be given.
- fix: Error messages are now encoded in latin1
- fix: request line length check
- improvement: proxy_allow_ips: Allow proxy protocol if "*" specified
- fix: run worker's `setup` method before setting num_workers
- fix: FileWrapper inherit from `object` now
- fix: don't spam the console on SIGWINCH.
- fix logging: don't stringify T and D logging atoms (#621)
- add support for the latest django version
- deprecate `run_gunicorn` django option
- fix: sys imported twice
gevent worker
+++++++++++++
- fix: make sure to stop all listeners
- fix: monkey patching is now done in the worker
- fix: "global name 'hub' is not defined"
- fix: reinit `hub` on old versions of gevent
- support gevent 1.0
- fix: add subprocess in monkey patching
- fix: add support for multiple listeners
eventlet worker
+++++++++++++++
- fix: merge duplicate EventletWorker.init_process method (fixes #657)
- fix: missing errno import for eventlet sendfile patch
- fix: add support for multiple listeners
tornado worker
++++++++++++++
- add graceful stop support


@ -10,52 +10,7 @@ Sometimes, you want to integrate Gunicorn with your WSGI application. In this
case, you can inherit from :class:`gunicorn.app.base.BaseApplication`.
Here is a small example where we create a very small WSGI app and load it with a
custom Application::
custom Application:
#!/usr/bin/env python
import gunicorn.app.base
def handler_app(environ, start_response):
response_body = 'Works fine'
status = '200 OK'
response_headers = [
('Content-Type', 'text/plain'),
('Content-Length', str(len(response_body)))
]
start_response(status, response_headers)
return [response_body]
class StandaloneApplication(gunicorn.app.base.BaseApplication):
def __init__(self, app, options=None):
self.options = dict(options or {})
self.application = app
super(StandaloneApplication, self).__init__()
def load_config(self):
tmp_config = map(
lambda item: (item[0].lower(), item[1]),
self.options.iteritems()
)
config = dict(
(key, value)
for key, value in tmp_config
if key in self.cfg.settings and value is not None
)
for key, value in config.iteritems():
self.cfg.set(key.lower(), value)
def load(self):
return self.application
if __name__ == '__main__':
options = {
'bind': '%s:%s' % ('127.0.0.1', '8080'),
'workers': 4,
# 'pidfile': pidfile,
}
StandaloneApplication(handler_app, options).run()
.. literalinclude:: ../../examples/standalone_app.py
:lines: 11-60


@ -262,7 +262,7 @@ systemd:
WorkingDirectory=/home/urban/gunicorn/bin
ExecStart=/home/someuser/gunicorn/bin/gunicorn -p /home/urban/gunicorn/gunicorn.pid- test:app
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID
PrivateTmp=true
**gunicorn.socket**::


@ -49,6 +49,19 @@ There's also a Tornado worker class. It can be used to write applications using
the Tornado framework. Although the Tornado workers are capable of serving a
WSGI application, this is not a recommended configuration.
AsyncIO Workers
---------------
These workers are compatible with Python 3. There are two kinds of workers.
The worker `gthread` is a threaded worker. It accepts connections in the
main loop; accepted connections are added to the thread pool as a
connection job. Keepalive connections are put back in the loop,
waiting for an event. If no event happens before the keepalive timeout,
the connection is closed.
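The accept-and-dispatch pattern described above can be sketched as follows. This is only an illustration of the design (the `serve` function and its `max_jobs` cutoff are invented for the sketch), not gunicorn's actual gthread worker code:

```python
# Sketch of the gthread design: accept connections in the main loop and
# hand each accepted connection to a thread pool as a job. `max_jobs` is
# only here so the sketch can terminate; a real worker loops until told
# to stop.
from concurrent.futures import ThreadPoolExecutor

def serve(listener, handle, threads=2, max_jobs=None):
    pool = ThreadPoolExecutor(max_workers=threads)
    served = 0
    while max_jobs is None or served < max_jobs:
        conn, addr = listener.accept()   # main loop does the accept
        pool.submit(handle, conn, addr)  # connection becomes a pool job
        served += 1
    pool.shutdown(wait=True)             # drain in-flight jobs
```

The keepalive step (putting an idle connection back into the loop instead of closing it) is what the real worker adds on top of this basic loop.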
The worker `gaiohttp` is a full asyncio worker using aiohttp_.
Choosing a Worker Type
======================
@ -94,7 +107,24 @@ Always remember, there is such a thing as too many workers. After a point your
worker processes will start thrashing system resources, decreasing the throughput
of the entire system.
How Many Threads?
===================
Since Gunicorn 19, a threads option can be used to process requests in multiple
threads. Using threads assumes use of the sync worker. One benefit of threads
is that requests can take longer than the worker timeout while the worker keeps
notifying the master process that it is not frozen and should not be killed.
system, using multiple threads, multiple worker processes, or some mixture, may
yield the best results. For example, CPython may not perform as well as Jython
when using threads, as threading is implemented differently by each. Using
threads instead of processes is a good way to reduce the memory footprint of
Gunicorn, while still allowing for application upgrades using the reload signal,
as the application code will be shared among workers but loaded only in the
worker processes (unlike when using the preload setting, which loads the code in
the master process).
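As a rough sketch of the trade-off above: the `(2 * cores) + 1` figure is the rule of thumb used elsewhere in these docs, and dividing it across threads per worker is an assumption added here to illustrate mixing processes and threads.

```python
# Rule-of-thumb sketch: keep total concurrency near (2 * cores) + 1 and
# trade worker processes for threads per worker to lower memory use.
import multiprocessing

def suggested_workers(threads_per_worker=1):
    total = multiprocessing.cpu_count() * 2 + 1
    return max(1, total // threads_per_worker)
```

As the text notes, the best mixture depends on the system and the interpreter; treat this as a starting point to benchmark, not a formula.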
.. _Greenlets: https://github.com/python-greenlet/greenlet
.. _Eventlet: http://eventlet.net
.. _Gevent: http://gevent.org
.. _Slowloris: http://ha.ckers.org/slowloris/
.. _aiohttp: https://github.com/KeepSafe/aiohttp


@ -82,6 +82,16 @@ To decrease the worker count by one::
$ kill -TTOU $masterpid
Does Gunicorn suffer from the thundering herd problem?
------------------------------------------------------
The thundering herd problem occurs when many sleeping request handlers, which
may be either threads or processes, wake up at the same time to handle a new
request. Since only one handler will receive the request, the others will have
been awakened for no reason, wasting CPU cycles. At this time, Gunicorn does not
implement any IPC solution for coordinating between worker processes. You may
experience high load due to this problem when using many workers or threads.
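The effect is easy to reproduce with a condition variable: `notify_all()` wakes every waiting handler, only one of them can take the job, and the rest have woken for nothing. This toy sketch (handler names and the job queue are invented) is not gunicorn code:

```python
# Toy thundering herd: several handlers sleep on one condition variable;
# a broadcast wakes them all, but only the first to grab the lock gets
# the job. The others find the queue empty and go back to waiting.
import threading
from collections import deque

jobs = deque()
handled = []
cond = threading.Condition()

def handler(name, stop):
    while not stop.is_set():
        with cond:
            cond.wait(timeout=0.5)
            if jobs:                      # only one waiter wins the job
                handled.append((name, jobs.popleft()))
```

With many workers or threads the wasted wakeups add up, which is the high load the paragraph above describes.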
.. _worker_class: configure.html#worker-class
.. _`number of workers`: design.html#how-many-workers


@ -1,12 +1,152 @@
Changelog
=========
18.2 / unreleased
19.1 / unreleased
Changes
~~~~~~~
Core
++++
- fix #785: handle binary type address given to a client socket address
- fix graceful shutdown: make sure QUIT and TERM signals are switched
everywhere.
Tornado worker
++++++++++++++
- fix #783: x_headers error. The x-forwarded-headers option has been removed
in `c4873681299212d6082cd9902740eef18c2f14f1
<https://github.com/benoitc/gunicorn/commit/c4873681299212d6082cd9902740eef18c2f14f1>`_. The discussion is
available on `#633 <https://github.com/benoitc/gunicorn/pull/633>`_.
19.0 / 2014-06-12
-----------------
- new: logging to syslog now includes the access log.
Gunicorn 19.0 is a major release with new features and fixes. This
version greatly improves the usage of Gunicorn with Python 3 by adding `two
new workers <http://docs.gunicorn.org/en/latest/design.html#asyncio-workers>`_: `gthread`, a fully threaded async worker using futures,
and `gaiohttp`, a worker using asyncio.
Breaking Changes
~~~~~~~~~~~~~~~~
Switch QUIT and TERM signals
++++++++++++++++++++++++++++
With this change, when gunicorn receives a QUIT all the workers are
killed immediately and exit; TERM is now used for the graceful shutdown.
Note: the old behaviour was modeled on NGINX, but the new one is more
correct according to the following doc:
https://www.gnu.org/software/libc/manual/html_node/Termination-Signals.html
It also complies with the way signals are sent by Heroku:
https://devcenter.heroku.com/articles/python-faq#what-constraints-exist-when-developing-applications-on-heroku
Deprecations
+++++++++++++
`run_gunicorn`, `gunicorn_django` and `gunicorn_paster` are now
completely deprecated and will be removed in the next release. Use the
`gunicorn` command instead.
Changes:
~~~~~~~~
core
++++
- add aiohttp worker named `gaiohttp` using asyncio. Full async worker
on python 3.
- fix HTTP-violating excess whitespace in write_error output
- fix: try to log what happened in the worker after a timeout, add a
`worker_abort` hook on SIGABRT signal.
- fix: save listener socket name in workers so we can handle buffered
keep-alive requests after the listener has closed.
- add on_exit hook called just before exiting gunicorn.
- add support for python 3.4
- fix: do not swallow unexpected errors when reaping
- fix: remove incompatible SSL option with python 2.6
- add new async gthread worker; the `--threads` option allows setting multiple
threads to listen for connections
- deprecate `gunicorn_django` and `gunicorn_paster`
- switch QUIT and TERM signal
- reap workers in SIGCHLD handler
- add universal wheel support
- use `email.utils.formatdate` in gunicorn.util.http_date
- deprecate the `--debug` option
- fix: log exceptions that occur after response start …
- allows loading of applications from `.pyc` files (#693)
- fix: issue #691, raw_env config file parsing
- use a dynamic timeout to wait for the optimal time. (Reduce power
usage)
- fix python3 support when notifying the arbiter
- add: honor $WEB_CONCURRENCY environment variable. Useful for heroku
setups.
- add: include tz offset in access log
- add: include access logs in the syslog handler.
- add --reload option for code reloading
- add the capability to load `gunicorn.base.Application` without loading
the command-line arguments. This allows you to :ref:`embed gunicorn in
your own application <custom>`.
- improve: set wsgi.multithread to True for async workers
- fix logging: make sure to redirect wsgi.errors when needed
- add: syslog logging can now be done to a unix socket
- fix logging: don't try to redirect stdout/stderr to the logfile.
- fix logging: don't propagate log
- improve logging: file option can be overridden by the gunicorn options
`--error-logfile` and `--access-logfile` if they are given.
- fix: don't override SERVER_* by the Host header
- fix: handle_error
- add more options to configure SSL
- fix: sendfile with SSL
- add: worker_int callback (to react on SIGTERM)
- fix: don't depend on entry point for internal classes, now absolute
modules path can be given.
- fix: Error messages are now encoded in latin1
- fix: request line length check
- improvement: proxy_allow_ips: Allow proxy protocol if "*" specified
- fix: run worker's `setup` method before setting num_workers
- fix: FileWrapper inherit from `object` now
- fix: don't spam the console on SIGWINCH.
- fix logging: don't stringify T and D logging atoms (#621)
- add support for the latest django version
- deprecate `run_gunicorn` django option
- fix: sys imported twice
gevent worker
+++++++++++++
- fix: make sure to stop all listeners
- fix: monkey patching is now done in the worker
- fix: "global name 'hub' is not defined"
- fix: reinit `hub` on old versions of gevent
- support gevent 1.0
- fix: add subprocess in monkey patching
- fix: add support for multiple listeners
eventlet worker
+++++++++++++++
- fix: merge duplicate EventletWorker.init_process method (fixes #657)
- fix: missing errno import for eventlet sendfile patch
- fix: add support for multiple listeners
tornado worker
++++++++++++++
- add graceful stop support
18.0 / 2013-08-26
-----------------
@ -226,6 +366,7 @@ History
.. toctree::
:titlesonly:
2014-news
2013-news
2012-news
2011-news


@ -1,3 +1,4 @@
.. _settings:
Settings
@ -103,6 +104,22 @@ alternative syntax will load the gevent class:
``gunicorn.workers.ggevent.GeventWorker``. Alternatively the syntax
can also load the gevent class with ``egg:gunicorn#gevent``
threads
~~~~~~~
* ``--threads INT``
* ``1``
The number of worker threads for handling requests.
Run each worker with the specified number of threads.
A positive integer generally in the 2-4 x $(NUM_CORES) range. You'll
want to vary this a bit to find the best for your particular
application's work load.
If it is not defined, the default is 1.
worker_connections
~~~~~~~~~~~~~~~~~~
@ -276,7 +293,7 @@ chdir
~~~~~
* ``--chdir``
* ``/Users/benoitc/work/gunicorn_env/src/gunicorn/docs``
* ``/home/benoitc/work/gunicorn/env_py3/src/gunicorn/docs``
Chdir to specified directory before apps loading.
@ -329,7 +346,7 @@ user
~~~~
* ``-u USER, --user USER``
* ``501``
* ``1000``
Switch worker processes to run as this user.
@ -341,7 +358,7 @@ group
~~~~~
* ``-g GROUP, --group GROUP``
* ``20``
* ``1000``
Switch worker process to run as this group.
@ -379,7 +396,7 @@ temporary directory.
secure_scheme_headers
~~~~~~~~~~~~~~~~~~~~~
* ``{'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}``
* ``{'X-FORWARDED-SSL': 'on', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-PROTOCOL': 'ssl'}``
A dictionary containing headers and values that the front-end proxy
uses to indicate HTTPS requests. These tell gunicorn to set
@ -504,7 +521,7 @@ syslog_addr
~~~~~~~~~~~
* ``--log-syslog-to SYSLOG_ADDR``
* ``unix:///var/run/syslog``
* ``udp://localhost:514``
Address to send syslog messages.
@ -710,7 +727,22 @@ worker_int
def worker_int(worker):
pass
Called just after a worker exited on SIGINT or SIGTERM.
Called just after a worker exited on SIGINT or SIGQUIT.
The callable needs to accept one instance variable for the initialized
Worker.
worker_abort
~~~~~~~~~~~~
* ::
def worker_abort(worker):
pass
Called when a worker received the SIGABRT signal.
This call generally happens on timeout.
The callable needs to accept one instance variable for the initialized
Worker.
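A config-file hook for this can look like the following sketch, modeled on the `worker_int` traceback example elsewhere in this diff. The stack dump is an assumption about what you would want to log on SIGABRT, not something the hook requires:

```python
# Hedged sketch of a worker_abort hook for a gunicorn config file: log the
# signal, then dump the current stack so you can see where the timed-out
# worker was stuck when SIGABRT arrived.
import sys
import traceback

def worker_abort(worker):
    worker.log.info("worker received SIGABRT signal")
    traceback.print_stack(sys._getframe(), file=sys.stderr)
```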
@ -782,6 +814,18 @@ two integers of number of workers after and before change.
If the number of workers is set for the first time, old_value would be
None.
on_exit
~~~~~~~
* ::
def on_exit(server):
pass
Called just before exiting gunicorn.
The callable needs to accept a single instance variable for the Arbiter.
Server Mechanics
----------------


@ -20,9 +20,9 @@ Master process
not preloaded (using the ``--preload`` option), Gunicorn will also load the
new version.
- **TTIN**: Increment the number of processes by one
- **TTOU**: Decrement the nunber of processes by one
- **TTOU**: Decrement the number of processes by one
- **USR1**: Reopen the log files
- **USR2**: Upgrade the Gunicorn on the fly. A separate **QUIT** signal should
- **USR2**: Upgrade the Gunicorn on the fly. A separate **TERM** signal should
be used to kill the old process. This signal can also be used to use the new
versions of pre-loaded applications.
- **WINCH**: Gracefully shutdown the worker processes when gunicorn is
@ -91,7 +91,7 @@ incoming requests together. To phase the old instance out, you have to
send **WINCH** signal to the old master process, and its worker
processes will start to gracefully shut down.
t this point you can still revert to the old server because it hasn't closed its listen sockets yet, by following these steps:
At this point you can still revert to the old server because it hasn't closed its listen sockets yet, by following these steps:
- Send HUP signal to the old master process - it will start the worker processes without reloading a configuration file
- Send TERM signal to the new master process to gracefully shut down its worker processes


@ -199,10 +199,10 @@ def pre_exec(server):
server.log.info("Forked child, re-executing.")
def when_ready(server):
server.log.info("Server is ready. Spwawning workers")
server.log.info("Server is ready. Spawning workers")
def worker_int(worker):
worker.log.info("worker received INT or TERM signal")
worker.log.info("worker received INT or QUIT signal")
## get traceback info
import threading, sys, traceback
@ -217,3 +217,6 @@ def worker_int(worker):
if line:
code.append(" %s" % (line.strip()))
worker.log.debug("\n".join(code))
def worker_abort(worker):
worker.log.info("worker received SIGABRT signal")


@ -8,21 +8,25 @@
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
import gunicorn
import gunicorn.app.base
from __future__ import unicode_literals
import multiprocessing
import gunicorn.app.base
from gunicorn.six import iteritems
def number_of_workers():
return (multiprocessing.cpu_count() * 2) + 1
def handler_app(environ, start_response):
response_body = 'Works fine'
response_body = b'Works fine'
status = '200 OK'
response_headers = [
('Content-Type', 'text/plain'),
('Content-Length', str(len(response_body))),
]
start_response(status, response_headers)
@ -31,24 +35,16 @@ def handler_app(environ, start_response):
class StandaloneApplication(gunicorn.app.base.BaseApplication):
def __init__(self, app, options=None):
self.options = dict(options or {})
self.options = options or {}
self.application = app
super(StandaloneApplication, self).__init__()
def load_config(self):
tmp_config = map(
lambda item: (item[0].lower(), item[1]),
self.options.iteritems()
)
config = dict(
(key, value)
for key, value in tmp_config
if key in self.cfg.settings and value is not None
)
for key, value in config.iteritems():
config = dict([(key, value) for key, value in iteritems(self.options)
if key in self.cfg.settings and value is not None])
for key, value in iteritems(config):
self.cfg.set(key.lower(), value)
def load(self):


@ -16,7 +16,7 @@ def app(environ, start_response):
"""Simplest possible application object"""
errors = environ['wsgi.errors']
pprint.pprint(('ENVIRON', environ), stream=errors)
# pprint.pprint(('ENVIRON', environ), stream=errors)
data = b'Hello, World!\n'
status = '200 OK'

examples/timeout.py Normal file

@ -0,0 +1,22 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
import sys
import time
def app(environ, start_response):
"""Application which pauses 35 seconds before responding. the worker
will timeout in default case."""
data = b'Hello, World!\n'
status = '200 OK'
response_headers = [
('Content-type','text/plain'),
('Content-Length', str(len(data))) ]
sys.stdout.write('request will timeout')
sys.stdout.flush()
time.sleep(35)
start_response(status, response_headers)
return iter([data])


@ -29,7 +29,7 @@ class MemoryWatch(threading.Thread):
if self.memory_usage(pid) > self.max_mem:
self.server.log.info("Pid %s killed (memory usage > %s)",
pid, self.max_mem)
self.server.kill_worker(pid, signal.SIGQUIT)
self.server.kill_worker(pid, signal.SIGTERM)
time.sleep(self.timeout)


@ -3,6 +3,6 @@
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
version_info = (19, 0, 0)
version_info = (19, 1, 0)
__version__ = ".".join([str(v) for v in version_info])
SERVER_SOFTWARE = "gunicorn/%s" % __version__


@ -131,8 +131,7 @@ class Arbiter(object):
listeners_str = ",".join([str(l) for l in self.LISTENERS])
self.log.debug("Arbiter booted")
self.log.info("Listening at: %s (%s)", listeners_str, self.pid)
self.log.info("Using worker: %s",
self.cfg.settings['worker_class'].get())
self.log.info("Using worker: %s", self.cfg.worker_class_str)
self.cfg.when_ready(self)
@ -230,7 +229,7 @@ class Arbiter(object):
raise StopIteration
def handle_quit(self):
"SIGTERM handling"
"SIGQUIT handling"
self.stop(False)
raise StopIteration
@ -274,7 +273,7 @@ class Arbiter(object):
if self.cfg.daemon:
self.log.info("graceful stop of workers")
self.num_workers = 0
self.kill_workers(signal.SIGQUIT)
self.kill_workers(signal.SIGTERM)
else:
self.log.debug("SIGWINCH ignored. Not daemonized")
@ -296,6 +295,7 @@ class Arbiter(object):
self.log.info("Reason: %s", reason)
if self.pidfile is not None:
self.pidfile.unlink()
self.cfg.on_exit(self)
sys.exit(exit_status)
def sleep(self):
@ -432,8 +432,12 @@ class Arbiter(object):
except ValueError:
continue
self.log.critical("WORKER TIMEOUT (pid:%s)", pid)
self.kill_worker(pid, signal.SIGKILL)
if not worker.aborted:
self.log.critical("WORKER TIMEOUT (pid:%s)", pid)
worker.aborted = True
self.kill_worker(pid, signal.SIGABRT)
else:
self.kill_worker(pid, signal.SIGKILL)
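The escalation logic this hunk introduces can be restated as a small sketch. The dict-based worker and string signal names are stand-ins for the arbiter's real types:

```python
# First timeout: mark the worker aborted and send SIGABRT so the
# worker_abort hook can log what happened. Second timeout: SIGKILL.
def escalate_timeout(worker, kill):
    if not worker.get("aborted"):
        worker["aborted"] = True
        kill("SIGABRT")
    else:
        kill("SIGKILL")
```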
def reap_workers(self):
"""\
@ -476,7 +480,7 @@ class Arbiter(object):
workers = sorted(workers, key=lambda w: w[1].age)
while len(workers) > self.num_workers:
(pid, _) = workers.pop(0)
self.kill_worker(pid, signal.SIGQUIT)
self.kill_worker(pid, signal.SIGTERM)
self.log.info("{0} workers".format(len(workers)),
extra={"metric": "gunicorn.workers",


@ -83,14 +83,34 @@ class Config(object):
return parser
@property
def worker_class_str(self):
uri = self.settings['worker_class'].get()
## are we using a threaded worker?
is_sync = uri.endswith('SyncWorker') or uri == 'sync'
if is_sync and self.threads > 1:
return "threads"
return uri
@property
def worker_class(self):
uri = self.settings['worker_class'].get()
## are we using a threaded worker?
is_sync = uri.endswith('SyncWorker') or uri == 'sync'
if is_sync and self.threads > 1:
uri = "gunicorn.workers.gthread.ThreadWorker"
worker_class = util.load_class(uri)
if hasattr(worker_class, "setup"):
worker_class.setup()
return worker_class
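The selection rule in `worker_class` above reduces to: a sync worker plus `--threads > 1` is promoted to the threaded worker. A standalone sketch of just that rule, without the `util.load_class` step (function name invented):

```python
# Mirror of the uri-selection logic in Config.worker_class: only the sync
# worker is silently swapped for the gthread worker when threads > 1.
def effective_worker_uri(uri, threads):
    is_sync = uri == "sync" or uri.endswith("SyncWorker")
    if is_sync and threads > 1:
        return "gunicorn.workers.gthread.ThreadWorker"
    return uri
```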
@property
def threads(self):
return self.settings['threads'].get()
@property
def workers(self):
return self.settings['workers'].get()
@ -547,6 +567,26 @@ class WorkerClass(Setting):
can also load the gevent class with ``egg:gunicorn#gevent``
"""
class WorkerThreads(Setting):
name = "threads"
section = "Worker Processes"
cli = ["--threads"]
meta = "INT"
validator = validate_pos_int
type = int
default = 1
desc = """\
The number of worker threads for handling requests.
Run each worker with the specified number of threads.
A positive integer generally in the 2-4 x $(NUM_CORES) range. You'll
want to vary this a bit to find the best for your particular
application's work load.
If it is not defined, the default is 1.
"""
class WorkerConnections(Setting):
name = "worker_connections"
@ -1338,7 +1378,27 @@ class WorkerInt(Setting):
default = staticmethod(worker_int)
desc = """\
Called just after a worker exited on SIGINT or SIGTERM.
Called just after a worker exited on SIGINT or SIGQUIT.
The callable needs to accept one instance variable for the initialized
Worker.
"""
class WorkerAbort(Setting):
name = "worker_abort"
section = "Server Hooks"
validator = validate_callable(1)
type = six.callable
def worker_abort(worker):
pass
default = staticmethod(worker_abort)
desc = """\
Called when a worker received the SIGABRT signal.
This call generally happens on timeout.
The callable needs to accept one instance variable for the initialized
Worker.
@ -1431,6 +1491,21 @@ class NumWorkersChanged(Setting):
None.
"""
class OnExit(Setting):
name = "on_exit"
section = "Server Hooks"
validator = validate_callable(1)
def on_exit(server):
pass
default = staticmethod(on_exit)
desc = """\
Called just before exiting gunicorn.
The callable needs to accept a single instance variable for the Arbiter.
"""
class ProxyProtocol(Setting):
name = "proxy_protocol"


@ -159,6 +159,8 @@ def create(req, sock, client, server, cfg):
# http://www.ietf.org/rfc/rfc3875
if isinstance(client, string_types):
environ['REMOTE_ADDR'] = client
elif isinstance(client, binary_type):
environ['REMOTE_ADDR'] = str(client)
else:
environ['REMOTE_ADDR'] = client[0]
environ['REMOTE_PORT'] = str(client[1])


@ -357,7 +357,11 @@ if PY3:
print_ = getattr(builtins, "print")
def execfile_(fname, *args):
return exec_(_get_codeobj(fname), *args)
if fname.endswith(".pyc"):
code = _get_codeobj(fname)
else:
code = compile(open(fname, 'rb').read(), fname, 'exec')
return exec_(code, *args)
del builtins
@ -382,7 +386,9 @@ else:
def execfile_(fname, *args):
""" Overriding PY2 execfile() implementation to support .pyc files """
return exec_(_get_codeobj(fname), *args)
if fname.endswith(".pyc"):
return exec_(_get_codeobj(fname), *args)
return execfile(fname, *args)
def print_(*args, **kwargs):


@ -269,7 +269,6 @@ def set_non_blocking(fd):
flags = fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK
fcntl.fcntl(fd, fcntl.F_SETFL, flags)
def close(sock):
try:
sock.close()
@ -338,8 +337,7 @@ def write_error(sock, status_int, reason, mesg):
Content-Type: text/html\r
Content-Length: %d\r
\r
%s
""") % (str(status_int), reason, len(html), html)
%s""") % (str(status_int), reason, len(html), html)
write_nonblock(sock, http.encode('latin1'))


@ -3,6 +3,8 @@
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
import sys
# supported gunicorn workers.
SUPPORTED_WORKERS={
"sync": "gunicorn.workers.sync.SyncWorker",
@ -12,3 +14,7 @@ SUPPORTED_WORKERS={
"gevent_pywsgi": "gunicorn.workers.ggevent.GeventPyWSGIWorker",
"tornado": "gunicorn.workers.gtornado.TornadoWorker"}
if sys.version_info >= (3, 3):
# gaiohttp worker can be used with Python 3.3+ only.
SUPPORTED_WORKERS["gaiohttp"] = "gunicorn.workers.gaiohttp.AiohttpWorker"


@ -32,9 +32,10 @@ class AsyncWorker(base.Worker):
try:
parser = http.RequestParser(self.cfg, client)
try:
listener_name = listener.getsockname()
if not self.cfg.keepalive:
req = six.next(parser)
self.handle_request(listener, req, client, addr)
self.handle_request(listener_name, req, client, addr)
else:
# keepalive loop
while True:
@ -43,7 +44,7 @@ class AsyncWorker(base.Worker):
req = six.next(parser)
if not req:
break
self.handle_request(listener, req, client, addr)
self.handle_request(listener_name, req, client, addr)
except http.errors.NoMoreData as e:
self.log.debug("Ignored premature client disconnection. %s", e)
except StopIteration as e:
@ -78,14 +79,14 @@ class AsyncWorker(base.Worker):
finally:
util.close(client)
def handle_request(self, listener, req, sock, addr):
def handle_request(self, listener_name, req, sock, addr):
request_start = datetime.now()
environ = {}
resp = None
try:
self.cfg.pre_request(self, req)
resp, environ = wsgi.create(req, sock, addr,
listener.getsockname(), self.cfg)
listener_name, self.cfg)
environ["wsgi.multithread"] = True
self.nr += 1
if self.alive and self.nr >= self.max_requests:
@ -113,6 +114,8 @@ class AsyncWorker(base.Worker):
respiter.close()
if resp.should_close():
raise StopIteration()
except StopIteration:
raise
except Exception:
if resp and resp.headers_sent:
# If the requests have already been sent, we should close the


@ -23,7 +23,7 @@ from gunicorn.six import MAXSIZE
class Worker(object):
SIGNALS = [getattr(signal, "SIG%s" % x) \
for x in "HUP QUIT INT TERM USR1 USR2 WINCH CHLD".split()]
for x in "ABRT HUP QUIT INT TERM USR1 USR2 WINCH CHLD".split()]
PIPE = []
@@ -40,6 +40,7 @@ class Worker(object):
self.timeout = timeout
self.cfg = cfg
self.booted = False
self.aborted = False
self.nr = 0
self.max_requests = cfg.max_requests or MAXSIZE
@@ -82,7 +83,7 @@ class Worker(object):
if self.cfg.reload:
def changed(fname):
self.log.info("Worker reloading: %s modified", fname)
os.kill(self.pid, signal.SIGTERM)
os.kill(self.pid, signal.SIGQUIT)
raise SystemExit()
Reloader(callback=changed).start()
@@ -127,10 +128,12 @@ class Worker(object):
signal.signal(signal.SIGINT, self.handle_quit)
signal.signal(signal.SIGWINCH, self.handle_winch)
signal.signal(signal.SIGUSR1, self.handle_usr1)
# Don't let SIGQUIT and SIGUSR1 disturb active requests
signal.signal(signal.SIGABRT, self.handle_abort)
# Don't let SIGTERM and SIGUSR1 disturb active requests
# by interrupting system calls
if hasattr(signal, 'siginterrupt'): # python >= 2.6
signal.siginterrupt(signal.SIGQUIT, False)
signal.siginterrupt(signal.SIGTERM, False)
signal.siginterrupt(signal.SIGUSR1, False)
def handle_usr1(self, sig, frame):
@@ -138,13 +141,18 @@ class Worker(object):
def handle_exit(self, sig, frame):
self.alive = False
# worker_int callback
self.cfg.worker_int(self)
def handle_quit(self, sig, frame):
self.alive = False
# worker_int callback
self.cfg.worker_int(self)
sys.exit(0)
def handle_abort(self, sig, frame):
self.alive = False
self.cfg.worker_abort(self)
sys.exit(1)
def handle_error(self, req, client, addr, exc):
request_start = datetime.now()
addr = addr or ('', -1) # unix socket case
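The `siginterrupt(..., False)` calls above exist so that a signal arriving during an active request does not abort an in-flight blocking system call with `EINTR`. A minimal POSIX-only sketch of the flag follows; note that since Python 3.5 (PEP 475) the interpreter retries interrupted calls itself, so the effect is mostly historical, and the helper thread and pipe here are purely illustrative:

```python
import os
import signal
import threading
import time

def handler(signum, frame):
    pass  # the handler does nothing; signal delivery is the point

signal.signal(signal.SIGUSR1, handler)
# ask the kernel to restart system calls interrupted by SIGUSR1,
# mirroring what the worker does for SIGTERM and SIGUSR1
signal.siginterrupt(signal.SIGUSR1, False)

r, w = os.pipe()
pid = os.getpid()

def poke():
    time.sleep(0.2)
    os.kill(pid, signal.SIGUSR1)  # arrives while os.read() is blocked
    time.sleep(0.2)
    os.write(w, b"x")             # then actually satisfy the read

threading.Thread(target=poke).start()
data = os.read(r, 1)  # survives the signal instead of failing with EINTR
print(data)
```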


@@ -0,0 +1,131 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
__all__ = ['AiohttpWorker']
import asyncio
import functools
import os
import gunicorn.workers.base as base
try:
from aiohttp.wsgi import WSGIServerHttpProtocol
except ImportError:
raise RuntimeError("You need aiohttp installed to use this worker.")
class AiohttpWorker(base.Worker):
def __init__(self, *args, **kw): # pragma: no cover
super().__init__(*args, **kw)
self.servers = []
self.connections = {}
def init_process(self):
# create new event_loop after fork
asyncio.get_event_loop().close()
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(self.loop)
super().init_process()
def run(self):
self._runner = asyncio.async(self._run(), loop=self.loop)
try:
self.loop.run_until_complete(self._runner)
finally:
self.loop.close()
def wrap_protocol(self, proto):
proto.connection_made = _wrp(
proto, proto.connection_made, self.connections)
proto.connection_lost = _wrp(
proto, proto.connection_lost, self.connections, False)
return proto
def factory(self, wsgi, host, port):
proto = WSGIServerHttpProtocol(
wsgi, loop=self.loop,
log=self.log,
debug=self.cfg.debug,
keep_alive=self.cfg.keepalive,
access_log=self.log.access_log,
access_log_format=self.cfg.access_log_format)
return self.wrap_protocol(proto)
def get_factory(self, sock, host, port):
return functools.partial(self.factory, self.wsgi, host, port)
@asyncio.coroutine
def close(self):
try:
if hasattr(self.wsgi, 'close'):
yield from self.wsgi.close()
except:
self.log.exception('Process shutdown exception')
@asyncio.coroutine
def _run(self):
for sock in self.sockets:
factory = self.get_factory(sock.sock, *sock.cfg_addr)
self.servers.append(
(yield from self.loop.create_server(factory, sock=sock.sock)))
# If our parent changed then we shut down.
pid = os.getpid()
try:
while self.alive or self.connections:
self.notify()
if (self.alive and
pid == os.getpid() and self.ppid != os.getppid()):
self.log.info("Parent changed, shutting down: %s", self)
self.alive = False
# stop accepting requests
if not self.alive:
if self.servers:
self.log.info(
"Stopping server: %s, connections: %s",
pid, len(self.connections))
for server in self.servers:
server.close()
self.servers.clear()
# prepare connections for closing
for conn in self.connections.values():
if hasattr(conn, 'closing'):
conn.closing()
yield from asyncio.sleep(1.0, loop=self.loop)
except KeyboardInterrupt:
pass
if self.servers:
for server in self.servers:
server.close()
yield from self.close()
class _wrp:
def __init__(self, proto, meth, tracking, add=True):
self._proto = proto
self._id = id(proto)
self._meth = meth
self._tracking = tracking
self._add = add
def __call__(self, *args):
if self._add:
self._tracking[self._id] = self._proto
elif self._id in self._tracking:
del self._tracking[self._id]
conn = self._meth(*args)
return conn

gunicorn/workers/gthread.py

@@ -0,0 +1,345 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
# design:
# A threaded worker accepts connections in the main loop. Accepted
# connections are added to the thread pool as a connection job. On
# keepalive, connections are put back in the loop waiting for an
# event. If no event happens after the keepalive timeout, the
# connection is closed.
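The design described above can be sketched with stdlib pieces only — a selector loop handing ready sockets to a thread pool, roughly the shape of `accept`/`handle_client` below. The names are illustrative, and a `socketpair` stands in for a listening socket and client:

```python
import selectors
import socket
from concurrent.futures import ThreadPoolExecutor

# Toy version of the loop: the main thread polls for readability and
# hands each ready socket to the pool, the way handle_client submits
# connection jobs to tpool.
sel = selectors.DefaultSelector()
pool = ThreadPoolExecutor(max_workers=2)
a, b = socket.socketpair()

def on_readable(sock):
    sel.unregister(sock)                 # stop polling while a thread owns it
    return pool.submit(lambda: sock.recv(16))

sel.register(b, selectors.EVENT_READ, on_readable)
a.sendall(b"ping")                       # makes b readable

for key, mask in sel.select(timeout=1.0):
    fut = key.data(key.fileobj)          # dispatch, like the worker's run()

result = fut.result()
print(result)
pool.shutdown()
a.close()
b.close()
sel.close()
```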
from collections import deque
from datetime import datetime
import errno
from functools import partial
import os
import operator
import socket
import ssl
import sys
import time
from .. import http
from ..http import wsgi
from .. import util
from . import base
from .. import six
try:
import concurrent.futures as futures
except ImportError:
raise RuntimeError("""
You need the 'futures' backport package installed to use this worker
with this Python version.
""")
try:
from asyncio import selectors
except ImportError:
try:
from trollius import selectors
except ImportError:
raise RuntimeError("""
You need 'trollius' installed to use this worker with this python
version.
""")
class TConn(object):
def __init__(self, cfg, listener, sock, addr):
self.cfg = cfg
self.listener = listener
self.sock = sock
self.addr = addr
self.timeout = None
self.parser = None
# set the socket to non blocking
self.sock.setblocking(False)
def init(self):
self.sock.setblocking(True)
if self.parser is None:
# wrap the socket if needed
if self.cfg.is_ssl:
self.sock = ssl.wrap_socket(self.sock, server_side=True,
**self.cfg.ssl_options)
# initialize the parser
self.parser = http.RequestParser(self.cfg, self.sock)
return True
return False
def set_timeout(self):
# set the timeout
self.timeout = time.time() + self.cfg.keepalive
def __lt__(self, other):
return self.timeout < other.timeout
__cmp__ = __lt__
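`__lt__` orders connections by their keepalive deadline, which lets expiring connections be examined oldest-first. A small sketch with a heap; the `FakeConn` name is illustrative, not part of the worker:

```python
import heapq
import time

class FakeConn:
    # stand-in for TConn: comparable by keepalive deadline
    def __init__(self, timeout):
        self.timeout = timeout

    def __lt__(self, other):
        return self.timeout < other.timeout

now = time.time()
heap = [FakeConn(now + 5), FakeConn(now + 1), FakeConn(now + 3)]
heapq.heapify(heap)

# the connection closest to its deadline surfaces first
order = [round(heapq.heappop(heap).timeout - now) for _ in range(3)]
print(order)
```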
class ThreadWorker(base.Worker):
def __init__(self, *args, **kwargs):
super(ThreadWorker, self).__init__(*args, **kwargs)
self.worker_connections = self.cfg.worker_connections
# initialise the pool
self.tpool = None
self.poller = None
self.futures = deque()
self._keep = deque()
def _wrap_future(self, fs, conn):
fs.conn = conn
self.futures.append(fs)
fs.add_done_callback(self.finish_request)
def init_process(self):
self.tpool = futures.ThreadPoolExecutor(max_workers=self.cfg.threads)
self.poller = selectors.DefaultSelector()
super(ThreadWorker, self).init_process()
def accept(self, listener):
try:
client, addr = listener.accept()
conn = TConn(self.cfg, listener, client, addr)
# wait for the read event to handle the connection
self.poller.register(client, selectors.EVENT_READ,
partial(self.handle_client, conn))
except socket.error as e:
if e.args[0] not in (errno.EAGAIN,
errno.ECONNABORTED, errno.EWOULDBLOCK):
raise
def handle_client(self, conn, client):
# unregister the client from the poller
self.poller.unregister(client)
# submit the connection to a worker
fs = self.tpool.submit(self.handle, conn)
self._wrap_future(fs, conn)
def murder_keepalived(self):
now = time.time()
while True:
try:
conn = self._keep.popleft()
except IndexError:
break
delta = conn.timeout - now
if delta > 0:
self._keep.appendleft(conn)
break
else:
# the popped connection has expired: remove its socket
# from the poller
self.poller.unregister(conn.sock)
# and close it
util.close(conn.sock)
def run(self):
# init listeners, add them to the event loop
for s in self.sockets:
s.setblocking(False)
self.poller.register(s, selectors.EVENT_READ, self.accept)
timeout = self.cfg.timeout or 0.5
while self.alive:
# If our parent changed then we shut down.
if self.ppid != os.getppid():
self.log.info("Parent changed, shutting down: %s", self)
return
# notify the arbiter we are alive
self.notify()
events = self.poller.select(0.2)
for key, mask in events:
callback = key.data
callback(key.fileobj)
# handle keepalive timeouts
self.murder_keepalived()
# if we have more connections than the max number of connections
# accepted on a worker, wait until some complete or exit.
if len(self.futures) >= self.worker_connections:
res = futures.wait(self.futures, timeout=timeout)
if not res:
self.log.info("max requests achieved")
break
# shutdown the pool
self.poller.close()
self.tpool.shutdown(False)
# wait for the workers
futures.wait(self.futures, timeout=self.cfg.graceful_timeout)
# if we still have futures running, try to cancel them
while True:
try:
fs = self.futures.popleft()
except IndexError:
break
sock = fs.conn.sock
# the future is not running, cancel it
if not fs.done() and not fs.running():
fs.cancel()
# make sure we close the sockets after the graceful timeout
util.close(sock)
def finish_request(self, fs):
try:
(keepalive, conn) = fs.result()
# if the connection should be kept alive, add it
# to the eventloop and record it
if keepalive:
# flag the socket as non-blocking
conn.sock.setblocking(False)
# register the connection
conn.set_timeout()
self._keep.append(conn)
# add the socket to the event loop
self.poller.register(conn.sock, selectors.EVENT_READ,
partial(self.handle_client, conn))
else:
util.close(conn.sock)
except:
# an exception happened, make sure to close the
# socket.
util.close(fs.conn.sock)
finally:
# remove the future from our list
try:
self.futures.remove(fs)
except ValueError:
pass
def handle(self, conn):
if not conn.init():
# connection kept alive
try:
self._keep.remove(conn)
except ValueError:
pass
keepalive = False
req = None
try:
req = six.next(conn.parser)
if not req:
return (False, conn)
# handle the request
keepalive = self.handle_request(req, conn)
if keepalive:
return (keepalive, conn)
except http.errors.NoMoreData as e:
self.log.debug("Ignored premature client disconnection. %s", e)
except StopIteration as e:
self.log.debug("Closing connection. %s", e)
except ssl.SSLError as e:
if e.args[0] == ssl.SSL_ERROR_EOF:
self.log.debug("ssl connection closed")
conn.sock.close()
else:
self.log.debug("Error processing SSL request.")
self.handle_error(req, conn.sock, conn.addr, e)
except socket.error as e:
if e.args[0] not in (errno.EPIPE, errno.ECONNRESET):
self.log.exception("Socket error processing request.")
else:
if e.args[0] == errno.ECONNRESET:
self.log.debug("Ignoring connection reset")
else:
self.log.debug("Ignoring connection epipe")
except Exception as e:
self.handle_error(req, conn.sock, conn.addr, e)
return (False, conn)
def handle_request(self, req, conn):
environ = {}
resp = None
try:
self.cfg.pre_request(self, req)
request_start = datetime.now()
resp, environ = wsgi.create(req, conn.sock, conn.addr,
conn.listener.getsockname(), self.cfg)
environ["wsgi.multithread"] = True
self.nr += 1
if self.alive and self.nr >= self.max_requests:
self.log.info("Autorestarting worker after current request.")
resp.force_close()
self.alive = False
if not self.cfg.keepalive:
resp.force_close()
respiter = self.wsgi(environ, resp.start_response)
try:
if isinstance(respiter, environ['wsgi.file_wrapper']):
resp.write_file(respiter)
else:
for item in respiter:
resp.write(item)
resp.close()
request_time = datetime.now() - request_start
self.log.access(resp, req, environ, request_time)
finally:
if hasattr(respiter, "close"):
respiter.close()
if resp.should_close():
self.log.debug("Closing connection.")
return False
except socket.error:
exc_info = sys.exc_info()
# pass to next try-except level
six.reraise(exc_info[0], exc_info[1], exc_info[2])
except Exception:
if resp and resp.headers_sent:
# If the requests have already been sent, we should close the
# connection to indicate the error.
self.log.exception("Error handling request")
try:
conn.sock.shutdown(socket.SHUT_RDWR)
conn.sock.close()
except socket.error:
pass
raise StopIteration()
raise
finally:
try:
self.cfg.post_request(self, req, environ, resp)
except Exception:
self.log.exception("Exception in post_request hook")
return True


@@ -93,7 +93,6 @@ class TornadoWorker(Worker):
server._sockets[s.fileno()] = s
server.no_keep_alive = self.cfg.keepalive <= 0
server.xheaders = bool(self.cfg.x_forwarded_for_header)
server.start(num_processes=1)
self.ioloop.start()


@@ -10,6 +10,7 @@ import sys
from gunicorn import __version__
CLASSIFIERS = [
'Development Status :: 4 - Beta',
'Environment :: Other Environment',
@@ -62,6 +63,8 @@ class PyTest(Command):
raise SystemExit(errno)
REQUIREMENTS = []
setup(
name = 'gunicorn',
version = __version__,
@@ -81,6 +84,8 @@ setup(
tests_require = tests_require,
cmdclass = {'test': PyTest},
install_requires = REQUIREMENTS,
entry_points="""
[console_scripts]

tests/test_009-gaiohttp.py

@@ -0,0 +1,173 @@
# -*- coding: utf-8 -
#
# This file is part of gunicorn released under the MIT license.
# See the NOTICE for more information.
import unittest
import pytest
aiohttp = pytest.importorskip("aiohttp")
from aiohttp.wsgi import WSGIServerHttpProtocol
import asyncio
from gunicorn.workers import gaiohttp
from gunicorn.config import Config
from unittest import mock
class WorkerTests(unittest.TestCase):
def setUp(self):
self.loop = asyncio.new_event_loop()
asyncio.set_event_loop(None)
self.worker = gaiohttp.AiohttpWorker('age',
'ppid',
'sockets',
'app',
'timeout',
Config(),
'log')
def tearDown(self):
self.loop.close()
@mock.patch('gunicorn.workers.gaiohttp.asyncio')
def test_init_process(self, m_asyncio):
try:
self.worker.init_process()
except TypeError:
# to mask incomplete initialization of AiohttpWorker instance:
# we pass invalid values for ctor args
pass
self.assertTrue(m_asyncio.get_event_loop.return_value.close.called)
self.assertTrue(m_asyncio.new_event_loop.called)
self.assertTrue(m_asyncio.set_event_loop.called)
@mock.patch('gunicorn.workers.gaiohttp.asyncio')
def test_run(self, m_asyncio):
self.worker.loop = mock.Mock()
self.worker.run()
self.assertTrue(m_asyncio.async.called)
self.assertTrue(self.worker.loop.run_until_complete.called)
self.assertTrue(self.worker.loop.close.called)
def test_factory(self):
self.worker.wsgi = mock.Mock()
self.worker.loop = mock.Mock()
self.worker.log = mock.Mock()
self.worker.cfg = mock.Mock()
f = self.worker.factory(
self.worker.wsgi, 'localhost', 8080)
self.assertIsInstance(f, WSGIServerHttpProtocol)
@mock.patch('gunicorn.workers.gaiohttp.asyncio')
def test__run(self, m_asyncio):
self.worker.ppid = 1
self.worker.alive = True
self.worker.servers = []
sock = mock.Mock()
sock.cfg_addr = ('localhost', 8080)
self.worker.sockets = [sock]
self.worker.wsgi = mock.Mock()
self.worker.log = mock.Mock()
self.worker.notify = mock.Mock()
loop = self.worker.loop = mock.Mock()
loop.create_server.return_value = asyncio.Future(loop=self.loop)
loop.create_server.return_value.set_result(sock)
self.loop.run_until_complete(self.worker._run())
self.assertTrue(self.worker.log.info.called)
self.assertTrue(self.worker.notify.called)
def test__run_connections(self):
conn = mock.Mock()
self.worker.ppid = 1
self.worker.alive = False
self.worker.servers = [mock.Mock()]
self.worker.connections = {1: conn}
self.worker.sockets = []
self.worker.wsgi = mock.Mock()
self.worker.log = mock.Mock()
self.worker.loop = self.loop
self.worker.loop.create_server = mock.Mock()
self.worker.notify = mock.Mock()
def _close_conns():
self.worker.connections = {}
self.loop.call_later(0.1, _close_conns)
self.loop.run_until_complete(self.worker._run())
self.assertTrue(self.worker.log.info.called)
self.assertTrue(self.worker.notify.called)
self.assertFalse(self.worker.servers)
self.assertTrue(conn.closing.called)
@mock.patch('gunicorn.workers.gaiohttp.os')
@mock.patch('gunicorn.workers.gaiohttp.asyncio.sleep')
def test__run_exc(self, m_sleep, m_os):
m_os.getpid.return_value = 1
m_os.getppid.return_value = 1
self.worker.servers = [mock.Mock()]
self.worker.ppid = 1
self.worker.alive = True
self.worker.sockets = []
self.worker.log = mock.Mock()
self.worker.loop = mock.Mock()
self.worker.notify = mock.Mock()
slp = asyncio.Future(loop=self.loop)
slp.set_exception(KeyboardInterrupt)
m_sleep.return_value = slp
self.loop.run_until_complete(self.worker._run())
self.assertTrue(m_sleep.called)
self.assertTrue(self.worker.servers[0].close.called)
def test_close_wsgi_app(self):
self.worker.ppid = 1
self.worker.alive = False
self.worker.servers = [mock.Mock()]
self.worker.connections = {}
self.worker.sockets = []
self.worker.log = mock.Mock()
self.worker.loop = self.loop
self.worker.loop.create_server = mock.Mock()
self.worker.notify = mock.Mock()
self.worker.wsgi = mock.Mock()
self.worker.wsgi.close.return_value = asyncio.Future(loop=self.loop)
self.worker.wsgi.close.return_value.set_result(1)
self.loop.run_until_complete(self.worker._run())
self.assertTrue(self.worker.wsgi.close.called)
self.worker.wsgi = mock.Mock()
self.worker.wsgi.close.return_value = asyncio.Future(loop=self.loop)
self.worker.wsgi.close.return_value.set_exception(ValueError())
self.loop.run_until_complete(self.worker._run())
self.assertTrue(self.worker.wsgi.close.called)
def test_wrp(self):
conn = object()
tracking = {}
meth = mock.Mock()
wrp = gaiohttp._wrp(conn, meth, tracking)
wrp()
self.assertIn(id(conn), tracking)
self.assertTrue(meth.called)
meth = mock.Mock()
wrp = gaiohttp._wrp(conn, meth, tracking, False)
wrp()
self.assertNotIn(id(conn), tracking)
self.assertTrue(meth.called)